modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-27 12:31:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 521 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-27 12:31:13) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
fedbor/13bLLAma2_completeV2_ep1_cut2048_augm | fedbor | 2023-10-09T18:11:17Z | 1 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us"] | null | 2023-10-09T18:11:15Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
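For readers who want to reload this adapter under the same quantization, the config above maps directly onto `transformers`' `BitsAndBytesConfig`. A minimal sketch, assuming only the repo ids from this card's metadata; this is illustrative, not the author's own script:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# reconstruct the 4-bit config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "fedbor/13bLLAma2_completeV2_ep1_cut2048_augm")
```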
|
pepoo20/lora_small_kaggle_1100 | pepoo20 | 2023-10-09T17:59:21Z | 1 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai/whisper-small", "base_model:adapter:openai/whisper-small", "region:us"] | null | 2023-10-09T17:59:20Z |
---
library_name: peft
base_model: openai/whisper-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Yntec/DreamFulV2 | Yntec | 2023-10-09T17:53:31Z | 353 | 2 | diffusers | ["diffusers", "safetensors", "Anime", "Portrait", "3D", "Hius", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-10-09T16:55:48Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Portrait
- 3D
- Hius
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# DreamFul V2
Original page: https://civitai.com/models/17754?modelVersionId=33141
Samples and prompt:


A very tennis girl, long wavy hair, blue detailed eyes, miniskirt, sky, mid - shot, short smile, cinematic wallpaper, highly detailed by stanley artgerm lau. ( ( ( full round face ) ) )
|
Mtc2/ppo-LunarLander-v2 | Mtc2 | 2023-10-09T17:51:41Z | 2 | 0 | transformers | ["transformers", "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "endpoints_compatible", "region:us"] | reinforcement-learning | 2023-06-11T19:35:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -168.20 +/- 102.01
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Mtc2/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
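As a sanity check (not part of the original card), the rollout sizes above are internally consistent:
```python
# derived from the hyperparameters listed above
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 4 * 128 = 512, matches 'batch_size'
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128, matches 'minibatch_size'
num_updates = 50_000 // batch_size              # total_timesteps // batch_size = 97 PPO updates
```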
|
StKirill/atari_dan | StKirill | 2023-10-09T17:50:21Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-10-09T17:50:00Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 135.50 +/- 122.59
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StKirill -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StKirill -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga StKirill
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jwhedbee/lora-trained-xl | jwhedbee | 2023-10-09T17:45:42Z | 2 | 1 | diffusers | ["diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2023-10-09T16:43:52Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jwhedbee/lora-trained-xl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
*(four example images omitted)*
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
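The card ships no inference snippet; below is a minimal sketch using `diffusers`' standard SDXL LoRA loading path. The instance prompt and the fp16-fix VAE come from this card, everything else is an assumption:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# the fp16-fix VAE mentioned above
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jwhedbee/lora-trained-xl")
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```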
|
endlex/Research | endlex | 2023-10-09T17:44:23Z | 0 | 0 | null | ["license:bigscience-openrail-m", "region:us"] | null | 2023-10-09T17:44:23Z |
---
license: bigscience-openrail-m
---
|
kporzycki/Reinforce-cart-pole | kporzycki | 2023-10-09T17:29:16Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-10-09T10:37:07Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cart-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
m-aliabbas1/distilt_bert_29_mva_intents | m-aliabbas1 | 2023-10-09T17:28:21Z | 105 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-10-09T17:11:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilt_bert_29_mva_intents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilt_bert_29_mva_intents
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2578
- Accuracy: 0.9561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
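These values map directly onto `transformers.TrainingArguments`; a hedged reconstruction (the output directory is a placeholder, and unlisted arguments keep their defaults):
```python
from transformers import TrainingArguments

# reconstruction of the hyperparameters listed above
args = TrainingArguments(
    output_dir="distilt_bert_29_mva_intents",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```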
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 406 | 0.3952 | 0.9094 |
| 0.785 | 2.0 | 812 | 0.2136 | 0.9357 |
| 0.1929 | 3.0 | 1218 | 0.2459 | 0.9415 |
| 0.1016 | 4.0 | 1624 | 0.2041 | 0.9591 |
| 0.0631 | 5.0 | 2030 | 0.2268 | 0.9561 |
| 0.0631 | 6.0 | 2436 | 0.2304 | 0.9561 |
| 0.0391 | 7.0 | 2842 | 0.2425 | 0.9503 |
| 0.0246 | 8.0 | 3248 | 0.2594 | 0.9474 |
| 0.0239 | 9.0 | 3654 | 0.2522 | 0.9532 |
| 0.0171 | 10.0 | 4060 | 0.2578 | 0.9561 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
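A minimal inference sketch with the `transformers` pipeline API (the example utterance is invented for illustration):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="m-aliabbas1/distilt_bert_29_mva_intents")
print(clf("I would like to reschedule my appointment"))  # hypothetical input
```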
|
naphatmanu/sdxl-lora-index-modern-1 | naphatmanu | 2023-10-09T17:24:04Z | 3 | 1 | diffusers | ["diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2023-10-09T14:39:35Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Index Modern
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - naphatmanu/sdxl-lora-index-modern-1
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on Index Modern using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
forestai/fireact_codellama_34b_lora | forestai | 2023-10-09T17:22:35Z | 5 | 1 | peft | ["peft", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "license:mit", "region:us"] | text-generation | 2023-10-05T20:22:12Z |
---
license: mit
library_name: peft
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
**Website**: [FireAct Agent](https://fireact-agent.github.io)
# **FireAct Llama-2/CodeLlama**
FireAct Llama/CodeLlama is a collection of fine-tuned generative text models for performing ReAct with external search tools. Links to other models can be found in the Index section.
## Foundation Model Details
*Note: As the foundation models, Llama-2 and CodeLlama, are developed by Meta, please also read the guidance and license on their website, [Llama-2](https://huggingface.co/meta-llama) and [CodeLlama](https://huggingface.co/codellama), before using FireAct models.*
**Model Developers** System 2 Research, Cambridge LTL, Monash University, Princeton PLI.
**Variations** FireAct models include a Llama-2-7B full fine-tuned model, plus Llama-2-[7B,13B] and CodeLlama-[7B,13B,34B] LoRA fine-tuned models. All released models are fine-tuned on multi-task (HotpotQA/StrategyQA/MMLU) and multi-type (ReAct/CoT/Reflexion) data.
**Input** Models input text only.
**Output** Models generate text only.
## Index
**Full Fine-tuned Model**
FireAct Llama-2:
- [fireact_llama_2_7b](https://huggingface.co/forestai/fireact_llama_2_7b)
**LoRA Fine-tuned Model**
FireAct Llama-2:
- [fireact_llama_2_7b_lora](https://huggingface.co/forestai/fireact_llama_2_7b_lora)
- [fireact_llama_2_13b_lora](https://huggingface.co/forestai/fireact_llama_2_13b_lora)
FireAct CodeLlama:
- [fireact_codellama_7b_lora](https://huggingface.co/forestai/fireact_codellama_7b_lora)
- [fireact_codellama_13b_lora](https://huggingface.co/forestai/fireact_codellama_13b_lora)
- [fireact_codellama_34b_lora](https://huggingface.co/forestai/fireact_codellama_34b_lora)
## LoRA Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
forestai/fireact_codellama_7b_lora | forestai | 2023-10-09T17:22:12Z | 3 | 0 | peft | ["peft", "tensorboard", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "license:mit", "region:us"] | text-generation | 2023-10-05T20:21:34Z |
---
license: mit
library_name: peft
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
**Website**: [FireAct Agent](https://fireact-agent.github.io)
# **FireAct Llama-2/CodeLlama**
FireAct Llama/CodeLlama is a collection of fine-tuned generative text models for performing ReAct with external search tools. Links to other models can be found in the Index section.
## Foundation Model Details
*Note: As the foundation models, Llama-2 and CodeLlama, are developed by Meta, please also read the guidance and license on their website, [Llama-2](https://huggingface.co/meta-llama) and [CodeLlama](https://huggingface.co/codellama), before using FireAct models.*
**Model Developers** System 2 Research, Cambridge LTL, Monash University, Princeton PLI.
**Variations** FireAct models include a Llama-2-7B full fine-tuned model, plus Llama-2-[7B,13B] and CodeLlama-[7B,13B,34B] LoRA fine-tuned models. All released models are fine-tuned on multi-task (HotpotQA/StrategyQA/MMLU) and multi-type (ReAct/CoT/Reflexion) data.
**Input** Models input text only.
**Output** Models generate text only.
## Index
**Full Fine-tuned Model**
FireAct Llama-2:
- [fireact_llama_2_7b](https://huggingface.co/forestai/fireact_llama_2_7b)
**LoRA Fine-tuned Model**
FireAct Llama-2:
- [fireact_llama_2_7b_lora](https://huggingface.co/forestai/fireact_llama_2_7b_lora)
- [fireact_llama_2_13b_lora](https://huggingface.co/forestai/fireact_llama_2_13b_lora)
FireAct CodeLlama:
- [fireact_codellama_7b_lora](https://huggingface.co/forestai/fireact_codellama_7b_lora)
- [fireact_codellama_13b_lora](https://huggingface.co/forestai/fireact_codellama_13b_lora)
- [fireact_codellama_34b_lora](https://huggingface.co/forestai/fireact_codellama_34b_lora)
## LoRA Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
forestai/fireact_llama_2_7b_lora | forestai | 2023-10-09T17:21:51Z | 3 | 1 | peft | ["peft", "tensorboard", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "license:mit", "region:us"] | text-generation | 2023-10-05T20:19:21Z |
---
license: mit
library_name: peft
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
**Website**: [FireAct Agent](https://fireact-agent.github.io)
# **FireAct Llama-2/CodeLlama**
FireAct Llama/CodeLlama is a collection of fine-tuned generative text models for performing ReAct with external search tools. Links to other models can be found in the Index section.
## Foundation Model Details
*Note: As the foundation models, Llama-2 and CodeLlama, are developed by Meta, please also read the guidance and license on their website, [Llama-2](https://huggingface.co/meta-llama) and [CodeLlama](https://huggingface.co/codellama), before using FireAct models.*
**Model Developers** System 2 Research, Cambridge LTL, Monash University, Princeton PLI.
**Variations** FireAct models include a Llama-2-7B full fine-tuned model, plus Llama-2-[7B,13B] and CodeLlama-[7B,13B,34B] LoRA fine-tuned models. All released models are fine-tuned on multi-task (HotpotQA/StrategyQA/MMLU) and multi-type (ReAct/CoT/Reflexion) data.
**Input** Models input text only.
**Output** Models generate text only.
## Index
**Full Fine-tuned Model**
FireAct Llama-2:
- [fireact_llama_2_7b](https://huggingface.co/forestai/fireact_llama_2_7b)
**LoRA Fine-tuned Model**
FireAct Llama-2:
- [fireact_llama_2_7b_lora](https://huggingface.co/forestai/fireact_llama_2_7b_lora)
- [fireact_llama_2_13b_lora](https://huggingface.co/forestai/fireact_llama_2_13b_lora)
FireAct CodeLlama:
- [fireact_codellama_7b_lora](https://huggingface.co/forestai/fireact_codellama_7b_lora)
- [fireact_codellama_13b_lora](https://huggingface.co/forestai/fireact_codellama_13b_lora)
- [fireact_codellama_34b_lora](https://huggingface.co/forestai/fireact_codellama_34b_lora)
## LoRA Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
Undi95/Mistral-11B-CC-Air-GGUF | Undi95 | 2023-10-09T17:16:27Z | 18 | 0 | null | ["gguf", "mistral", "pretrained", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2023-10-09T17:04:32Z |
---
license: apache-2.0
tags:
- mistral
- pretrained
---
CollectiveCognition-v1.1-Mistral-7B and airoboros-mistral2.2-7b glued together: a passthrough layer merge that stacks 24 layers from each 32-layer Mistral model into a single 48-layer, roughly 11B-parameter model (see "The secret sauce" below).
<!-- description start -->
## Description
This repo contains quantized files of Mistral-11B-CC-Air.
<!-- description end -->
<!-- description start -->
## Model used
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b/)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca or default
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
```
USER: <prompt>
ASSISTANT:
```
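Neither template is wired into a runnable snippet here; below is a minimal sketch using `llama-cpp-python` with the Alpaca format above. The package choice, quant filename, and example instruction are assumptions, not from this repo:
```python
from llama_cpp import Llama

# model_path is a placeholder for whichever quant file you download from this repo
llm = Llama(model_path="Mistral-11B-CC-Air.q5_K_M.gguf", n_ctx=4096)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three uses for a paperclip.\n\n### Response:\n"
)
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```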
## The secret sauce
```yaml
slices:
- sources:
- model: teknium/CollectiveCognition-v1.1-Mistral-7B
layer_range: [0, 24]
- sources:
- model: teknium/airoboros-mistral2.2-7b
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
Special thanks to Sushi.
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
ahaedike/a2c-PandaReachDense-v3 | ahaedike | 2023-10-09T17:15:40Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-10-09T17:10:12Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch completing the card's TODO stub (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# filename assumed to follow the usual huggingface_sb3 convention
checkpoint = load_from_hub("ahaedike/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
pepoo20/lora_small | pepoo20 | 2023-10-09T17:13:08Z | 1 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai/whisper-small", "base_model:adapter:openai/whisper-small", "region:us"] | null | 2023-10-09T17:13:04Z |
---
library_name: peft
base_model: openai/whisper-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
LilyNgo/lora_Galaxiga-trained-xl-colab | LilyNgo | 2023-10-09T17:06:57Z | 3 | 1 | diffusers | ["diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2023-10-09T10:18:39Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Galaxiga battle aircraft
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LilyNgo/lora_Galaxiga-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of Galaxiga battle aircraft using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
*(four example images omitted)*
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Ilay777123/Politrees_RVC_V2_Huggingface_Version | Ilay777123 | 2023-10-09T16:55:08Z | 0 | 0 | null | ["region:us"] | null | 2023-10-09T16:53:51Z |
<script
type="module"
src="https://gradio.s3-us-west-2.amazonaws.com/3.42.0/gradio.js"
></script>
<gradio-app src="https://politrees-rvc-v2-huggingface-version.hf.space"></gradio-app>
<iframe
src="https://politrees-rvc-v2-huggingface-version.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
https://politrees-rvc-v2-huggingface-version.hf.space
|
sharkMeow/bert-base-chinese-finetuned-swag | sharkMeow | 2023-10-09T16:51:09Z | 3 | 0 | transformers | ["transformers", "pytorch", "bert", "multiple-choice", "generated_from_trainer", "base_model:ckiplab/bert-base-chinese", "base_model:finetune:ckiplab/bert-base-chinese", "license:gpl-3.0", "endpoints_compatible", "region:us"] | multiple-choice | 2023-10-09T14:28:36Z |
---
license: gpl-3.0
base_model: ckiplab/bert-base-chinese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-swag
This model is a fine-tuned version of [ckiplab/bert-base-chinese](https://huggingface.co/ckiplab/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2704
- Accuracy: 0.9525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2559 | 1.0 | 10857 | 0.2704 | 0.9525 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
GGital/Fine-Tune_Eng-01 | GGital | 2023-10-09T16:25:47Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:hpcai-tech/Colossal-LLaMA-2-7b-base", "base_model:adapter:hpcai-tech/Colossal-LLaMA-2-7b-base", "region:us"] | null | 2023-10-09T16:19:36Z |
---
library_name: peft
base_model: hpcai-tech/Colossal-LLaMA-2-7b-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Frank0930/swin-tiny-patch4-window7-224-finetuned-eurosat | Frank0930 | 2023-10-09T16:15:55Z | 218 | 0 | transformers | ["transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:cifar10", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-10-09T15:18:19Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9726
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0864
- Accuracy: 0.9726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
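As a quick consistency check (not part of the original card), the effective batch size implied above matches the step counts in the results table below:
```python
train_batch_size, grad_accum_steps = 32, 4
total_train_batch_size = train_batch_size * grad_accum_steps  # 32 * 4 = 128, matches above
steps_per_epoch = 351                                         # from the results table below
train_examples = steps_per_epoch * total_train_batch_size     # ~44_928, i.e. ~90% of cifar10's 50k train split
```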
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4809 | 1.0 | 351 | 0.1388 | 0.9528 |
| 0.3489 | 2.0 | 703 | 0.0945 | 0.9692 |
| 0.3528 | 2.99 | 1053 | 0.0864 | 0.9726 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Varosa/whisper-onnx | Varosa | 2023-10-09T16:14:46Z | 0 | 0 | transformers.js | ["transformers.js", "onnx", "region:us"] | null | 2023-10-09T08:31:18Z |
---
library_name: "transformers.js"
---
https://huggingface.co/openai/whisper-medium with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
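A minimal sketch of that Optimum conversion (the class name and `export=True` flag follow current Optimum docs; treat this as an assumption, not this repo's exact build script):
```python
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

# export=True converts the PyTorch checkpoint to ONNX on the fly
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-medium", export=True)
model.save_pretrained("whisper-medium-onnx")  # ONNX weights land in this folder
```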
|
xiaol/RWKV-claude-for-mobile-v4-world-1.5B-16k | xiaol | 2023-10-09T16:06:35Z | 0 | 13 | null | ["dataset:PygmalionAI/PIPPA", "dataset:larryvrh/PIPPA-TavernFormat", "dataset:Norquinal/claude_multiround_chat_30k", "dataset:Norquinal/claude_evol_instruct_100k", "dataset:ehartford/dolphin", "license:apache-2.0", "region:us"] | null | 2023-09-07T01:26:56Z |
---
license: apache-2.0
datasets:
- PygmalionAI/PIPPA
- larryvrh/PIPPA-TavernFormat
- Norquinal/claude_multiround_chat_30k
- Norquinal/claude_evol_instruct_100k
- ehartford/dolphin
---
## RWKV 1.5B for Android and iOS
Focused on role play.
<strike>You should wait for the RWKV Runner update to test this model; it needs a customized vocab file to be selected.</strike>
RWKV Runner has since been updated to support selecting tokenizer files.
*(screenshot omitted)*
Use the following vocab file for inference:
https://huggingface.co/xiaol/RWKV-claude-for-mobile-v4-world-1.5B-16k/blob/main/rwkv_vocab_v20230424.txt
Use a preset to change the names to "user" and "assistant", since the model was fine-tuned with this start-prefix format.
Details in this image:
*(screenshot omitted)*
More details here: https://huggingface.co/xiaol/RWKV-paper-reviewer-and-reading-comprehension-wenda-Worldv4-7B-16k
## Or use it directly on your phone as your own assistant
https://github.com/daquexian/faster-rwkv
*(screenshots omitted)*
Suggested sampling settings: temperature 0.7, top-p 0.4.
*(screenshots omitted)*
## Training details
*(training chart omitted)*
|
m-aliabbas1/tiny_bert_29_mva_intents | m-aliabbas1 | 2023-10-09T15:58:28Z | 115 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:prajjwal1/bert-tiny", "base_model:finetune:prajjwal1/bert-tiny", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-10-09T15:52:25Z |
---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tiny_bert_29_mva_intents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_bert_29_mva_intents
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3517
- Accuracy: 0.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 406 | 2.9779 | 0.3421 |
| 3.1383 | 2.0 | 812 | 2.6427 | 0.4474 |
| 2.743 | 3.0 | 1218 | 2.3476 | 0.5439 |
| 2.3913 | 4.0 | 1624 | 2.0961 | 0.6082 |
| 2.0764 | 5.0 | 2030 | 1.8721 | 0.6550 |
| 2.0764 | 6.0 | 2436 | 1.6858 | 0.6696 |
| 1.8321 | 7.0 | 2842 | 1.5348 | 0.6959 |
| 1.6047 | 8.0 | 3248 | 1.3980 | 0.7281 |
| 1.439 | 9.0 | 3654 | 1.2846 | 0.7544 |
| 1.2773 | 10.0 | 4060 | 1.1894 | 0.7632 |
| 1.2773 | 11.0 | 4466 | 1.0994 | 0.7661 |
| 1.1596 | 12.0 | 4872 | 1.0221 | 0.7982 |
| 1.0563 | 13.0 | 5278 | 0.9486 | 0.8158 |
| 0.962 | 14.0 | 5684 | 0.8931 | 0.8275 |
| 0.8814 | 15.0 | 6090 | 0.8328 | 0.8363 |
| 0.8814 | 16.0 | 6496 | 0.7830 | 0.8392 |
| 0.8114 | 17.0 | 6902 | 0.7394 | 0.8596 |
| 0.7515 | 18.0 | 7308 | 0.6997 | 0.8684 |
| 0.6936 | 19.0 | 7714 | 0.6659 | 0.8860 |
| 0.6491 | 20.0 | 8120 | 0.6316 | 0.8801 |
| 0.614 | 21.0 | 8526 | 0.6014 | 0.8860 |
| 0.614 | 22.0 | 8932 | 0.5799 | 0.8947 |
| 0.5693 | 23.0 | 9338 | 0.5529 | 0.8918 |
| 0.5362 | 24.0 | 9744 | 0.5368 | 0.9035 |
| 0.5026 | 25.0 | 10150 | 0.5137 | 0.9006 |
| 0.4733 | 26.0 | 10556 | 0.4944 | 0.9035 |
| 0.4733 | 27.0 | 10962 | 0.4807 | 0.9006 |
| 0.4582 | 28.0 | 11368 | 0.4646 | 0.9152 |
| 0.4276 | 29.0 | 11774 | 0.4536 | 0.9064 |
| 0.4047 | 30.0 | 12180 | 0.4377 | 0.9123 |
| 0.3941 | 31.0 | 12586 | 0.4286 | 0.9094 |
| 0.3941 | 32.0 | 12992 | 0.4188 | 0.9123 |
| 0.3741 | 33.0 | 13398 | 0.4076 | 0.9123 |
| 0.3613 | 34.0 | 13804 | 0.4024 | 0.9094 |
| 0.3483 | 35.0 | 14210 | 0.3961 | 0.9094 |
| 0.3437 | 36.0 | 14616 | 0.3918 | 0.9094 |
| 0.3266 | 37.0 | 15022 | 0.3830 | 0.9094 |
| 0.3266 | 38.0 | 15428 | 0.3770 | 0.9152 |
| 0.3216 | 39.0 | 15834 | 0.3746 | 0.9123 |
| 0.3042 | 40.0 | 16240 | 0.3704 | 0.9211 |
| 0.3052 | 41.0 | 16646 | 0.3660 | 0.9211 |
| 0.2962 | 42.0 | 17052 | 0.3655 | 0.9181 |
| 0.2962 | 43.0 | 17458 | 0.3604 | 0.9123 |
| 0.2912 | 44.0 | 17864 | 0.3584 | 0.9152 |
| 0.2909 | 45.0 | 18270 | 0.3557 | 0.9152 |
| 0.2854 | 46.0 | 18676 | 0.3546 | 0.9181 |
| 0.2829 | 47.0 | 19082 | 0.3548 | 0.9152 |
| 0.2829 | 48.0 | 19488 | 0.3530 | 0.9181 |
| 0.278 | 49.0 | 19894 | 0.3519 | 0.9211 |
| 0.2827 | 50.0 | 20300 | 0.3517 | 0.9211 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
iandennismiller/LLama-2-MedText-13b-GGUF | iandennismiller | 2023-10-09T15:44:30Z | 136 | 4 | transformers | ["transformers", "gguf", "llama", "text-generation", "medical", "license:cc-by-nc-4.0", "autotrain_compatible", "region:us"] | text-generation | 2023-09-16T21:28:19Z |
---
base_model: https://huggingface.co/truehealth/LLama-2-MedText-13b
inference: false
license: cc-by-nc-4.0
model_creator: truehealth
model_name: LLama 2 MedText 13b
model_type: llama
prompt_template: '[INST]{prompt}[\INST]'
quantized_by: iandennismiller
pipeline_tag: text-generation
tags:
- medical
---
# LLama-2-MedText-13b-GGUF
Quantized GGUF of https://huggingface.co/truehealth/LLama-2-MedText-13b
## Usage
Interactive [llama.cpp](https://github.com/ggerganov/llama.cpp/) session:
```bash
llama-cpp \
--instruct \
--color \
--in-prefix "[INST] " \
--in-suffix "[\INST] " \
--model LLama-2-MedText-13b-q8_0.gguf
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
> [INST] How confident are you in your knowledge and abilities?
[\INST] [RSP] As an AI language model, I can provide information to the best of my ability based on the resources I was trained on, which were primarily before <DATE>. While I strive to provide useful and accurate responses, my knowledge is not infinite, and I might not be able to provide professional medical advice or predictions in all cases. Additionally, healthcare decisions should always be evaluated in the context of an individual's unique circumstances and should be evaluated by a healthcare professional.
```
## Model card from truehealth/Llama-2-MedText-Delta-Preview
Trained on https://huggingface.co/datasets/BI55/MedText.
These are PEFT delta weights and need to be merged into LLama-2-13b to be used for inference.
library_name: peft
**Training procedure**

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
**Framework versions**
- PEFT 0.5.0.dev0
## Setup Notes
### Download torch model
This example demonstrates using `hfdownloader` to download a torch model from HF to `./storage`
```bash
./hfdownloader -m truehealth/LLama-2-MedText-13b
```
If necessary, install `hfdownloader` from https://github.com/bodaay/HuggingFaceModelDownloader
```bash
bash <(curl -sSL https://raw.githubusercontent.com/bodaay/HuggingFaceModelDownloader/master/scripts/gist_gethfd.sh) -h
```
### Quantize torch model with llama.cpp
To quantize directly to q8_0:
```bash
llama.cpp/convert.py --outtype q8_0 --outfile LLama-2-MedText-13b-q8_0.gguf ./models/Storage/truehealth_LLama-2-MedText-13b/pytorch_model-00001-of-00003.bin
```
Alternatively, first convert to an f32 GGUF:
```bash
llama.cpp/convert.py --outtype f32 --outfile LLama-2-MedText-13b-f32.gguf ./models/Storage/truehealth_LLama-2-MedText-13b/pytorch_model-00001-of-00003.bin
```
Then quantize the f32 GGUF to lower-bit resolutions:
```bash
llama.cpp/build/bin/quantize LLama-2-MedText-13b-f32.gguf LLama-2-MedText-13b-Q3_K_L.gguf Q3_K_L
llama.cpp/build/bin/quantize LLama-2-MedText-13b-f32.gguf LLama-2-MedText-13b-Q6_K.gguf Q6_K
```
### Distributing the model through Hugging Face
```bash
mkvirtualenv -p `which python3.11` -a . ${PWD##*/}
python -m pip install huggingface_hub
huggingface-cli login
huggingface-cli lfs-enable-largefiles .
```
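From there, one way to push the quantized files is with `huggingface_hub` directly; a minimal sketch (the repo id and filename are taken from this card, everything else is assumed):
```python
from huggingface_hub import HfApi

api = HfApi()  # assumes `huggingface-cli login` has already run, as above
api.upload_file(
    path_or_fileobj="LLama-2-MedText-13b-q8_0.gguf",
    path_in_repo="LLama-2-MedText-13b-q8_0.gguf",
    repo_id="iandennismiller/LLama-2-MedText-13b-GGUF",
)
```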
|
lofcz/mistral-7b-dolphin-ff-cs2
|
lofcz
| 2023-10-09T15:40:59Z | 3 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:cognitivecomputations/dolphin-2.0-mistral-7b",
"base_model:adapter:cognitivecomputations/dolphin-2.0-mistral-7b",
"region:us"
] | null | 2023-10-09T15:40:52Z |
---
library_name: peft
base_model: ehartford/dolphin-2.0-mistral-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
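The card leaves this blank; as a stopgap, here is a minimal sketch based only on the adapter metadata above (`peft`, base model `ehartford/dolphin-2.0-mistral-7b`), not an author-provided recipe:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("ehartford/dolphin-2.0-mistral-7b")
model = PeftModel.from_pretrained(base, "lofcz/mistral-7b-dolphin-ff-cs2")
tokenizer = AutoTokenizer.from_pretrained("ehartford/dolphin-2.0-mistral-7b")
```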
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
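For reference, the same settings expressed as a `transformers` `BitsAndBytesConfig`; a sketch, not the original training script:
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 settings listed above; int8 fields are left at their defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```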
### Framework versions
- PEFT 0.6.0.dev0
|
Undi95/Mistral-11B-v0.1
|
Undi95
| 2023-10-09T15:37:10Z | 1,536 | 17 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-09T13:16:54Z |
---
license: apache-2.0
tags:
- mistral
- pretrained
---
This is Mistral, but in 11B.
I took the layers of the original Mistral-7B and duplicated some of them; this is the first frankenstein method I found "acceptable" for expanding Mistral.
It seems the first 8 layers of the model are very important: having duplicates of those layers in the model seems to confuse it.
UPDATE: Forced mergekit to output a bfloat16 file. It should be the same thing, but since the base model is bfloat16, I wanted it to stay bf16 like the OG model. Even though the config file said bfloat16 earlier, the weights were actually float16.
<!-- description start -->
## Description
This repo contains fp16 files of Mistral-11B-v0.1.
<!-- description end -->
<!-- description start -->
## Model used
- [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1/)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## The secret sauce
```
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 24]
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
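To reproduce the merge, a config like the one above can be fed straight to mergekit; a sketch, assuming the YAML is saved as `mistral-11b.yml`:
```bash
pip install mergekit
mergekit-yaml mistral-11b.yml ./Mistral-11B-v0.1
```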
Special thanks to Sushi.
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
thangvip/phi-1_5-finetuned-health
|
thangvip
| 2023-10-09T15:35:30Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
] | null | 2023-10-09T15:16:16Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-health
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-health
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
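Equivalently, as `transformers` `TrainingArguments`; a minimal sketch of the listed values, not the original script (`output_dir` is an assumption):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phi-1_5-finetuned-health",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=1000,
)
```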
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
fedbor/13bLLAma2_complete_ep5_cut2048_augm
|
fedbor
| 2023-10-09T15:19:16Z | 3 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-10-09T15:19:14Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
SadYui/Sysg
|
SadYui
| 2023-10-09T15:19:11Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-10-09T15:18:35Z |
---
license: bigcode-openrail-m
---
|
RogerB/afro-xlmr-large-kinre-finetuned-kin-sent3
|
RogerB
| 2023-10-09T15:18:14Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/afro-xlmr-large-kinre-finetuned",
"base_model:finetune:RogerB/afro-xlmr-large-kinre-finetuned",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T14:54:51Z |
---
license: mit
base_model: RogerB/afro-xlmr-large-kinre-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: afro-xlmr-large-kinre-finetuned-kin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinre-finetuned-kin-sent3
This model is a fine-tuned version of [RogerB/afro-xlmr-large-kinre-finetuned](https://huggingface.co/RogerB/afro-xlmr-large-kinre-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8196
- F1: 0.6813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9842 | 1.0 | 1013 | 0.7321 | 0.6975 |
| 0.7881 | 2.0 | 2026 | 0.6053 | 0.7562 |
| 0.6972 | 3.0 | 3039 | 0.5805 | 0.7782 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
vectoriseai/e5-large-v2
|
vectoriseai
| 2023-10-09T15:18:12Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-09T14:48:04Z |
---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-large-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.22388059701493
- type: ap
value: 43.20816505595132
- type: f1
value: 73.27811303522058
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.748325
- type: ap
value: 90.72534979701297
- type: f1
value: 93.73895874282185
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.612
- type: f1
value: 47.61157345898393
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.541999999999998
- type: map_at_10
value: 38.208
- type: map_at_100
value: 39.417
- type: map_at_1000
value: 39.428999999999995
- type: map_at_3
value: 33.95
- type: map_at_5
value: 36.329
- type: mrr_at_1
value: 23.755000000000003
- type: mrr_at_10
value: 38.288
- type: mrr_at_100
value: 39.511
- type: mrr_at_1000
value: 39.523
- type: mrr_at_3
value: 34.009
- type: mrr_at_5
value: 36.434
- type: ndcg_at_1
value: 23.541999999999998
- type: ndcg_at_10
value: 46.417
- type: ndcg_at_100
value: 51.812000000000005
- type: ndcg_at_1000
value: 52.137
- type: ndcg_at_3
value: 37.528
- type: ndcg_at_5
value: 41.81
- type: precision_at_1
value: 23.541999999999998
- type: precision_at_10
value: 7.269
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.979
- type: precision_at_5
value: 11.664
- type: recall_at_1
value: 23.541999999999998
- type: recall_at_10
value: 72.688
- type: recall_at_100
value: 96.871
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 47.937000000000005
- type: recall_at_5
value: 58.321
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.546499570522094
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.01607489943561
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.616107510107774
- type: mrr
value: 72.75106626214661
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.33018094733868
- type: cos_sim_spearman
value: 83.60190492611737
- type: euclidean_pearson
value: 82.1492450218961
- type: euclidean_spearman
value: 82.70308926526991
- type: manhattan_pearson
value: 81.93959600076842
- type: manhattan_spearman
value: 82.73260801016369
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.54545454545455
- type: f1
value: 84.49582530928923
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.362725540120096
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.849509608178145
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.502999999999997
- type: map_at_10
value: 43.323
- type: map_at_100
value: 44.708999999999996
- type: map_at_1000
value: 44.838
- type: map_at_3
value: 38.987
- type: map_at_5
value: 41.516999999999996
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 49.13
- type: mrr_at_100
value: 49.697
- type: mrr_at_1000
value: 49.741
- type: mrr_at_3
value: 45.804
- type: mrr_at_5
value: 47.842
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 50.266999999999996
- type: ndcg_at_100
value: 54.967
- type: ndcg_at_1000
value: 56.976000000000006
- type: ndcg_at_3
value: 43.823
- type: ndcg_at_5
value: 47.12
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 10.057
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.125
- type: precision_at_5
value: 15.851
- type: recall_at_1
value: 31.502999999999997
- type: recall_at_10
value: 63.715999999999994
- type: recall_at_100
value: 83.61800000000001
- type: recall_at_1000
value: 96.63199999999999
- type: recall_at_3
value: 45.403
- type: recall_at_5
value: 54.481
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.833000000000002
- type: map_at_10
value: 37.330999999999996
- type: map_at_100
value: 38.580999999999996
- type: map_at_1000
value: 38.708
- type: map_at_3
value: 34.713
- type: map_at_5
value: 36.104
- type: mrr_at_1
value: 35.223
- type: mrr_at_10
value: 43.419000000000004
- type: mrr_at_100
value: 44.198
- type: mrr_at_1000
value: 44.249
- type: mrr_at_3
value: 41.614000000000004
- type: mrr_at_5
value: 42.553000000000004
- type: ndcg_at_1
value: 35.223
- type: ndcg_at_10
value: 42.687999999999995
- type: ndcg_at_100
value: 47.447
- type: ndcg_at_1000
value: 49.701
- type: ndcg_at_3
value: 39.162
- type: ndcg_at_5
value: 40.557
- type: precision_at_1
value: 35.223
- type: precision_at_10
value: 7.962
- type: precision_at_100
value: 1.304
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.184999999999999
- type: recall_at_1
value: 27.833000000000002
- type: recall_at_10
value: 51.881
- type: recall_at_100
value: 72.04
- type: recall_at_1000
value: 86.644
- type: recall_at_3
value: 40.778
- type: recall_at_5
value: 45.176
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.175
- type: map_at_10
value: 51.174
- type: map_at_100
value: 52.26499999999999
- type: map_at_1000
value: 52.315999999999995
- type: map_at_3
value: 47.897
- type: map_at_5
value: 49.703
- type: mrr_at_1
value: 43.448
- type: mrr_at_10
value: 54.505
- type: mrr_at_100
value: 55.216
- type: mrr_at_1000
value: 55.242000000000004
- type: mrr_at_3
value: 51.98500000000001
- type: mrr_at_5
value: 53.434000000000005
- type: ndcg_at_1
value: 43.448
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.537
- type: ndcg_at_1000
value: 62.546
- type: ndcg_at_3
value: 51.73799999999999
- type: ndcg_at_5
value: 54.324
- type: precision_at_1
value: 43.448
- type: precision_at_10
value: 9.292
- type: precision_at_100
value: 1.233
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.218
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.175
- type: recall_at_10
value: 72.00999999999999
- type: recall_at_100
value: 90.155
- type: recall_at_1000
value: 97.257
- type: recall_at_3
value: 57.133
- type: recall_at_5
value: 63.424
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.405
- type: map_at_10
value: 30.043
- type: map_at_100
value: 31.191000000000003
- type: map_at_1000
value: 31.275
- type: map_at_3
value: 27.034000000000002
- type: map_at_5
value: 28.688000000000002
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.993
- type: mrr_at_100
value: 32.992
- type: mrr_at_1000
value: 33.050000000000004
- type: mrr_at_3
value: 28.964000000000002
- type: mrr_at_5
value: 30.653000000000002
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 40.709
- type: ndcg_at_1000
value: 42.855
- type: ndcg_at_3
value: 29.139
- type: ndcg_at_5
value: 32.045
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.65
- type: precision_at_100
value: 0.885
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 22.405
- type: recall_at_10
value: 49.391
- type: recall_at_100
value: 74.53699999999999
- type: recall_at_1000
value: 90.605
- type: recall_at_3
value: 33.126
- type: recall_at_5
value: 40.073
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.309999999999999
- type: map_at_10
value: 20.688000000000002
- type: map_at_100
value: 22.022
- type: map_at_1000
value: 22.152
- type: map_at_3
value: 17.954
- type: map_at_5
value: 19.439
- type: mrr_at_1
value: 16.294
- type: mrr_at_10
value: 24.479
- type: mrr_at_100
value: 25.515
- type: mrr_at_1000
value: 25.593
- type: mrr_at_3
value: 21.642
- type: mrr_at_5
value: 23.189999999999998
- type: ndcg_at_1
value: 16.294
- type: ndcg_at_10
value: 25.833000000000002
- type: ndcg_at_100
value: 32.074999999999996
- type: ndcg_at_1000
value: 35.083
- type: ndcg_at_3
value: 20.493
- type: ndcg_at_5
value: 22.949
- type: precision_at_1
value: 16.294
- type: precision_at_10
value: 5.112
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.587000000000001
- type: recall_at_1
value: 13.309999999999999
- type: recall_at_10
value: 37.851
- type: recall_at_100
value: 64.835
- type: recall_at_1000
value: 86.334
- type: recall_at_3
value: 23.493
- type: recall_at_5
value: 29.528
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.857999999999997
- type: map_at_10
value: 35.503
- type: map_at_100
value: 36.957
- type: map_at_1000
value: 37.065
- type: map_at_3
value: 32.275999999999996
- type: map_at_5
value: 34.119
- type: mrr_at_1
value: 31.954
- type: mrr_at_10
value: 40.851
- type: mrr_at_100
value: 41.863
- type: mrr_at_1000
value: 41.900999999999996
- type: mrr_at_3
value: 38.129999999999995
- type: mrr_at_5
value: 39.737
- type: ndcg_at_1
value: 31.954
- type: ndcg_at_10
value: 41.343999999999994
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 49.501
- type: ndcg_at_3
value: 36.047000000000004
- type: ndcg_at_5
value: 38.639
- type: precision_at_1
value: 31.954
- type: precision_at_10
value: 7.68
- type: precision_at_100
value: 1.247
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.589
- type: recall_at_1
value: 25.857999999999997
- type: recall_at_10
value: 53.43599999999999
- type: recall_at_100
value: 78.82400000000001
- type: recall_at_1000
value: 92.78999999999999
- type: recall_at_3
value: 38.655
- type: recall_at_5
value: 45.216
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.709
- type: map_at_10
value: 34.318
- type: map_at_100
value: 35.657
- type: map_at_1000
value: 35.783
- type: map_at_3
value: 31.326999999999998
- type: map_at_5
value: 33.021
- type: mrr_at_1
value: 30.137000000000004
- type: mrr_at_10
value: 39.093
- type: mrr_at_100
value: 39.992
- type: mrr_at_1000
value: 40.056999999999995
- type: mrr_at_3
value: 36.606
- type: mrr_at_5
value: 37.861
- type: ndcg_at_1
value: 30.137000000000004
- type: ndcg_at_10
value: 39.974
- type: ndcg_at_100
value: 45.647999999999996
- type: ndcg_at_1000
value: 48.259
- type: ndcg_at_3
value: 35.028
- type: ndcg_at_5
value: 37.175999999999995
- type: precision_at_1
value: 30.137000000000004
- type: precision_at_10
value: 7.363
- type: precision_at_100
value: 1.184
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 16.857
- type: precision_at_5
value: 11.963
- type: recall_at_1
value: 24.709
- type: recall_at_10
value: 52.087
- type: recall_at_100
value: 76.125
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 38.149
- type: recall_at_5
value: 43.984
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.40791666666667
- type: map_at_10
value: 32.458083333333335
- type: map_at_100
value: 33.691916666666664
- type: map_at_1000
value: 33.81191666666666
- type: map_at_3
value: 29.51625
- type: map_at_5
value: 31.168083333333335
- type: mrr_at_1
value: 27.96591666666666
- type: mrr_at_10
value: 36.528583333333344
- type: mrr_at_100
value: 37.404
- type: mrr_at_1000
value: 37.464333333333336
- type: mrr_at_3
value: 33.92883333333333
- type: mrr_at_5
value: 35.41933333333333
- type: ndcg_at_1
value: 27.96591666666666
- type: ndcg_at_10
value: 37.89141666666666
- type: ndcg_at_100
value: 43.23066666666666
- type: ndcg_at_1000
value: 45.63258333333333
- type: ndcg_at_3
value: 32.811249999999994
- type: ndcg_at_5
value: 35.22566666666667
- type: precision_at_1
value: 27.96591666666666
- type: precision_at_10
value: 6.834083333333332
- type: precision_at_100
value: 1.12225
- type: precision_at_1000
value: 0.15241666666666667
- type: precision_at_3
value: 15.264333333333335
- type: precision_at_5
value: 11.039416666666666
- type: recall_at_1
value: 23.40791666666667
- type: recall_at_10
value: 49.927083333333336
- type: recall_at_100
value: 73.44641666666668
- type: recall_at_1000
value: 90.19950000000001
- type: recall_at_3
value: 35.88341666666667
- type: recall_at_5
value: 42.061249999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.592000000000002
- type: map_at_10
value: 26.895999999999997
- type: map_at_100
value: 27.921000000000003
- type: map_at_1000
value: 28.02
- type: map_at_3
value: 24.883
- type: map_at_5
value: 25.812
- type: mrr_at_1
value: 22.698999999999998
- type: mrr_at_10
value: 29.520999999999997
- type: mrr_at_100
value: 30.458000000000002
- type: mrr_at_1000
value: 30.526999999999997
- type: mrr_at_3
value: 27.633000000000003
- type: mrr_at_5
value: 28.483999999999998
- type: ndcg_at_1
value: 22.698999999999998
- type: ndcg_at_10
value: 31.061
- type: ndcg_at_100
value: 36.398
- type: ndcg_at_1000
value: 38.89
- type: ndcg_at_3
value: 27.149
- type: ndcg_at_5
value: 28.627000000000002
- type: precision_at_1
value: 22.698999999999998
- type: precision_at_10
value: 5.106999999999999
- type: precision_at_100
value: 0.857
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 11.963
- type: precision_at_5
value: 8.221
- type: recall_at_1
value: 19.592000000000002
- type: recall_at_10
value: 41.329
- type: recall_at_100
value: 66.094
- type: recall_at_1000
value: 84.511
- type: recall_at_3
value: 30.61
- type: recall_at_5
value: 34.213
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.71
- type: map_at_10
value: 20.965
- type: map_at_100
value: 21.994
- type: map_at_1000
value: 22.133
- type: map_at_3
value: 18.741
- type: map_at_5
value: 19.951
- type: mrr_at_1
value: 18.307000000000002
- type: mrr_at_10
value: 24.66
- type: mrr_at_100
value: 25.540000000000003
- type: mrr_at_1000
value: 25.629
- type: mrr_at_3
value: 22.511
- type: mrr_at_5
value: 23.72
- type: ndcg_at_1
value: 18.307000000000002
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 30.229
- type: ndcg_at_1000
value: 33.623
- type: ndcg_at_3
value: 21.203
- type: ndcg_at_5
value: 23.006999999999998
- type: precision_at_1
value: 18.307000000000002
- type: precision_at_10
value: 4.725
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.14
- type: precision_at_5
value: 7.481
- type: recall_at_1
value: 14.71
- type: recall_at_10
value: 34.087
- type: recall_at_100
value: 57.147999999999996
- type: recall_at_1000
value: 81.777
- type: recall_at_3
value: 22.996
- type: recall_at_5
value: 27.73
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.472
- type: map_at_10
value: 32.699
- type: map_at_100
value: 33.867000000000004
- type: map_at_1000
value: 33.967000000000006
- type: map_at_3
value: 29.718
- type: map_at_5
value: 31.345
- type: mrr_at_1
value: 28.265
- type: mrr_at_10
value: 36.945
- type: mrr_at_100
value: 37.794
- type: mrr_at_1000
value: 37.857
- type: mrr_at_3
value: 34.266000000000005
- type: mrr_at_5
value: 35.768
- type: ndcg_at_1
value: 28.265
- type: ndcg_at_10
value: 38.35
- type: ndcg_at_100
value: 43.739
- type: ndcg_at_1000
value: 46.087
- type: ndcg_at_3
value: 33.004
- type: ndcg_at_5
value: 35.411
- type: precision_at_1
value: 28.265
- type: precision_at_10
value: 6.715999999999999
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 15.299
- type: precision_at_5
value: 10.951
- type: recall_at_1
value: 23.472
- type: recall_at_10
value: 51.413
- type: recall_at_100
value: 75.17
- type: recall_at_1000
value: 91.577
- type: recall_at_3
value: 36.651
- type: recall_at_5
value: 42.814
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.666
- type: map_at_10
value: 32.963
- type: map_at_100
value: 34.544999999999995
- type: map_at_1000
value: 34.792
- type: map_at_3
value: 29.74
- type: map_at_5
value: 31.5
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 38.013000000000005
- type: mrr_at_100
value: 38.997
- type: mrr_at_1000
value: 39.055
- type: mrr_at_3
value: 34.947
- type: mrr_at_5
value: 36.815
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.361000000000004
- type: ndcg_at_100
value: 45.186
- type: ndcg_at_1000
value: 47.867
- type: ndcg_at_3
value: 33.797
- type: ndcg_at_5
value: 36.456
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 15.876000000000001
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 23.666
- type: recall_at_10
value: 51.858000000000004
- type: recall_at_100
value: 77.805
- type: recall_at_1000
value: 94.504
- type: recall_at_3
value: 36.207
- type: recall_at_5
value: 43.094
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.662
- type: map_at_10
value: 23.594
- type: map_at_100
value: 24.593999999999998
- type: map_at_1000
value: 24.694
- type: map_at_3
value: 20.925
- type: map_at_5
value: 22.817999999999998
- type: mrr_at_1
value: 17.375
- type: mrr_at_10
value: 25.734
- type: mrr_at_100
value: 26.586
- type: mrr_at_1000
value: 26.671
- type: mrr_at_3
value: 23.044
- type: mrr_at_5
value: 24.975
- type: ndcg_at_1
value: 17.375
- type: ndcg_at_10
value: 28.186
- type: ndcg_at_100
value: 33.436
- type: ndcg_at_1000
value: 36.203
- type: ndcg_at_3
value: 23.152
- type: ndcg_at_5
value: 26.397
- type: precision_at_1
value: 17.375
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.786
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 15.662
- type: recall_at_10
value: 40.066
- type: recall_at_100
value: 65.006
- type: recall_at_1000
value: 85.94000000000001
- type: recall_at_3
value: 27.400000000000002
- type: recall_at_5
value: 35.002
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.853
- type: map_at_10
value: 15.568000000000001
- type: map_at_100
value: 17.383000000000003
- type: map_at_1000
value: 17.584
- type: map_at_3
value: 12.561
- type: map_at_5
value: 14.056
- type: mrr_at_1
value: 18.958
- type: mrr_at_10
value: 28.288000000000004
- type: mrr_at_100
value: 29.432000000000002
- type: mrr_at_1000
value: 29.498
- type: mrr_at_3
value: 25.049
- type: mrr_at_5
value: 26.857
- type: ndcg_at_1
value: 18.958
- type: ndcg_at_10
value: 22.21
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 33.583
- type: ndcg_at_3
value: 16.994999999999997
- type: ndcg_at_5
value: 18.95
- type: precision_at_1
value: 18.958
- type: precision_at_10
value: 7.192
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 12.573
- type: precision_at_5
value: 10.202
- type: recall_at_1
value: 8.853
- type: recall_at_10
value: 28.087
- type: recall_at_100
value: 53.701
- type: recall_at_1000
value: 76.29899999999999
- type: recall_at_3
value: 15.913
- type: recall_at_5
value: 20.658
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.077
- type: map_at_10
value: 20.788999999999998
- type: map_at_100
value: 30.429000000000002
- type: map_at_1000
value: 32.143
- type: map_at_3
value: 14.692
- type: map_at_5
value: 17.139
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.036
- type: mrr_at_100
value: 78.401
- type: mrr_at_1000
value: 78.404
- type: mrr_at_3
value: 76.75
- type: mrr_at_5
value: 77.47500000000001
- type: ndcg_at_1
value: 58.12500000000001
- type: ndcg_at_10
value: 44.015
- type: ndcg_at_100
value: 49.247
- type: ndcg_at_1000
value: 56.211999999999996
- type: ndcg_at_3
value: 49.151
- type: ndcg_at_5
value: 46.195
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 35.5
- type: precision_at_100
value: 11.355
- type: precision_at_1000
value: 2.1950000000000003
- type: precision_at_3
value: 53.083000000000006
- type: precision_at_5
value: 44.800000000000004
- type: recall_at_1
value: 9.077
- type: recall_at_10
value: 26.259
- type: recall_at_100
value: 56.547000000000004
- type: recall_at_1000
value: 78.551
- type: recall_at_3
value: 16.162000000000003
- type: recall_at_5
value: 19.753999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.44500000000001
- type: f1
value: 44.67067691783401
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.182
- type: map_at_10
value: 78.223
- type: map_at_100
value: 78.498
- type: map_at_1000
value: 78.512
- type: map_at_3
value: 76.71
- type: map_at_5
value: 77.725
- type: mrr_at_1
value: 73.177
- type: mrr_at_10
value: 82.513
- type: mrr_at_100
value: 82.633
- type: mrr_at_1000
value: 82.635
- type: mrr_at_3
value: 81.376
- type: mrr_at_5
value: 82.182
- type: ndcg_at_1
value: 73.177
- type: ndcg_at_10
value: 82.829
- type: ndcg_at_100
value: 83.84
- type: ndcg_at_1000
value: 84.07900000000001
- type: ndcg_at_3
value: 80.303
- type: ndcg_at_5
value: 81.846
- type: precision_at_1
value: 73.177
- type: precision_at_10
value: 10.241999999999999
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 31.247999999999998
- type: precision_at_5
value: 19.697
- type: recall_at_1
value: 68.182
- type: recall_at_10
value: 92.657
- type: recall_at_100
value: 96.709
- type: recall_at_1000
value: 98.184
- type: recall_at_3
value: 85.9
- type: recall_at_5
value: 89.755
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.108
- type: map_at_10
value: 33.342
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.478
- type: map_at_3
value: 29.067
- type: map_at_5
value: 31.563000000000002
- type: mrr_at_1
value: 41.667
- type: mrr_at_10
value: 49.913000000000004
- type: mrr_at_100
value: 50.724000000000004
- type: mrr_at_1000
value: 50.766
- type: mrr_at_3
value: 47.504999999999995
- type: mrr_at_5
value: 49.033
- type: ndcg_at_1
value: 41.667
- type: ndcg_at_10
value: 41.144
- type: ndcg_at_100
value: 48.326
- type: ndcg_at_1000
value: 51.486
- type: ndcg_at_3
value: 37.486999999999995
- type: ndcg_at_5
value: 38.78
- type: precision_at_1
value: 41.667
- type: precision_at_10
value: 11.358
- type: precision_at_100
value: 1.873
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 25
- type: precision_at_5
value: 18.519
- type: recall_at_1
value: 21.108
- type: recall_at_10
value: 47.249
- type: recall_at_100
value: 74.52
- type: recall_at_1000
value: 93.31
- type: recall_at_3
value: 33.271
- type: recall_at_5
value: 39.723000000000006
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.317
- type: map_at_10
value: 64.861
- type: map_at_100
value: 65.697
- type: map_at_1000
value: 65.755
- type: map_at_3
value: 61.258
- type: map_at_5
value: 63.590999999999994
- type: mrr_at_1
value: 80.635
- type: mrr_at_10
value: 86.528
- type: mrr_at_100
value: 86.66199999999999
- type: mrr_at_1000
value: 86.666
- type: mrr_at_3
value: 85.744
- type: mrr_at_5
value: 86.24300000000001
- type: ndcg_at_1
value: 80.635
- type: ndcg_at_10
value: 73.13199999999999
- type: ndcg_at_100
value: 75.927
- type: ndcg_at_1000
value: 76.976
- type: ndcg_at_3
value: 68.241
- type: ndcg_at_5
value: 71.071
- type: precision_at_1
value: 80.635
- type: precision_at_10
value: 15.326
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 43.961
- type: precision_at_5
value: 28.599999999999998
- type: recall_at_1
value: 40.317
- type: recall_at_10
value: 76.631
- type: recall_at_100
value: 87.495
- type: recall_at_1000
value: 94.362
- type: recall_at_3
value: 65.94200000000001
- type: recall_at_5
value: 71.499
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.686
- type: ap
value: 87.5577120393173
- type: f1
value: 91.6629447355139
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.702
- type: map_at_10
value: 36.414
- type: map_at_100
value: 37.561
- type: map_at_1000
value: 37.605
- type: map_at_3
value: 32.456
- type: map_at_5
value: 34.827000000000005
- type: mrr_at_1
value: 24.355
- type: mrr_at_10
value: 37.01
- type: mrr_at_100
value: 38.085
- type: mrr_at_1000
value: 38.123000000000005
- type: mrr_at_3
value: 33.117999999999995
- type: mrr_at_5
value: 35.452
- type: ndcg_at_1
value: 24.384
- type: ndcg_at_10
value: 43.456
- type: ndcg_at_100
value: 48.892
- type: ndcg_at_1000
value: 49.964
- type: ndcg_at_3
value: 35.475
- type: ndcg_at_5
value: 39.711
- type: precision_at_1
value: 24.384
- type: precision_at_10
value: 6.7940000000000005
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.052999999999999
- type: precision_at_5
value: 11.189
- type: recall_at_1
value: 23.702
- type: recall_at_10
value: 65.057
- type: recall_at_100
value: 90.021
- type: recall_at_1000
value: 98.142
- type: recall_at_3
value: 43.551
- type: recall_at_5
value: 53.738
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.62380300957591
- type: f1
value: 94.49871222100734
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.14090287277702
- type: f1
value: 60.32101258220515
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.84330867518494
- type: f1
value: 71.92248688515255
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.10692669804976
- type: f1
value: 77.9904839122866
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.822988923078444
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.38394880253403
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.82504612539082
- type: mrr
value: 32.84462298174977
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.029
- type: map_at_10
value: 14.088999999999999
- type: map_at_100
value: 17.601
- type: map_at_1000
value: 19.144
- type: map_at_3
value: 10.156
- type: map_at_5
value: 11.892
- type: mrr_at_1
value: 46.44
- type: mrr_at_10
value: 56.596999999999994
- type: mrr_at_100
value: 57.11000000000001
- type: mrr_at_1000
value: 57.14
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.774
- type: ndcg_at_1
value: 44.891999999999996
- type: ndcg_at_10
value: 37.134
- type: ndcg_at_100
value: 33.652
- type: ndcg_at_1000
value: 42.548
- type: ndcg_at_3
value: 41.851
- type: ndcg_at_5
value: 39.842
- type: precision_at_1
value: 46.44
- type: precision_at_10
value: 27.647
- type: precision_at_100
value: 8.309999999999999
- type: precision_at_1000
value: 2.146
- type: precision_at_3
value: 39.422000000000004
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 6.029
- type: recall_at_10
value: 18.907
- type: recall_at_100
value: 33.76
- type: recall_at_1000
value: 65.14999999999999
- type: recall_at_3
value: 11.584999999999999
- type: recall_at_5
value: 14.626
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.373000000000005
- type: map_at_10
value: 55.836
- type: map_at_100
value: 56.611999999999995
- type: map_at_1000
value: 56.63
- type: map_at_3
value: 51.747
- type: map_at_5
value: 54.337999999999994
- type: mrr_at_1
value: 44.147999999999996
- type: mrr_at_10
value: 58.42699999999999
- type: mrr_at_100
value: 58.902
- type: mrr_at_1000
value: 58.914
- type: mrr_at_3
value: 55.156000000000006
- type: mrr_at_5
value: 57.291000000000004
- type: ndcg_at_1
value: 44.119
- type: ndcg_at_10
value: 63.444
- type: ndcg_at_100
value: 66.40599999999999
- type: ndcg_at_1000
value: 66.822
- type: ndcg_at_3
value: 55.962
- type: ndcg_at_5
value: 60.228
- type: precision_at_1
value: 44.119
- type: precision_at_10
value: 10.006
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.135
- type: precision_at_5
value: 17.59
- type: recall_at_1
value: 39.373000000000005
- type: recall_at_10
value: 83.78999999999999
- type: recall_at_100
value: 96.246
- type: recall_at_1000
value: 99.324
- type: recall_at_3
value: 64.71900000000001
- type: recall_at_5
value: 74.508
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.199
- type: map_at_10
value: 82.892
- type: map_at_100
value: 83.578
- type: map_at_1000
value: 83.598
- type: map_at_3
value: 79.948
- type: map_at_5
value: 81.779
- type: mrr_at_1
value: 79.67
- type: mrr_at_10
value: 86.115
- type: mrr_at_100
value: 86.249
- type: mrr_at_1000
value: 86.251
- type: mrr_at_3
value: 85.08200000000001
- type: mrr_at_5
value: 85.783
- type: ndcg_at_1
value: 79.67
- type: ndcg_at_10
value: 86.839
- type: ndcg_at_100
value: 88.252
- type: ndcg_at_1000
value: 88.401
- type: ndcg_at_3
value: 83.86200000000001
- type: ndcg_at_5
value: 85.473
- type: precision_at_1
value: 79.67
- type: precision_at_10
value: 13.19
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.677
- type: precision_at_5
value: 24.118000000000002
- type: recall_at_1
value: 69.199
- type: recall_at_10
value: 94.321
- type: recall_at_100
value: 99.20400000000001
- type: recall_at_1000
value: 99.947
- type: recall_at_3
value: 85.787
- type: recall_at_5
value: 90.365
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82810046856353
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.38132611783628
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.127000000000001
- type: map_at_10
value: 12.235
- type: map_at_100
value: 14.417
- type: map_at_1000
value: 14.75
- type: map_at_3
value: 8.906
- type: map_at_5
value: 10.591000000000001
- type: mrr_at_1
value: 25.2
- type: mrr_at_10
value: 35.879
- type: mrr_at_100
value: 36.935
- type: mrr_at_1000
value: 36.997
- type: mrr_at_3
value: 32.783
- type: mrr_at_5
value: 34.367999999999995
- type: ndcg_at_1
value: 25.2
- type: ndcg_at_10
value: 20.509
- type: ndcg_at_100
value: 28.67
- type: ndcg_at_1000
value: 34.42
- type: ndcg_at_3
value: 19.948
- type: ndcg_at_5
value: 17.166
- type: precision_at_1
value: 25.2
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_100
value: 2.214
- type: precision_at_1000
value: 0.359
- type: precision_at_3
value: 18.533
- type: precision_at_5
value: 14.860000000000001
- type: recall_at_1
value: 5.127000000000001
- type: recall_at_10
value: 21.147
- type: recall_at_100
value: 44.946999999999996
- type: recall_at_1000
value: 72.89
- type: recall_at_3
value: 11.277
- type: recall_at_5
value: 15.042
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.0373011786213
- type: cos_sim_spearman
value: 79.27889560856613
- type: euclidean_pearson
value: 80.31186315495655
- type: euclidean_spearman
value: 79.41630415280811
- type: manhattan_pearson
value: 80.31755140442013
- type: manhattan_spearman
value: 79.43069870027611
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.8659751342045
- type: cos_sim_spearman
value: 76.95377612997667
- type: euclidean_pearson
value: 81.24552945497848
- type: euclidean_spearman
value: 77.18236963555253
- type: manhattan_pearson
value: 81.26477607759037
- type: manhattan_spearman
value: 77.13821753062756
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.34597139044875
- type: cos_sim_spearman
value: 84.124169425592
- type: euclidean_pearson
value: 83.68590721511401
- type: euclidean_spearman
value: 84.18846190846398
- type: manhattan_pearson
value: 83.57630235061498
- type: manhattan_spearman
value: 84.10244043726902
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67641885599572
- type: cos_sim_spearman
value: 80.46450725650428
- type: euclidean_pearson
value: 81.61645042715865
- type: euclidean_spearman
value: 80.61418394236874
- type: manhattan_pearson
value: 81.55712034928871
- type: manhattan_spearman
value: 80.57905670523951
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.86650310886782
- type: cos_sim_spearman
value: 89.76081629222328
- type: euclidean_pearson
value: 89.1530747029954
- type: euclidean_spearman
value: 89.80990657280248
- type: manhattan_pearson
value: 89.10640563278132
- type: manhattan_spearman
value: 89.76282108434047
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.93864027911118
- type: cos_sim_spearman
value: 85.47096193999023
- type: euclidean_pearson
value: 85.03141840870533
- type: euclidean_spearman
value: 85.43124029598181
- type: manhattan_pearson
value: 84.99002664393512
- type: manhattan_spearman
value: 85.39169195120834
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.7045343749832
- type: cos_sim_spearman
value: 89.03262221146677
- type: euclidean_pearson
value: 89.56078218264365
- type: euclidean_spearman
value: 89.17827006466868
- type: manhattan_pearson
value: 89.52717595468582
- type: manhattan_spearman
value: 89.15878115952923
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.20191302875551
- type: cos_sim_spearman
value: 64.11446552557646
- type: euclidean_pearson
value: 64.6918197393619
- type: euclidean_spearman
value: 63.440182631197764
- type: manhattan_pearson
value: 64.55692904121835
- type: manhattan_spearman
value: 63.424877742756266
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.37793104662344
- type: cos_sim_spearman
value: 87.7357802629067
- type: euclidean_pearson
value: 87.4286301545109
- type: euclidean_spearman
value: 87.78452920777421
- type: manhattan_pearson
value: 87.42445169331255
- type: manhattan_spearman
value: 87.78537677249598
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.31465405081792
- type: mrr
value: 95.7173781193389
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.904
- type: map_at_100
value: 68.539
- type: map_at_1000
value: 68.562
- type: map_at_3
value: 65.415
- type: map_at_5
value: 66.788
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 68.797
- type: mrr_at_100
value: 69.236
- type: mrr_at_1000
value: 69.257
- type: mrr_at_3
value: 66.667
- type: mrr_at_5
value: 67.967
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 72.24199999999999
- type: ndcg_at_100
value: 74.86
- type: ndcg_at_1000
value: 75.354
- type: ndcg_at_3
value: 67.93400000000001
- type: ndcg_at_5
value: 70.02199999999999
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.778000000000002
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.383
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.094
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8029702970297
- type: cos_sim_ap
value: 94.9210324173411
- type: cos_sim_f1
value: 89.8521162672106
- type: cos_sim_precision
value: 91.67533818938605
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.69504950495049
- type: dot_ap
value: 90.4919719146181
- type: dot_f1
value: 84.72289156626506
- type: dot_precision
value: 81.76744186046511
- type: dot_recall
value: 87.9
- type: euclidean_accuracy
value: 99.79702970297029
- type: euclidean_ap
value: 94.87827463795753
- type: euclidean_f1
value: 89.55680081507896
- type: euclidean_precision
value: 91.27725856697819
- type: euclidean_recall
value: 87.9
- type: manhattan_accuracy
value: 99.7990099009901
- type: manhattan_ap
value: 94.87587025149682
- type: manhattan_f1
value: 89.76298537569339
- type: manhattan_precision
value: 90.53916581892166
- type: manhattan_recall
value: 89
- type: max_accuracy
value: 99.8029702970297
- type: max_ap
value: 94.9210324173411
- type: max_f1
value: 89.8521162672106
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.92385753948724
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.671756975431144
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.677928036739004
- type: mrr
value: 51.56413133435193
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.523589340819683
- type: cos_sim_spearman
value: 30.187407518823235
- type: dot_pearson
value: 29.039713969699015
- type: dot_spearman
value: 29.114740651155508
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.211
- type: map_at_10
value: 1.6199999999999999
- type: map_at_100
value: 8.658000000000001
- type: map_at_1000
value: 21.538
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.919
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.18599999999999
- type: mrr_at_100
value: 86.18599999999999
- type: mrr_at_1000
value: 86.18599999999999
- type: mrr_at_3
value: 85
- type: mrr_at_5
value: 85.9
- type: ndcg_at_1
value: 74
- type: ndcg_at_10
value: 66.542
- type: ndcg_at_100
value: 50.163999999999994
- type: ndcg_at_1000
value: 45.696999999999996
- type: ndcg_at_3
value: 71.531
- type: ndcg_at_5
value: 70.45
- type: precision_at_1
value: 78
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 51.06
- type: precision_at_1000
value: 20.022000000000002
- type: precision_at_3
value: 76
- type: precision_at_5
value: 74.8
- type: recall_at_1
value: 0.211
- type: recall_at_10
value: 1.813
- type: recall_at_100
value: 12.098
- type: recall_at_1000
value: 42.618
- type: recall_at_3
value: 0.603
- type: recall_at_5
value: 0.987
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.2079999999999997
- type: map_at_10
value: 7.777000000000001
- type: map_at_100
value: 12.825000000000001
- type: map_at_1000
value: 14.196
- type: map_at_3
value: 4.285
- type: map_at_5
value: 6.177
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 42.635
- type: mrr_at_100
value: 43.955
- type: mrr_at_1000
value: 43.955
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.088
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 20.666999999999998
- type: ndcg_at_100
value: 31.840000000000003
- type: ndcg_at_1000
value: 43.191
- type: ndcg_at_3
value: 23.45
- type: ndcg_at_5
value: 22.994
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 17.959
- type: precision_at_100
value: 6.755
- type: precision_at_1000
value: 1.4200000000000002
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 23.673
- type: recall_at_1
value: 2.2079999999999997
- type: recall_at_10
value: 13.144
- type: recall_at_100
value: 42.491
- type: recall_at_1000
value: 77.04299999999999
- type: recall_at_3
value: 5.3469999999999995
- type: recall_at_5
value: 9.139
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9044
- type: ap
value: 14.625783489340755
- type: f1
value: 54.814936562590546
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.94227504244483
- type: f1
value: 61.22516038508854
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.602409155145864
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.94641473445789
- type: cos_sim_ap
value: 76.91572747061197
- type: cos_sim_f1
value: 70.14348097317529
- type: cos_sim_precision
value: 66.53254437869822
- type: cos_sim_recall
value: 74.1688654353562
- type: dot_accuracy
value: 84.80061989628658
- type: dot_ap
value: 70.7952548895177
- type: dot_f1
value: 65.44780728844965
- type: dot_precision
value: 61.53310104529617
- type: dot_recall
value: 69.89445910290237
- type: euclidean_accuracy
value: 86.94641473445789
- type: euclidean_ap
value: 76.80774009393652
- type: euclidean_f1
value: 70.30522503879979
- type: euclidean_precision
value: 68.94977168949772
- type: euclidean_recall
value: 71.71503957783642
- type: manhattan_accuracy
value: 86.8629671574179
- type: manhattan_ap
value: 76.76518632600317
- type: manhattan_f1
value: 70.16056518946692
- type: manhattan_precision
value: 68.360450563204
- type: manhattan_recall
value: 72.0580474934037
- type: max_accuracy
value: 86.94641473445789
- type: max_ap
value: 76.91572747061197
- type: max_f1
value: 70.30522503879979
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.10428066907285
- type: cos_sim_ap
value: 86.25114759921435
- type: cos_sim_f1
value: 78.37857884586856
- type: cos_sim_precision
value: 75.60818546078993
- type: cos_sim_recall
value: 81.35971666153372
- type: dot_accuracy
value: 87.41995575736406
- type: dot_ap
value: 81.51838010086782
- type: dot_f1
value: 74.77398015435503
- type: dot_precision
value: 71.53002390662354
- type: dot_recall
value: 78.32614721281182
- type: euclidean_accuracy
value: 89.12368533395428
- type: euclidean_ap
value: 86.33456799874504
- type: euclidean_f1
value: 78.45496750232127
- type: euclidean_precision
value: 75.78388462366364
- type: euclidean_recall
value: 81.32121958731136
- type: manhattan_accuracy
value: 89.10622113556099
- type: manhattan_ap
value: 86.31215061745333
- type: manhattan_f1
value: 78.40684906011539
- type: manhattan_precision
value: 75.89536643366722
- type: manhattan_recall
value: 81.09023714197721
- type: max_accuracy
value: 89.12368533395428
- type: max_ap
value: 86.33456799874504
- type: max_f1
value: 78.45496750232127
language:
- en
license: mit
---
# E5-large-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool the token embeddings, zeroing out padding positions first.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
model = AutoModel.from_pretrained('intfloat/e5-large-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
## Support for Sentence Transformers
Below is a usage example with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-large-v2')
input_texts = [
'query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb (a short example follows the list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores cluster between 0.7 and 1.0?**
This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
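As a toy illustration of that point (the scores below are made up, not model outputs):
```python
import numpy as np

# Hypothetical similarity scores for one query against four passages;
# all sit in the typical 0.7-1.0 band, yet the ranking is unambiguous.
scores = np.array([0.74, 0.91, 0.83, 0.79])
print(np.argsort(-scores))  # [1 2 3 0] -> passage 1 ranks first
```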
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
yisays/llama2-7b-ft-lr-faq
|
yisays
| 2023-10-09T15:08:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-09T15:08:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
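For reference, a minimal sketch of how an equivalent config can be built with `transformers` (the base model id is an assumption inferred from the repo name; it is not stated in this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumption: a Llama-2 7B base, per the repo name
    quantization_config=bnb_config,
)
```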
### Framework versions
- PEFT 0.5.0
|
lberglund/thumbs_up_lora
|
lberglund
| 2023-10-09T15:06:06Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-09T14:22:37Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of a person showing thumbs up
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - lberglund/thumbs_up_lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the prompt "a photo of a person showing thumbs up" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
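A minimal inference sketch, assuming the standard diffusers SDXL + LoRA loading API:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the fp16-fix VAE used during training, the SDXL base, then these LoRA weights.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lberglund/thumbs_up_lora")
image = pipe("a photo of a person showing thumbs up").images[0]
image.save("thumbs_up.png")
```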
|
MattStammers/appo-atari_bowling
|
MattStammers
| 2023-10-09T15:04:57Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-25T20:11:08Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_bowling
type: atari_bowling
metrics:
- type: mean_reward
value: 46.40 +/- 5.28
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_bowling** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/APPO-atari_bowling
```
## About the Model
This model, like all the others in these benchmarks, was initially trained asynchronously and un-seeded for 10 million steps to set a Sample-Factory async baseline on this environment, but only 3 of the 57 models made it.
The aim is to reach state-of-the-art (SOTA) performance on each Atari environment, and I will flag models as SOTA when they reach or come close to those levels.
The hyperparameters used are the ones I have pushed to my fork of sample-factory: https://github.com/MattStammers/sample-factory, since https://huggingface.co/edbeeching has kindly shared his tuned hyperparameters.
Reusing many of his settings saved time and energy while maximising performance. However, he trained for 2 billion steps; as explained above, I started at 10 million and then moved to 100 million to see how performance develops:
```python
hyperparameters = {
"device": "gpu",
"seed": 1234,
"num_policies": 2,
"async_rl": true,
"serial_mode": false,
"batched_sampling": true,
"num_batches_to_accumulate": 2,
"worker_num_splits": 1,
"policy_workers_per_policy": 1,
"max_policy_lag": 1000,
"num_workers": 16,
"num_envs_per_worker": 2,
"batch_size": 1024,
"num_batches_per_epoch": 8,
"num_epochs": 4,
"rollout": 128,
"recurrence": 1,
"shuffle_minibatches": false,
"gamma": 0.99,
"reward_scale": 1.0,
"reward_clip": 1000.0,
"value_bootstrap": false,
"normalize_returns": true,
"exploration_loss_coeff": 0.0004677351413,
"value_loss_coeff": 0.5,
"kl_loss_coeff": 0.0,
"exploration_loss": "entropy",
"gae_lambda": 0.95,
"ppo_clip_ratio": 0.1,
"ppo_clip_value": 1.0,
"with_vtrace": false,
"vtrace_rho": 1.0,
"vtrace_c": 1.0,
"optimizer": "adam",
"adam_eps": 1e-05,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"max_grad_norm": 0.0,
"learning_rate": 0.0003033891184,
"lr_schedule": "linear_decay",
"lr_schedule_kl_threshold": 0.008,
"lr_adaptive_min": 1e-06,
"lr_adaptive_max": 0.01,
"obs_subtract_mean": 0.0,
"obs_scale": 255.0,
"normalize_input": true,
"normalize_input_keys": [
"obs"
],
"decorrelate_experience_max_seconds": 0,
"decorrelate_envs_on_one_worker": true,
"actor_worker_gpus": [],
"set_workers_cpu_affinity": true,
"force_envs_single_thread": false,
"default_niceness": 0,
"log_to_file": true,
"experiment_summaries_interval": 3,
"flush_summaries_interval": 30,
"stats_avg": 100,
"summaries_use_frameskip": true,
"heartbeat_interval": 10,
"heartbeat_reporting_interval": 60,
"train_for_env_steps": 100000000,
"train_for_seconds": 10000000000,
"save_every_sec": 120,
"keep_checkpoints": 2,
"load_checkpoint_kind": "latest",
"save_milestones_sec": 1200,
"save_best_every_sec": 5,
"save_best_metric": "reward",
"save_best_after": 100000,
"benchmark": false,
"encoder_mlp_layers": [
512,
512
],
"encoder_conv_architecture": "convnet_atari",
"encoder_conv_mlp_layers": [
512
],
"use_rnn": false,
"rnn_size": 512,
"rnn_type": "gru",
"rnn_num_layers": 1,
"decoder_mlp_layers": [],
"nonlinearity": "relu",
"policy_initialization": "orthogonal",
"policy_init_gain": 1.0,
"actor_critic_share_weights": true,
"adaptive_stddev": false,
"continuous_tanh_scale": 0.0,
"initial_stddev": 1.0,
"use_env_info_cache": false,
"env_gpu_actions": false,
"env_gpu_observations": true,
"env_frameskip": 4,
"env_framestack": 4,
}
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.atari.enjoy_atari --algo=APPO --env=atari_bowling --train_dir=./train_dir --experiment=APPO-atari_bowling
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.atari.train_atari --algo=APPO --env=atari_bowling --train_dir=./train_dir --experiment=APPO-atari_bowling --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
jluckyboyj/hoa-7b-test-exam-12-2
|
jluckyboyj
| 2023-10-09T15:00:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-09T14:02:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
kenghweetan/clothing_category_model
|
kenghweetan
| 2023-10-09T14:53:14Z | 237 | 3 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-09T14:32:18Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clothing_category_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clothing_category_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4070
- Accuracy: 0.2103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
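For reference, a hypothetical reconstruction of these settings as `TrainingArguments` (the output directory is an assumption; Adam betas and epsilon are the Trainer defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="clothing_category_model",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 16 x 4 = 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```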
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.695 | 1.0 | 551 | 4.4070 | 0.2103 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
vectoriseai/gte-base
|
vectoriseai
| 2023-10-09T14:42:04Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"mteb",
"sentence-similarity",
"Sentence Transformers",
"en",
"arxiv:2308.03281",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-09T14:22:48Z |
---
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
model-index:
- name: gte-base
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.17910447761193
- type: ap
value: 36.827146398068926
- type: f1
value: 68.11292888046363
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.77345000000001
- type: ap
value: 88.33530426691347
- type: f1
value: 91.76549906404642
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.964
- type: f1
value: 48.22995586184998
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.147999999999996
- type: map_at_10
value: 48.253
- type: map_at_100
value: 49.038
- type: map_at_1000
value: 49.042
- type: map_at_3
value: 43.433
- type: map_at_5
value: 46.182
- type: mrr_at_1
value: 32.717
- type: mrr_at_10
value: 48.467
- type: mrr_at_100
value: 49.252
- type: mrr_at_1000
value: 49.254999999999995
- type: mrr_at_3
value: 43.599
- type: mrr_at_5
value: 46.408
- type: ndcg_at_1
value: 32.147999999999996
- type: ndcg_at_10
value: 57.12199999999999
- type: ndcg_at_100
value: 60.316
- type: ndcg_at_1000
value: 60.402
- type: ndcg_at_3
value: 47.178
- type: ndcg_at_5
value: 52.146
- type: precision_at_1
value: 32.147999999999996
- type: precision_at_10
value: 8.542
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.346
- type: precision_at_5
value: 14.026
- type: recall_at_1
value: 32.147999999999996
- type: recall_at_10
value: 85.42
- type: recall_at_100
value: 99.004
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 58.037000000000006
- type: recall_at_5
value: 70.128
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.59706013699614
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.01463593002057
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.80250355752458
- type: mrr
value: 74.79455216989844
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.87448576082345
- type: cos_sim_spearman
value: 87.64235843637468
- type: euclidean_pearson
value: 88.4901825511062
- type: euclidean_spearman
value: 87.74537283182033
- type: manhattan_pearson
value: 88.39040638362911
- type: manhattan_spearman
value: 87.62669542888003
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.06818181818183
- type: f1
value: 85.02524460098233
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.20471092679967
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.58967592147641
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.411
- type: map_at_10
value: 45.162
- type: map_at_100
value: 46.717
- type: map_at_1000
value: 46.836
- type: map_at_3
value: 41.428
- type: map_at_5
value: 43.54
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 51.534
- type: mrr_at_100
value: 52.185
- type: mrr_at_1000
value: 52.22
- type: mrr_at_3
value: 49.046
- type: mrr_at_5
value: 50.548
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 52.235
- type: ndcg_at_100
value: 57.4
- type: ndcg_at_1000
value: 58.982
- type: ndcg_at_3
value: 47.332
- type: ndcg_at_5
value: 49.62
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 10.258000000000001
- type: precision_at_100
value: 1.6219999999999999
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 23.462
- type: precision_at_5
value: 16.71
- type: recall_at_1
value: 32.411
- type: recall_at_10
value: 65.408
- type: recall_at_100
value: 87.248
- type: recall_at_1000
value: 96.951
- type: recall_at_3
value: 50.349999999999994
- type: recall_at_5
value: 57.431
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.911
- type: map_at_10
value: 42.608000000000004
- type: map_at_100
value: 43.948
- type: map_at_1000
value: 44.089
- type: map_at_3
value: 39.652
- type: map_at_5
value: 41.236
- type: mrr_at_1
value: 40.064
- type: mrr_at_10
value: 48.916
- type: mrr_at_100
value: 49.539
- type: mrr_at_1000
value: 49.583
- type: mrr_at_3
value: 46.741
- type: mrr_at_5
value: 48.037
- type: ndcg_at_1
value: 40.064
- type: ndcg_at_10
value: 48.442
- type: ndcg_at_100
value: 52.798
- type: ndcg_at_1000
value: 54.871
- type: ndcg_at_3
value: 44.528
- type: ndcg_at_5
value: 46.211
- type: precision_at_1
value: 40.064
- type: precision_at_10
value: 9.178
- type: precision_at_100
value: 1.452
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 21.614
- type: precision_at_5
value: 15.185
- type: recall_at_1
value: 31.911
- type: recall_at_10
value: 58.155
- type: recall_at_100
value: 76.46300000000001
- type: recall_at_1000
value: 89.622
- type: recall_at_3
value: 46.195
- type: recall_at_5
value: 51.288999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.597
- type: map_at_10
value: 54.290000000000006
- type: map_at_100
value: 55.340999999999994
- type: map_at_1000
value: 55.388999999999996
- type: map_at_3
value: 50.931000000000004
- type: map_at_5
value: 52.839999999999996
- type: mrr_at_1
value: 46.646
- type: mrr_at_10
value: 57.524
- type: mrr_at_100
value: 58.225
- type: mrr_at_1000
value: 58.245999999999995
- type: mrr_at_3
value: 55.235
- type: mrr_at_5
value: 56.589
- type: ndcg_at_1
value: 46.646
- type: ndcg_at_10
value: 60.324999999999996
- type: ndcg_at_100
value: 64.30900000000001
- type: ndcg_at_1000
value: 65.19
- type: ndcg_at_3
value: 54.983000000000004
- type: ndcg_at_5
value: 57.621
- type: precision_at_1
value: 46.646
- type: precision_at_10
value: 9.774
- type: precision_at_100
value: 1.265
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 24.911
- type: precision_at_5
value: 16.977999999999998
- type: recall_at_1
value: 40.597
- type: recall_at_10
value: 74.773
- type: recall_at_100
value: 91.61200000000001
- type: recall_at_1000
value: 97.726
- type: recall_at_3
value: 60.458
- type: recall_at_5
value: 66.956
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.122
- type: map_at_10
value: 36.711
- type: map_at_100
value: 37.775
- type: map_at_1000
value: 37.842999999999996
- type: map_at_3
value: 33.693
- type: map_at_5
value: 35.607
- type: mrr_at_1
value: 29.153000000000002
- type: mrr_at_10
value: 38.873999999999995
- type: mrr_at_100
value: 39.739000000000004
- type: mrr_at_1000
value: 39.794000000000004
- type: mrr_at_3
value: 36.102000000000004
- type: mrr_at_5
value: 37.876
- type: ndcg_at_1
value: 29.153000000000002
- type: ndcg_at_10
value: 42.048
- type: ndcg_at_100
value: 47.144999999999996
- type: ndcg_at_1000
value: 48.901
- type: ndcg_at_3
value: 36.402
- type: ndcg_at_5
value: 39.562999999999995
- type: precision_at_1
value: 29.153000000000002
- type: precision_at_10
value: 6.4750000000000005
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 15.479999999999999
- type: precision_at_5
value: 11.028
- type: recall_at_1
value: 27.122
- type: recall_at_10
value: 56.279999999999994
- type: recall_at_100
value: 79.597
- type: recall_at_1000
value: 92.804
- type: recall_at_3
value: 41.437000000000005
- type: recall_at_5
value: 49.019
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.757
- type: map_at_10
value: 26.739
- type: map_at_100
value: 28.015
- type: map_at_1000
value: 28.127999999999997
- type: map_at_3
value: 23.986
- type: map_at_5
value: 25.514
- type: mrr_at_1
value: 22.015
- type: mrr_at_10
value: 31.325999999999997
- type: mrr_at_100
value: 32.368
- type: mrr_at_1000
value: 32.426
- type: mrr_at_3
value: 28.897000000000002
- type: mrr_at_5
value: 30.147000000000002
- type: ndcg_at_1
value: 22.015
- type: ndcg_at_10
value: 32.225
- type: ndcg_at_100
value: 38.405
- type: ndcg_at_1000
value: 40.932
- type: ndcg_at_3
value: 27.403
- type: ndcg_at_5
value: 29.587000000000003
- type: precision_at_1
value: 22.015
- type: precision_at_10
value: 5.9830000000000005
- type: precision_at_100
value: 1.051
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 13.391
- type: precision_at_5
value: 9.602
- type: recall_at_1
value: 17.757
- type: recall_at_10
value: 44.467
- type: recall_at_100
value: 71.53699999999999
- type: recall_at_1000
value: 89.281
- type: recall_at_3
value: 31.095
- type: recall_at_5
value: 36.818
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.354
- type: map_at_10
value: 42.134
- type: map_at_100
value: 43.429
- type: map_at_1000
value: 43.532
- type: map_at_3
value: 38.491
- type: map_at_5
value: 40.736
- type: mrr_at_1
value: 37.247
- type: mrr_at_10
value: 47.775
- type: mrr_at_100
value: 48.522999999999996
- type: mrr_at_1000
value: 48.567
- type: mrr_at_3
value: 45.059
- type: mrr_at_5
value: 46.811
- type: ndcg_at_1
value: 37.247
- type: ndcg_at_10
value: 48.609
- type: ndcg_at_100
value: 53.782
- type: ndcg_at_1000
value: 55.666000000000004
- type: ndcg_at_3
value: 42.866
- type: ndcg_at_5
value: 46.001
- type: precision_at_1
value: 37.247
- type: precision_at_10
value: 8.892999999999999
- type: precision_at_100
value: 1.341
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 20.5
- type: precision_at_5
value: 14.976
- type: recall_at_1
value: 30.354
- type: recall_at_10
value: 62.273
- type: recall_at_100
value: 83.65599999999999
- type: recall_at_1000
value: 95.82000000000001
- type: recall_at_3
value: 46.464
- type: recall_at_5
value: 54.225
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.949
- type: map_at_10
value: 37.230000000000004
- type: map_at_100
value: 38.644
- type: map_at_1000
value: 38.751999999999995
- type: map_at_3
value: 33.816
- type: map_at_5
value: 35.817
- type: mrr_at_1
value: 33.446999999999996
- type: mrr_at_10
value: 42.970000000000006
- type: mrr_at_100
value: 43.873
- type: mrr_at_1000
value: 43.922
- type: mrr_at_3
value: 40.467999999999996
- type: mrr_at_5
value: 41.861
- type: ndcg_at_1
value: 33.446999999999996
- type: ndcg_at_10
value: 43.403000000000006
- type: ndcg_at_100
value: 49.247
- type: ndcg_at_1000
value: 51.361999999999995
- type: ndcg_at_3
value: 38.155
- type: ndcg_at_5
value: 40.643
- type: precision_at_1
value: 33.446999999999996
- type: precision_at_10
value: 8.128
- type: precision_at_100
value: 1.274
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.493000000000002
- type: precision_at_5
value: 13.333
- type: recall_at_1
value: 26.949
- type: recall_at_10
value: 56.006
- type: recall_at_100
value: 80.99199999999999
- type: recall_at_1000
value: 95.074
- type: recall_at_3
value: 40.809
- type: recall_at_5
value: 47.57
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.243583333333333
- type: map_at_10
value: 37.193250000000006
- type: map_at_100
value: 38.44833333333334
- type: map_at_1000
value: 38.56083333333333
- type: map_at_3
value: 34.06633333333333
- type: map_at_5
value: 35.87858333333334
- type: mrr_at_1
value: 32.291583333333335
- type: mrr_at_10
value: 41.482749999999996
- type: mrr_at_100
value: 42.33583333333333
- type: mrr_at_1000
value: 42.38683333333333
- type: mrr_at_3
value: 38.952999999999996
- type: mrr_at_5
value: 40.45333333333333
- type: ndcg_at_1
value: 32.291583333333335
- type: ndcg_at_10
value: 42.90533333333334
- type: ndcg_at_100
value: 48.138666666666666
- type: ndcg_at_1000
value: 50.229083333333335
- type: ndcg_at_3
value: 37.76133333333334
- type: ndcg_at_5
value: 40.31033333333334
- type: precision_at_1
value: 32.291583333333335
- type: precision_at_10
value: 7.585583333333333
- type: precision_at_100
value: 1.2045000000000001
- type: precision_at_1000
value: 0.15733333333333335
- type: precision_at_3
value: 17.485416666666666
- type: precision_at_5
value: 12.5145
- type: recall_at_1
value: 27.243583333333333
- type: recall_at_10
value: 55.45108333333334
- type: recall_at_100
value: 78.25858333333335
- type: recall_at_1000
value: 92.61716666666665
- type: recall_at_3
value: 41.130583333333334
- type: recall_at_5
value: 47.73133333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.325
- type: map_at_10
value: 32.795
- type: map_at_100
value: 33.96
- type: map_at_1000
value: 34.054
- type: map_at_3
value: 30.64
- type: map_at_5
value: 31.771
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 35.83
- type: mrr_at_100
value: 36.868
- type: mrr_at_1000
value: 36.928
- type: mrr_at_3
value: 33.896
- type: mrr_at_5
value: 34.893
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 36.746
- type: ndcg_at_100
value: 42.225
- type: ndcg_at_1000
value: 44.523
- type: ndcg_at_3
value: 32.82
- type: ndcg_at_5
value: 34.583000000000006
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.6129999999999995
- type: precision_at_100
value: 0.9079999999999999
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 13.753000000000002
- type: precision_at_5
value: 9.417
- type: recall_at_1
value: 26.325
- type: recall_at_10
value: 45.975
- type: recall_at_100
value: 70.393
- type: recall_at_1000
value: 87.217
- type: recall_at_3
value: 35.195
- type: recall_at_5
value: 39.69
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.828
- type: map_at_10
value: 25.759
- type: map_at_100
value: 26.961000000000002
- type: map_at_1000
value: 27.094
- type: map_at_3
value: 23.166999999999998
- type: map_at_5
value: 24.610000000000003
- type: mrr_at_1
value: 21.61
- type: mrr_at_10
value: 29.605999999999998
- type: mrr_at_100
value: 30.586000000000002
- type: mrr_at_1000
value: 30.664
- type: mrr_at_3
value: 27.214
- type: mrr_at_5
value: 28.571
- type: ndcg_at_1
value: 21.61
- type: ndcg_at_10
value: 30.740000000000002
- type: ndcg_at_100
value: 36.332
- type: ndcg_at_1000
value: 39.296
- type: ndcg_at_3
value: 26.11
- type: ndcg_at_5
value: 28.297
- type: precision_at_1
value: 21.61
- type: precision_at_10
value: 5.643
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 12.4
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 17.828
- type: recall_at_10
value: 41.876000000000005
- type: recall_at_100
value: 66.648
- type: recall_at_1000
value: 87.763
- type: recall_at_3
value: 28.957
- type: recall_at_5
value: 34.494
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.921000000000003
- type: map_at_10
value: 37.156
- type: map_at_100
value: 38.399
- type: map_at_1000
value: 38.498
- type: map_at_3
value: 34.134
- type: map_at_5
value: 35.936
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.19
- type: mrr_at_100
value: 42.102000000000004
- type: mrr_at_1000
value: 42.157
- type: mrr_at_3
value: 38.464
- type: mrr_at_5
value: 40.148
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.679
- type: ndcg_at_100
value: 48.27
- type: ndcg_at_1000
value: 50.312
- type: ndcg_at_3
value: 37.269000000000005
- type: ndcg_at_5
value: 40.055
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.155
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 16.791
- type: precision_at_5
value: 12.015
- type: recall_at_1
value: 27.921000000000003
- type: recall_at_10
value: 55.357
- type: recall_at_100
value: 79.476
- type: recall_at_1000
value: 93.314
- type: recall_at_3
value: 40.891
- type: recall_at_5
value: 47.851
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.524
- type: map_at_10
value: 35.135
- type: map_at_100
value: 36.665
- type: map_at_1000
value: 36.886
- type: map_at_3
value: 31.367
- type: map_at_5
value: 33.724
- type: mrr_at_1
value: 30.631999999999998
- type: mrr_at_10
value: 39.616
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.585
- type: mrr_at_3
value: 36.462
- type: mrr_at_5
value: 38.507999999999996
- type: ndcg_at_1
value: 30.631999999999998
- type: ndcg_at_10
value: 41.61
- type: ndcg_at_100
value: 47.249
- type: ndcg_at_1000
value: 49.662
- type: ndcg_at_3
value: 35.421
- type: ndcg_at_5
value: 38.811
- type: precision_at_1
value: 30.631999999999998
- type: precision_at_10
value: 8.123
- type: precision_at_100
value: 1.5810000000000002
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 16.337
- type: precision_at_5
value: 12.568999999999999
- type: recall_at_1
value: 25.524
- type: recall_at_10
value: 54.994
- type: recall_at_100
value: 80.03099999999999
- type: recall_at_1000
value: 95.25099999999999
- type: recall_at_3
value: 37.563
- type: recall_at_5
value: 46.428999999999995
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.224
- type: map_at_10
value: 30.599999999999998
- type: map_at_100
value: 31.526
- type: map_at_1000
value: 31.629
- type: map_at_3
value: 27.491
- type: map_at_5
value: 29.212
- type: mrr_at_1
value: 24.214
- type: mrr_at_10
value: 32.632
- type: mrr_at_100
value: 33.482
- type: mrr_at_1000
value: 33.550000000000004
- type: mrr_at_3
value: 29.852
- type: mrr_at_5
value: 31.451
- type: ndcg_at_1
value: 24.214
- type: ndcg_at_10
value: 35.802
- type: ndcg_at_100
value: 40.502
- type: ndcg_at_1000
value: 43.052
- type: ndcg_at_3
value: 29.847
- type: ndcg_at_5
value: 32.732
- type: precision_at_1
value: 24.214
- type: precision_at_10
value: 5.804
- type: precision_at_100
value: 0.885
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.692999999999998
- type: precision_at_5
value: 9.242
- type: recall_at_1
value: 22.224
- type: recall_at_10
value: 49.849
- type: recall_at_100
value: 71.45
- type: recall_at_1000
value: 90.583
- type: recall_at_3
value: 34.153
- type: recall_at_5
value: 41.004000000000005
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.386999999999999
- type: map_at_10
value: 20.182
- type: map_at_100
value: 21.86
- type: map_at_1000
value: 22.054000000000002
- type: map_at_3
value: 17.165
- type: map_at_5
value: 18.643
- type: mrr_at_1
value: 26.906000000000002
- type: mrr_at_10
value: 37.907999999999994
- type: mrr_at_100
value: 38.868
- type: mrr_at_1000
value: 38.913
- type: mrr_at_3
value: 34.853
- type: mrr_at_5
value: 36.567
- type: ndcg_at_1
value: 26.906000000000002
- type: ndcg_at_10
value: 28.103
- type: ndcg_at_100
value: 35.073
- type: ndcg_at_1000
value: 38.653
- type: ndcg_at_3
value: 23.345
- type: ndcg_at_5
value: 24.828
- type: precision_at_1
value: 26.906000000000002
- type: precision_at_10
value: 8.547
- type: precision_at_100
value: 1.617
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 17.025000000000002
- type: precision_at_5
value: 12.834000000000001
- type: recall_at_1
value: 12.386999999999999
- type: recall_at_10
value: 33.306999999999995
- type: recall_at_100
value: 57.516
- type: recall_at_1000
value: 77.74799999999999
- type: recall_at_3
value: 21.433
- type: recall_at_5
value: 25.915
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.322
- type: map_at_10
value: 20.469
- type: map_at_100
value: 28.638
- type: map_at_1000
value: 30.433
- type: map_at_3
value: 14.802000000000001
- type: map_at_5
value: 17.297
- type: mrr_at_1
value: 68.75
- type: mrr_at_10
value: 76.29599999999999
- type: mrr_at_100
value: 76.62400000000001
- type: mrr_at_1000
value: 76.633
- type: mrr_at_3
value: 75.083
- type: mrr_at_5
value: 75.771
- type: ndcg_at_1
value: 54.87499999999999
- type: ndcg_at_10
value: 41.185
- type: ndcg_at_100
value: 46.400000000000006
- type: ndcg_at_1000
value: 54.223
- type: ndcg_at_3
value: 45.489000000000004
- type: ndcg_at_5
value: 43.161
- type: precision_at_1
value: 68.75
- type: precision_at_10
value: 32.300000000000004
- type: precision_at_100
value: 10.607999999999999
- type: precision_at_1000
value: 2.237
- type: precision_at_3
value: 49.083
- type: precision_at_5
value: 41.6
- type: recall_at_1
value: 9.322
- type: recall_at_10
value: 25.696
- type: recall_at_100
value: 52.898
- type: recall_at_1000
value: 77.281
- type: recall_at_3
value: 15.943
- type: recall_at_5
value: 19.836000000000002
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.650000000000006
- type: f1
value: 43.528467245539396
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.56
- type: map_at_10
value: 76.767
- type: map_at_100
value: 77.054
- type: map_at_1000
value: 77.068
- type: map_at_3
value: 75.29299999999999
- type: map_at_5
value: 76.24
- type: mrr_at_1
value: 71.842
- type: mrr_at_10
value: 81.459
- type: mrr_at_100
value: 81.58800000000001
- type: mrr_at_1000
value: 81.59100000000001
- type: mrr_at_3
value: 80.188
- type: mrr_at_5
value: 81.038
- type: ndcg_at_1
value: 71.842
- type: ndcg_at_10
value: 81.51899999999999
- type: ndcg_at_100
value: 82.544
- type: ndcg_at_1000
value: 82.829
- type: ndcg_at_3
value: 78.92
- type: ndcg_at_5
value: 80.406
- type: precision_at_1
value: 71.842
- type: precision_at_10
value: 10.066
- type: precision_at_100
value: 1.076
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.703000000000003
- type: precision_at_5
value: 19.301
- type: recall_at_1
value: 66.56
- type: recall_at_10
value: 91.55
- type: recall_at_100
value: 95.67099999999999
- type: recall_at_1000
value: 97.539
- type: recall_at_3
value: 84.46900000000001
- type: recall_at_5
value: 88.201
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.087
- type: map_at_10
value: 32.830999999999996
- type: map_at_100
value: 34.814
- type: map_at_1000
value: 34.999
- type: map_at_3
value: 28.198
- type: map_at_5
value: 30.779
- type: mrr_at_1
value: 38.889
- type: mrr_at_10
value: 48.415
- type: mrr_at_100
value: 49.187
- type: mrr_at_1000
value: 49.226
- type: mrr_at_3
value: 45.705
- type: mrr_at_5
value: 47.225
- type: ndcg_at_1
value: 38.889
- type: ndcg_at_10
value: 40.758
- type: ndcg_at_100
value: 47.671
- type: ndcg_at_1000
value: 50.744
- type: ndcg_at_3
value: 36.296
- type: ndcg_at_5
value: 37.852999999999994
- type: precision_at_1
value: 38.889
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.8499999999999999
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 24.126
- type: precision_at_5
value: 18.21
- type: recall_at_1
value: 20.087
- type: recall_at_10
value: 48.042
- type: recall_at_100
value: 73.493
- type: recall_at_1000
value: 91.851
- type: recall_at_3
value: 32.694
- type: recall_at_5
value: 39.099000000000004
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 56.99999999999999
- type: map_at_100
value: 57.914
- type: map_at_1000
value: 57.984
- type: map_at_3
value: 53.900999999999996
- type: map_at_5
value: 55.827000000000005
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.955
- type: mrr_at_100
value: 82.164
- type: mrr_at_1000
value: 82.173
- type: mrr_at_3
value: 80.963
- type: mrr_at_5
value: 81.574
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 65.75
- type: ndcg_at_100
value: 68.949
- type: ndcg_at_1000
value: 70.342
- type: ndcg_at_3
value: 61.29
- type: ndcg_at_5
value: 63.747
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 13.571
- type: precision_at_100
value: 1.6070000000000002
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 38.663
- type: precision_at_5
value: 25.136999999999997
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 67.853
- type: recall_at_100
value: 80.365
- type: recall_at_1000
value: 89.629
- type: recall_at_3
value: 57.995
- type: recall_at_5
value: 62.843
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.95200000000001
- type: ap
value: 80.73847277002109
- type: f1
value: 85.92406135678594
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.916999999999998
- type: map_at_10
value: 33.23
- type: map_at_100
value: 34.427
- type: map_at_1000
value: 34.477000000000004
- type: map_at_3
value: 29.292
- type: map_at_5
value: 31.6
- type: mrr_at_1
value: 21.547
- type: mrr_at_10
value: 33.839999999999996
- type: mrr_at_100
value: 34.979
- type: mrr_at_1000
value: 35.022999999999996
- type: mrr_at_3
value: 29.988
- type: mrr_at_5
value: 32.259
- type: ndcg_at_1
value: 21.519
- type: ndcg_at_10
value: 40.209
- type: ndcg_at_100
value: 45.954
- type: ndcg_at_1000
value: 47.187
- type: ndcg_at_3
value: 32.227
- type: ndcg_at_5
value: 36.347
- type: precision_at_1
value: 21.519
- type: precision_at_10
value: 6.447
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.877999999999998
- type: precision_at_5
value: 10.404
- type: recall_at_1
value: 20.916999999999998
- type: recall_at_10
value: 61.7
- type: recall_at_100
value: 88.202
- type: recall_at_1000
value: 97.588
- type: recall_at_3
value: 40.044999999999995
- type: recall_at_5
value: 49.964999999999996
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.02781577747379
- type: f1
value: 92.83653922768306
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.04286365709075
- type: f1
value: 53.43867658525793
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.47276395427035
- type: f1
value: 69.77017399597342
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.3819771351715
- type: f1
value: 76.8484533435409
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.16515993299593
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.77145323314774
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.53637706586391
- type: mrr
value: 33.7312926288863
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.063999999999999
- type: map_at_10
value: 15.046999999999999
- type: map_at_100
value: 19.116
- type: map_at_1000
value: 20.702
- type: map_at_3
value: 10.932
- type: map_at_5
value: 12.751999999999999
- type: mrr_at_1
value: 50.464
- type: mrr_at_10
value: 58.189
- type: mrr_at_100
value: 58.733999999999995
- type: mrr_at_1000
value: 58.769000000000005
- type: mrr_at_3
value: 56.24400000000001
- type: mrr_at_5
value: 57.68299999999999
- type: ndcg_at_1
value: 48.142
- type: ndcg_at_10
value: 37.897
- type: ndcg_at_100
value: 35.264
- type: ndcg_at_1000
value: 44.033
- type: ndcg_at_3
value: 42.967
- type: ndcg_at_5
value: 40.815
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 28.235
- type: precision_at_100
value: 8.994
- type: precision_at_1000
value: 2.218
- type: precision_at_3
value: 40.041
- type: precision_at_5
value: 35.046
- type: recall_at_1
value: 7.063999999999999
- type: recall_at_10
value: 18.598
- type: recall_at_100
value: 35.577999999999996
- type: recall_at_1000
value: 67.43
- type: recall_at_3
value: 11.562999999999999
- type: recall_at_5
value: 14.771
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.046
- type: map_at_10
value: 44.808
- type: map_at_100
value: 45.898
- type: map_at_1000
value: 45.927
- type: map_at_3
value: 40.19
- type: map_at_5
value: 42.897
- type: mrr_at_1
value: 32.706
- type: mrr_at_10
value: 47.275
- type: mrr_at_100
value: 48.075
- type: mrr_at_1000
value: 48.095
- type: mrr_at_3
value: 43.463
- type: mrr_at_5
value: 45.741
- type: ndcg_at_1
value: 32.706
- type: ndcg_at_10
value: 52.835
- type: ndcg_at_100
value: 57.345
- type: ndcg_at_1000
value: 57.985
- type: ndcg_at_3
value: 44.171
- type: ndcg_at_5
value: 48.661
- type: precision_at_1
value: 32.706
- type: precision_at_10
value: 8.895999999999999
- type: precision_at_100
value: 1.143
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.238999999999997
- type: precision_at_5
value: 14.728
- type: recall_at_1
value: 29.046
- type: recall_at_10
value: 74.831
- type: recall_at_100
value: 94.192
- type: recall_at_1000
value: 98.897
- type: recall_at_3
value: 52.37500000000001
- type: recall_at_5
value: 62.732
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.38799999999999
- type: map_at_10
value: 84.315
- type: map_at_100
value: 84.955
- type: map_at_1000
value: 84.971
- type: map_at_3
value: 81.33399999999999
- type: map_at_5
value: 83.21300000000001
- type: mrr_at_1
value: 81.03
- type: mrr_at_10
value: 87.395
- type: mrr_at_100
value: 87.488
- type: mrr_at_1000
value: 87.48899999999999
- type: mrr_at_3
value: 86.41499999999999
- type: mrr_at_5
value: 87.074
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.151
- type: ndcg_at_100
value: 89.38199999999999
- type: ndcg_at_1000
value: 89.479
- type: ndcg_at_3
value: 85.24000000000001
- type: ndcg_at_5
value: 86.856
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.372
- type: precision_at_100
value: 1.526
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.217
- type: precision_at_5
value: 24.502
- type: recall_at_1
value: 70.38799999999999
- type: recall_at_10
value: 95.452
- type: recall_at_100
value: 99.59700000000001
- type: recall_at_1000
value: 99.988
- type: recall_at_3
value: 87.11
- type: recall_at_5
value: 91.662
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 59.334991029213235
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.586500854616666
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.153
- type: map_at_10
value: 14.277000000000001
- type: map_at_100
value: 16.922
- type: map_at_1000
value: 17.302999999999997
- type: map_at_3
value: 9.961
- type: map_at_5
value: 12.257
- type: mrr_at_1
value: 25.4
- type: mrr_at_10
value: 37.458000000000006
- type: mrr_at_100
value: 38.681
- type: mrr_at_1000
value: 38.722
- type: mrr_at_3
value: 34.1
- type: mrr_at_5
value: 36.17
- type: ndcg_at_1
value: 25.4
- type: ndcg_at_10
value: 23.132
- type: ndcg_at_100
value: 32.908
- type: ndcg_at_1000
value: 38.754
- type: ndcg_at_3
value: 21.82
- type: ndcg_at_5
value: 19.353
- type: precision_at_1
value: 25.4
- type: precision_at_10
value: 12.1
- type: precision_at_100
value: 2.628
- type: precision_at_1000
value: 0.402
- type: precision_at_3
value: 20.732999999999997
- type: precision_at_5
value: 17.34
- type: recall_at_1
value: 5.153
- type: recall_at_10
value: 24.54
- type: recall_at_100
value: 53.293
- type: recall_at_1000
value: 81.57
- type: recall_at_3
value: 12.613
- type: recall_at_5
value: 17.577
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.86284404925333
- type: cos_sim_spearman
value: 78.85870555294795
- type: euclidean_pearson
value: 82.20105295276093
- type: euclidean_spearman
value: 78.92125617009592
- type: manhattan_pearson
value: 82.15840025289069
- type: manhattan_spearman
value: 78.85955732900803
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.98747423389027
- type: cos_sim_spearman
value: 75.71298531799367
- type: euclidean_pearson
value: 81.59709559192291
- type: euclidean_spearman
value: 75.40622749225653
- type: manhattan_pearson
value: 81.55553547608804
- type: manhattan_spearman
value: 75.39380235424899
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.76861330695503
- type: cos_sim_spearman
value: 85.72991921531624
- type: euclidean_pearson
value: 84.84504307397536
- type: euclidean_spearman
value: 86.02679162824732
- type: manhattan_pearson
value: 84.79969439220142
- type: manhattan_spearman
value: 85.99238837291625
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.31929747511796
- type: cos_sim_spearman
value: 81.50806522502528
- type: euclidean_pearson
value: 82.93936686512777
- type: euclidean_spearman
value: 81.54403447993224
- type: manhattan_pearson
value: 82.89696981900828
- type: manhattan_spearman
value: 81.52817825470865
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.14413295332908
- type: cos_sim_spearman
value: 88.81032027008195
- type: euclidean_pearson
value: 88.19205563407645
- type: euclidean_spearman
value: 88.89738339479216
- type: manhattan_pearson
value: 88.11075942004189
- type: manhattan_spearman
value: 88.8297061675564
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.15980075557017
- type: cos_sim_spearman
value: 83.81896308594801
- type: euclidean_pearson
value: 83.11195254311338
- type: euclidean_spearman
value: 84.10479481755407
- type: manhattan_pearson
value: 83.13915225100556
- type: manhattan_spearman
value: 84.09895591027859
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.93669480147919
- type: cos_sim_spearman
value: 87.89861394614361
- type: euclidean_pearson
value: 88.37316413202339
- type: euclidean_spearman
value: 88.18033817842569
- type: manhattan_pearson
value: 88.39427578879469
- type: manhattan_spearman
value: 88.09185009236847
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.62215083348255
- type: cos_sim_spearman
value: 67.33243665716736
- type: euclidean_pearson
value: 67.60871701996284
- type: euclidean_spearman
value: 66.75929225238659
- type: manhattan_pearson
value: 67.63907838970992
- type: manhattan_spearman
value: 66.79313656754846
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.65549191934764
- type: cos_sim_spearman
value: 85.73266847750143
- type: euclidean_pearson
value: 85.75609932254318
- type: euclidean_spearman
value: 85.9452287759371
- type: manhattan_pearson
value: 85.69717413063573
- type: manhattan_spearman
value: 85.86546318377046
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.08164129085783
- type: mrr
value: 96.2877273416489
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.09400000000001
- type: map_at_10
value: 71.712
- type: map_at_100
value: 72.128
- type: map_at_1000
value: 72.14399999999999
- type: map_at_3
value: 68.93
- type: map_at_5
value: 70.694
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 72.572
- type: mrr_at_100
value: 72.842
- type: mrr_at_1000
value: 72.856
- type: mrr_at_3
value: 70.44399999999999
- type: mrr_at_5
value: 71.744
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 76.178
- type: ndcg_at_100
value: 77.887
- type: ndcg_at_1000
value: 78.227
- type: ndcg_at_3
value: 71.367
- type: ndcg_at_5
value: 73.938
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.033
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 27.667
- type: precision_at_5
value: 18.4
- type: recall_at_1
value: 62.09400000000001
- type: recall_at_10
value: 89.022
- type: recall_at_100
value: 96.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 75.922
- type: recall_at_5
value: 82.428
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82178217821782
- type: cos_sim_ap
value: 95.71282508220798
- type: cos_sim_f1
value: 90.73120494335737
- type: cos_sim_precision
value: 93.52441613588111
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.73960396039604
- type: dot_ap
value: 92.98534606529098
- type: dot_f1
value: 86.83024536805209
- type: dot_precision
value: 86.96088264794383
- type: dot_recall
value: 86.7
- type: euclidean_accuracy
value: 99.82475247524752
- type: euclidean_ap
value: 95.72927039014849
- type: euclidean_f1
value: 90.89974293059126
- type: euclidean_precision
value: 93.54497354497354
- type: euclidean_recall
value: 88.4
- type: manhattan_accuracy
value: 99.82574257425742
- type: manhattan_ap
value: 95.72142177390405
- type: manhattan_f1
value: 91.00152516522625
- type: manhattan_precision
value: 92.55429162357808
- type: manhattan_recall
value: 89.5
- type: max_accuracy
value: 99.82574257425742
- type: max_ap
value: 95.72927039014849
- type: max_f1
value: 91.00152516522625
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.63957663468679
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.003307257923964
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.005825525863905
- type: mrr
value: 53.854683919022165
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.503611569974098
- type: cos_sim_spearman
value: 31.17155564248449
- type: dot_pearson
value: 26.740428413981306
- type: dot_spearman
value: 26.55727635469746
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23600000000000002
- type: map_at_10
value: 1.7670000000000001
- type: map_at_100
value: 10.208
- type: map_at_1000
value: 25.997999999999998
- type: map_at_3
value: 0.605
- type: map_at_5
value: 0.9560000000000001
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 90.167
- type: mrr_at_100
value: 90.167
- type: mrr_at_1000
value: 90.167
- type: mrr_at_3
value: 89.667
- type: mrr_at_5
value: 90.167
- type: ndcg_at_1
value: 77.0
- type: ndcg_at_10
value: 68.783
- type: ndcg_at_100
value: 54.196
- type: ndcg_at_1000
value: 52.077
- type: ndcg_at_3
value: 71.642
- type: ndcg_at_5
value: 70.45700000000001
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 73.0
- type: precision_at_100
value: 55.48
- type: precision_at_1000
value: 23.102
- type: precision_at_3
value: 76.0
- type: precision_at_5
value: 74.8
- type: recall_at_1
value: 0.23600000000000002
- type: recall_at_10
value: 1.9869999999999999
- type: recall_at_100
value: 13.749
- type: recall_at_1000
value: 50.157
- type: recall_at_3
value: 0.633
- type: recall_at_5
value: 1.0290000000000001
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.437
- type: map_at_10
value: 8.791
- type: map_at_100
value: 15.001999999999999
- type: map_at_1000
value: 16.549
- type: map_at_3
value: 3.8080000000000003
- type: map_at_5
value: 5.632000000000001
- type: mrr_at_1
value: 20.408
- type: mrr_at_10
value: 36.96
- type: mrr_at_100
value: 37.912
- type: mrr_at_1000
value: 37.912
- type: mrr_at_3
value: 29.592000000000002
- type: mrr_at_5
value: 34.489999999999995
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 22.554
- type: ndcg_at_100
value: 35.197
- type: ndcg_at_1000
value: 46.58
- type: ndcg_at_3
value: 20.285
- type: ndcg_at_5
value: 21.924
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 21.837
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 1.537
- type: precision_at_3
value: 21.769
- type: precision_at_5
value: 23.673
- type: recall_at_1
value: 1.437
- type: recall_at_10
value: 16.314999999999998
- type: recall_at_100
value: 47.635
- type: recall_at_1000
value: 82.963
- type: recall_at_3
value: 4.955
- type: recall_at_5
value: 8.805
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.6128
- type: ap
value: 14.279639861175664
- type: f1
value: 54.922292491204274
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.01188455008489
- type: f1
value: 57.377953019225515
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 52.306769136544254
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.64701674912082
- type: cos_sim_ap
value: 72.46600945328552
- type: cos_sim_f1
value: 67.96572367648784
- type: cos_sim_precision
value: 61.21801649397336
- type: cos_sim_recall
value: 76.38522427440633
- type: dot_accuracy
value: 82.33295583238957
- type: dot_ap
value: 62.54843443071716
- type: dot_f1
value: 60.38378562507096
- type: dot_precision
value: 52.99980067769583
- type: dot_recall
value: 70.15831134564644
- type: euclidean_accuracy
value: 85.7423854085951
- type: euclidean_ap
value: 72.76873850945174
- type: euclidean_f1
value: 68.23556960543262
- type: euclidean_precision
value: 61.3344559040202
- type: euclidean_recall
value: 76.88654353562005
- type: manhattan_accuracy
value: 85.74834594981225
- type: manhattan_ap
value: 72.66825372446462
- type: manhattan_f1
value: 68.21539194662853
- type: manhattan_precision
value: 62.185056472632496
- type: manhattan_recall
value: 75.54089709762533
- type: max_accuracy
value: 85.74834594981225
- type: max_ap
value: 72.76873850945174
- type: max_f1
value: 68.23556960543262
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.73171110334924
- type: cos_sim_ap
value: 85.51855542063649
- type: cos_sim_f1
value: 77.95706775700934
- type: cos_sim_precision
value: 74.12524298805887
- type: cos_sim_recall
value: 82.20665229442562
- type: dot_accuracy
value: 86.94842240074514
- type: dot_ap
value: 80.90995345771762
- type: dot_f1
value: 74.20765027322403
- type: dot_precision
value: 70.42594385285575
- type: dot_recall
value: 78.41854019094548
- type: euclidean_accuracy
value: 88.73753250281368
- type: euclidean_ap
value: 85.54712254033734
- type: euclidean_f1
value: 78.07565728654365
- type: euclidean_precision
value: 75.1120597652081
- type: euclidean_recall
value: 81.282722513089
- type: manhattan_accuracy
value: 88.72588970388482
- type: manhattan_ap
value: 85.52118291594071
- type: manhattan_f1
value: 78.04428724070593
- type: manhattan_precision
value: 74.83219105490002
- type: manhattan_recall
value: 81.54450261780106
- type: max_accuracy
value: 88.73753250281368
- type: max_ap
value: 85.54712254033734
- type: max_f1
value: 78.07565728654365
language:
- en
license: mit
---
# gte-base
General Text Embeddings (GTE) model, introduced in [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281).
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and are currently available in three sizes: [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs covering a wide range of domains and scenarios, which enables them to be applied to various downstream text embedding tasks, including **information retrieval**, **semantic textual similarity**, and **text reranking**.
## Metrics
We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |
## Usage
Use with `transformers`:
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool the token embeddings, zeroing out padding positions first
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-base")
model = AutoModel.from_pretrained("thenlper/gte-base")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings to unit length
embeddings = F.normalize(embeddings, p=2, dim=1)
# Similarity of the first text against the other three; with unit-norm
# embeddings this is cosine similarity, scaled by 100 for readability
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
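The printed scores are scaled cosine similarities: since the embeddings are L2-normalized above, the dot product of the query embedding with each candidate embedding equals their cosine similarity, multiplied by 100 for readability.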
Use with `sentence-transformers`:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('thenlper/gte-base')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
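Since `encode` returns pooled sentence embeddings, `cos_sim` yields a single cosine-similarity score for the pair; values closer to 1 indicate closer meaning.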
### Limitation
This model handles English text only, and any input longer than 512 tokens is truncated to that length, as illustrated below.
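A minimal sketch of this truncation behavior (the long input below is hypothetical; the tokenizer call mirrors the usage example above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-base")

# Hypothetical input well beyond the 512-token limit
long_text = "embedding " * 1000
ids = tokenizer(long_text, max_length=512, truncation=True)["input_ids"]
print(len(ids))  # 512 -- everything past the limit is silently discarded
```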
### Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@misc{li2023general,
title={Towards General Text Embeddings with Multi-stage Contrastive Learning},
author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang},
year={2023},
eprint={2308.03281},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
soBeauty/20231008-9-distilbert-base-multilingual-cased-new
|
soBeauty
| 2023-10-09T14:31:39Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-09T10:09:27Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 20231008-9-distilbert-base-multilingual-cased-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20231008-9-distilbert-base-multilingual-cased-new
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5609
- Loss: 1.8042
## Model description
More information needed
## Intended uses & limitations
More information needed
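The intended downstream use is not documented; as a minimal sketch, the checkpoint can at least be exercised as a standard fill-mask model (the probe sentence below is hypothetical):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="soBeauty/20231008-9-distilbert-base-multilingual-cased-new",
)

# Hypothetical probe sentence; [MASK] is this tokenizer's mask token
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```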
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal code sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
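As a minimal sketch, these settings map onto the `transformers` `TrainingArguments` as shown below (dataset preparation and the `Trainer` call are omitted; the output directory name is illustrative):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; the Adam betas/epsilon and the
# linear scheduler are the transformers defaults, written out explicitly here.
training_args = TrainingArguments(
    output_dir="20231008-9-distilbert-base-multilingual-cased-new",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```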
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 3.0492 | 1.82 | 200 | 0.3888 | 2.6521 |
| 2.589 | 3.64 | 400 | 0.4582 | 2.4219 |
| 2.35 | 5.45 | 600 | 0.4440 | 2.5098 |
| 2.2866 | 7.27 | 800 | 0.4607 | 2.2587 |
| 2.1603 | 9.09 | 1000 | 0.5060 | 2.1101 |
| 2.0746 | 10.91 | 1200 | 0.5040 | 2.2520 |
| 1.9926 | 12.73 | 1400 | 0.5512 | 1.8800 |
| 1.9296 | 14.55 | 1600 | 0.5682 | 1.8248 |
| 1.9172        | 16.36 | 1800 | 0.5600   | 1.9773          |
| 1.8996 | 18.18 | 2000 | 0.5735 | 1.8932 |
| 1.8936 | 20.0 | 2200 | 0.5609 | 1.8042 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
digvijayky/hier_vq_vae_st
|
digvijayky
| 2023-10-09T14:31:35Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-08-21T16:24:09Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Hierarchical VQ VAE model for analysis of imaging-based ST data
<!-- Provide a quick summary of what the model is/does. -->
Pretrained checkpoints trained on the Vizgen oncology spatial transcriptomics (ST) dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Dig Vijay Kumar Yarlagadda
- **Model type:** VQ-VAE
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/digvijayky/hier_vq_vae_st
- **Paper [optional]:** https://openaccess.thecvf.com/content/ICCV2023W/BIC/html/Yarlagadda_Discrete_Representation_Learning_for_Modeling_Imaging-Based_Spatial_Transcriptomics_Data_ICCVW_2023_paper.html
## Uses
Analysis of imaging-based spatial transcriptomics (ST) datasets such as MERFISH, SEQFISH, and Xenium.
### Direct Use
Analysis of the Vizgen oncology dataset (see the download sketch below).
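A minimal sketch for fetching the pretrained checkpoints (this assumes only the standard `huggingface_hub` download API; the model-specific loading code lives in the repository linked above):
```python
from huggingface_hub import snapshot_download

# Download all checkpoint files from this repo into the local HF cache
local_dir = snapshot_download(repo_id="digvijayky/hier_vq_vae_st")
print(local_dir)
```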
### Downstream Use [optional]
Fine-tuning on any imaging-based ST dataset.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
- Analysis of spot-based ST data
- Mapping of scRNA-seq data to ST data
### Compute Infrastructure
The model was trained on 4 A100 GPUs.
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**APA:**
If you use this checkpoint or model, please cite:
Dig Vijay Kumar Yarlagadda, Joan Massague, and Christina S. Leslie. "Discrete Representation Learning for Modeling Imaging-Based Spatial Transcriptomics Data." In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, BioImage Computing (BIC) Workshop, 2023.
|
vectoriseai/gte-large
|
vectoriseai
| 2023-10-09T14:08:30Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"Sentence Transformers",
"en",
"arxiv:2308.03281",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-09T13:52:17Z |
---
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
model-index:
- name: gte-large
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.62686567164178
- type: ap
value: 34.46944126809772
- type: f1
value: 66.23684353950857
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.51805
- type: ap
value: 89.49842783330848
- type: f1
value: 92.51112169431808
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.074
- type: f1
value: 48.44785682572955
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.077
- type: map_at_10
value: 48.153
- type: map_at_100
value: 48.963
- type: map_at_1000
value: 48.966
- type: map_at_3
value: 43.184
- type: map_at_5
value: 46.072
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.54
- type: mrr_at_100
value: 49.335
- type: mrr_at_1000
value: 49.338
- type: mrr_at_3
value: 43.563
- type: mrr_at_5
value: 46.383
- type: ndcg_at_1
value: 32.077
- type: ndcg_at_10
value: 57.158
- type: ndcg_at_100
value: 60.324999999999996
- type: ndcg_at_1000
value: 60.402
- type: ndcg_at_3
value: 46.934
- type: ndcg_at_5
value: 52.158
- type: precision_at_1
value: 32.077
- type: precision_at_10
value: 8.591999999999999
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.275000000000002
- type: precision_at_5
value: 14.111
- type: recall_at_1
value: 32.077
- type: recall_at_10
value: 85.917
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 57.824
- type: recall_at_5
value: 70.555
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.619246083417295
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.3574067664688
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.06359661829253
- type: mrr
value: 76.15596007562766
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 90.25407547368691
- type: cos_sim_spearman
value: 88.65081514968477
- type: euclidean_pearson
value: 88.14857116664494
- type: euclidean_spearman
value: 88.50683596540692
- type: manhattan_pearson
value: 87.9654797992225
- type: manhattan_spearman
value: 88.21164851646908
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.05844155844157
- type: f1
value: 86.01555597681825
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10510519739522
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.84689960264385
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.800000000000004
- type: map_at_10
value: 44.857
- type: map_at_100
value: 46.512
- type: map_at_1000
value: 46.635
- type: map_at_3
value: 41.062
- type: map_at_5
value: 43.126
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 50.879
- type: mrr_at_100
value: 51.605000000000004
- type: mrr_at_1000
value: 51.641000000000005
- type: mrr_at_3
value: 48.14
- type: mrr_at_5
value: 49.835
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 51.819
- type: ndcg_at_100
value: 57.318999999999996
- type: ndcg_at_1000
value: 58.955999999999996
- type: ndcg_at_3
value: 46.409
- type: ndcg_at_5
value: 48.825
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 10.072000000000001
- type: precision_at_100
value: 1.625
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 22.556
- type: precision_at_5
value: 16.309
- type: recall_at_1
value: 32.800000000000004
- type: recall_at_10
value: 65.078
- type: recall_at_100
value: 87.491
- type: recall_at_1000
value: 97.514
- type: recall_at_3
value: 49.561
- type: recall_at_5
value: 56.135999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.614
- type: map_at_10
value: 43.578
- type: map_at_100
value: 44.897
- type: map_at_1000
value: 45.023
- type: map_at_3
value: 40.282000000000004
- type: map_at_5
value: 42.117
- type: mrr_at_1
value: 40.510000000000005
- type: mrr_at_10
value: 49.428
- type: mrr_at_100
value: 50.068999999999996
- type: mrr_at_1000
value: 50.111000000000004
- type: mrr_at_3
value: 47.176
- type: mrr_at_5
value: 48.583999999999996
- type: ndcg_at_1
value: 40.510000000000005
- type: ndcg_at_10
value: 49.478
- type: ndcg_at_100
value: 53.852
- type: ndcg_at_1000
value: 55.782
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 47.19
- type: precision_at_1
value: 40.510000000000005
- type: precision_at_10
value: 9.363000000000001
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 21.741
- type: precision_at_5
value: 15.465000000000002
- type: recall_at_1
value: 32.614
- type: recall_at_10
value: 59.782000000000004
- type: recall_at_100
value: 78.012
- type: recall_at_1000
value: 90.319
- type: recall_at_3
value: 46.825
- type: recall_at_5
value: 52.688
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.266000000000005
- type: map_at_10
value: 53.756
- type: map_at_100
value: 54.809
- type: map_at_1000
value: 54.855
- type: map_at_3
value: 50.073
- type: map_at_5
value: 52.293
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 57.116
- type: mrr_at_100
value: 57.767
- type: mrr_at_1000
value: 57.791000000000004
- type: mrr_at_3
value: 54.461999999999996
- type: mrr_at_5
value: 56.092
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 60.092
- type: ndcg_at_100
value: 64.034
- type: ndcg_at_1000
value: 64.937
- type: ndcg_at_3
value: 54.071000000000005
- type: ndcg_at_5
value: 57.254000000000005
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.799
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.368000000000002
- type: precision_at_5
value: 16.89
- type: recall_at_1
value: 40.266000000000005
- type: recall_at_10
value: 75.41499999999999
- type: recall_at_100
value: 92.01700000000001
- type: recall_at_1000
value: 98.379
- type: recall_at_3
value: 59.476
- type: recall_at_5
value: 67.297
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.589
- type: map_at_10
value: 37.755
- type: map_at_100
value: 38.881
- type: map_at_1000
value: 38.954
- type: map_at_3
value: 34.759
- type: map_at_5
value: 36.544
- type: mrr_at_1
value: 30.734
- type: mrr_at_10
value: 39.742
- type: mrr_at_100
value: 40.774
- type: mrr_at_1000
value: 40.824
- type: mrr_at_3
value: 37.137
- type: mrr_at_5
value: 38.719
- type: ndcg_at_1
value: 30.734
- type: ndcg_at_10
value: 42.978
- type: ndcg_at_100
value: 48.309000000000005
- type: ndcg_at_1000
value: 50.068
- type: ndcg_at_3
value: 37.361
- type: ndcg_at_5
value: 40.268
- type: precision_at_1
value: 30.734
- type: precision_at_10
value: 6.565
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 15.744
- type: precision_at_5
value: 11.096
- type: recall_at_1
value: 28.589
- type: recall_at_10
value: 57.126999999999995
- type: recall_at_100
value: 81.051
- type: recall_at_1000
value: 94.027
- type: recall_at_3
value: 42.045
- type: recall_at_5
value: 49.019
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.5
- type: map_at_10
value: 27.950999999999997
- type: map_at_100
value: 29.186
- type: map_at_1000
value: 29.298000000000002
- type: map_at_3
value: 25.141000000000002
- type: map_at_5
value: 26.848
- type: mrr_at_1
value: 22.637
- type: mrr_at_10
value: 32.572
- type: mrr_at_100
value: 33.472
- type: mrr_at_1000
value: 33.533
- type: mrr_at_3
value: 29.747
- type: mrr_at_5
value: 31.482
- type: ndcg_at_1
value: 22.637
- type: ndcg_at_10
value: 33.73
- type: ndcg_at_100
value: 39.568
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.505999999999997
- type: ndcg_at_5
value: 31.255
- type: precision_at_1
value: 22.637
- type: precision_at_10
value: 6.281000000000001
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 13.847000000000001
- type: precision_at_5
value: 10.224
- type: recall_at_1
value: 18.5
- type: recall_at_10
value: 46.744
- type: recall_at_100
value: 72.072
- type: recall_at_1000
value: 91.03999999999999
- type: recall_at_3
value: 32.551
- type: recall_at_5
value: 39.533
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.602
- type: map_at_10
value: 42.18
- type: map_at_100
value: 43.6
- type: map_at_1000
value: 43.704
- type: map_at_3
value: 38.413000000000004
- type: map_at_5
value: 40.626
- type: mrr_at_1
value: 37.344
- type: mrr_at_10
value: 47.638000000000005
- type: mrr_at_100
value: 48.485
- type: mrr_at_1000
value: 48.52
- type: mrr_at_3
value: 44.867000000000004
- type: mrr_at_5
value: 46.566
- type: ndcg_at_1
value: 37.344
- type: ndcg_at_10
value: 48.632
- type: ndcg_at_100
value: 54.215
- type: ndcg_at_1000
value: 55.981
- type: ndcg_at_3
value: 42.681999999999995
- type: ndcg_at_5
value: 45.732
- type: precision_at_1
value: 37.344
- type: precision_at_10
value: 8.932
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 20.276
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 30.602
- type: recall_at_10
value: 62.273
- type: recall_at_100
value: 85.12100000000001
- type: recall_at_1000
value: 96.439
- type: recall_at_3
value: 45.848
- type: recall_at_5
value: 53.615
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.952
- type: map_at_10
value: 35.177
- type: map_at_100
value: 36.59
- type: map_at_1000
value: 36.703
- type: map_at_3
value: 31.261
- type: map_at_5
value: 33.222
- type: mrr_at_1
value: 29.337999999999997
- type: mrr_at_10
value: 40.152
- type: mrr_at_100
value: 40.963
- type: mrr_at_1000
value: 41.016999999999996
- type: mrr_at_3
value: 36.91
- type: mrr_at_5
value: 38.685
- type: ndcg_at_1
value: 29.337999999999997
- type: ndcg_at_10
value: 41.994
- type: ndcg_at_100
value: 47.587
- type: ndcg_at_1000
value: 49.791000000000004
- type: ndcg_at_3
value: 35.27
- type: ndcg_at_5
value: 38.042
- type: precision_at_1
value: 29.337999999999997
- type: precision_at_10
value: 8.276
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 17.161
- type: precision_at_5
value: 12.671
- type: recall_at_1
value: 23.952
- type: recall_at_10
value: 57.267
- type: recall_at_100
value: 80.886
- type: recall_at_1000
value: 95.611
- type: recall_at_3
value: 38.622
- type: recall_at_5
value: 45.811
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.092083333333335
- type: map_at_10
value: 37.2925
- type: map_at_100
value: 38.57041666666666
- type: map_at_1000
value: 38.68141666666667
- type: map_at_3
value: 34.080000000000005
- type: map_at_5
value: 35.89958333333333
- type: mrr_at_1
value: 31.94758333333333
- type: mrr_at_10
value: 41.51049999999999
- type: mrr_at_100
value: 42.36099999999999
- type: mrr_at_1000
value: 42.4125
- type: mrr_at_3
value: 38.849583333333335
- type: mrr_at_5
value: 40.448249999999994
- type: ndcg_at_1
value: 31.94758333333333
- type: ndcg_at_10
value: 43.17633333333333
- type: ndcg_at_100
value: 48.45241666666668
- type: ndcg_at_1000
value: 50.513999999999996
- type: ndcg_at_3
value: 37.75216666666667
- type: ndcg_at_5
value: 40.393833333333326
- type: precision_at_1
value: 31.94758333333333
- type: precision_at_10
value: 7.688916666666666
- type: precision_at_100
value: 1.2250833333333333
- type: precision_at_1000
value: 0.1595
- type: precision_at_3
value: 17.465999999999998
- type: precision_at_5
value: 12.548083333333333
- type: recall_at_1
value: 27.092083333333335
- type: recall_at_10
value: 56.286583333333326
- type: recall_at_100
value: 79.09033333333333
- type: recall_at_1000
value: 93.27483333333335
- type: recall_at_3
value: 41.35325
- type: recall_at_5
value: 48.072750000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.825
- type: map_at_10
value: 33.723
- type: map_at_100
value: 34.74
- type: map_at_1000
value: 34.824
- type: map_at_3
value: 31.369000000000003
- type: map_at_5
value: 32.533
- type: mrr_at_1
value: 29.293999999999997
- type: mrr_at_10
value: 36.84
- type: mrr_at_100
value: 37.681
- type: mrr_at_1000
value: 37.742
- type: mrr_at_3
value: 34.79
- type: mrr_at_5
value: 35.872
- type: ndcg_at_1
value: 29.293999999999997
- type: ndcg_at_10
value: 38.385999999999996
- type: ndcg_at_100
value: 43.327
- type: ndcg_at_1000
value: 45.53
- type: ndcg_at_3
value: 33.985
- type: ndcg_at_5
value: 35.817
- type: precision_at_1
value: 29.293999999999997
- type: precision_at_10
value: 6.12
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 14.621999999999998
- type: precision_at_5
value: 10.030999999999999
- type: recall_at_1
value: 25.825
- type: recall_at_10
value: 49.647000000000006
- type: recall_at_100
value: 72.32300000000001
- type: recall_at_1000
value: 88.62400000000001
- type: recall_at_3
value: 37.366
- type: recall_at_5
value: 41.957
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.139
- type: map_at_10
value: 26.107000000000003
- type: map_at_100
value: 27.406999999999996
- type: map_at_1000
value: 27.535999999999998
- type: map_at_3
value: 23.445
- type: map_at_5
value: 24.916
- type: mrr_at_1
value: 21.817
- type: mrr_at_10
value: 29.99
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.128
- type: mrr_at_3
value: 27.627000000000002
- type: mrr_at_5
value: 29.005
- type: ndcg_at_1
value: 21.817
- type: ndcg_at_10
value: 31.135
- type: ndcg_at_100
value: 37.108000000000004
- type: ndcg_at_1000
value: 39.965
- type: ndcg_at_3
value: 26.439
- type: ndcg_at_5
value: 28.655
- type: precision_at_1
value: 21.817
- type: precision_at_10
value: 5.757000000000001
- type: precision_at_100
value: 1.036
- type: precision_at_1000
value: 0.147
- type: precision_at_3
value: 12.537
- type: precision_at_5
value: 9.229
- type: recall_at_1
value: 18.139
- type: recall_at_10
value: 42.272999999999996
- type: recall_at_100
value: 68.657
- type: recall_at_1000
value: 88.93799999999999
- type: recall_at_3
value: 29.266
- type: recall_at_5
value: 34.892
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.755000000000003
- type: map_at_10
value: 37.384
- type: map_at_100
value: 38.56
- type: map_at_1000
value: 38.655
- type: map_at_3
value: 34.214
- type: map_at_5
value: 35.96
- type: mrr_at_1
value: 32.369
- type: mrr_at_10
value: 41.625
- type: mrr_at_100
value: 42.449
- type: mrr_at_1000
value: 42.502
- type: mrr_at_3
value: 38.899
- type: mrr_at_5
value: 40.489999999999995
- type: ndcg_at_1
value: 32.369
- type: ndcg_at_10
value: 43.287
- type: ndcg_at_100
value: 48.504999999999995
- type: ndcg_at_1000
value: 50.552
- type: ndcg_at_3
value: 37.549
- type: ndcg_at_5
value: 40.204
- type: precision_at_1
value: 32.369
- type: precision_at_10
value: 7.425
- type: precision_at_100
value: 1.134
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 17.102
- type: precision_at_5
value: 12.107999999999999
- type: recall_at_1
value: 27.755000000000003
- type: recall_at_10
value: 57.071000000000005
- type: recall_at_100
value: 79.456
- type: recall_at_1000
value: 93.54299999999999
- type: recall_at_3
value: 41.298
- type: recall_at_5
value: 48.037
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.855
- type: map_at_10
value: 34.53
- type: map_at_100
value: 36.167
- type: map_at_1000
value: 36.394999999999996
- type: map_at_3
value: 31.037
- type: map_at_5
value: 33.119
- type: mrr_at_1
value: 30.631999999999998
- type: mrr_at_10
value: 39.763999999999996
- type: mrr_at_100
value: 40.77
- type: mrr_at_1000
value: 40.826
- type: mrr_at_3
value: 36.495
- type: mrr_at_5
value: 38.561
- type: ndcg_at_1
value: 30.631999999999998
- type: ndcg_at_10
value: 40.942
- type: ndcg_at_100
value: 47.07
- type: ndcg_at_1000
value: 49.363
- type: ndcg_at_3
value: 35.038000000000004
- type: ndcg_at_5
value: 38.161
- type: precision_at_1
value: 30.631999999999998
- type: precision_at_10
value: 7.983999999999999
- type: precision_at_100
value: 1.6070000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 24.855
- type: recall_at_10
value: 53.291999999999994
- type: recall_at_100
value: 80.283
- type: recall_at_1000
value: 94.309
- type: recall_at_3
value: 37.257
- type: recall_at_5
value: 45.282
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.208
- type: map_at_10
value: 30.512
- type: map_at_100
value: 31.496000000000002
- type: map_at_1000
value: 31.595000000000002
- type: map_at_3
value: 27.904
- type: map_at_5
value: 29.491
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 32.379999999999995
- type: mrr_at_100
value: 33.245000000000005
- type: mrr_at_1000
value: 33.315
- type: mrr_at_3
value: 29.945
- type: mrr_at_5
value: 31.488
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 35.643
- type: ndcg_at_100
value: 40.535
- type: ndcg_at_1000
value: 43.042
- type: ndcg_at_3
value: 30.625000000000004
- type: ndcg_at_5
value: 33.323
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 13.431999999999999
- type: precision_at_5
value: 9.575
- type: recall_at_1
value: 21.208
- type: recall_at_10
value: 49.47
- type: recall_at_100
value: 71.71499999999999
- type: recall_at_1000
value: 90.55499999999999
- type: recall_at_3
value: 36.124
- type: recall_at_5
value: 42.606
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.363
- type: map_at_10
value: 20.312
- type: map_at_100
value: 22.225
- type: map_at_1000
value: 22.411
- type: map_at_3
value: 16.68
- type: map_at_5
value: 18.608
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 37.933
- type: mrr_at_100
value: 38.875
- type: mrr_at_1000
value: 38.911
- type: mrr_at_3
value: 34.387
- type: mrr_at_5
value: 36.51
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 28.82
- type: ndcg_at_100
value: 36.341
- type: ndcg_at_1000
value: 39.615
- type: ndcg_at_3
value: 23.01
- type: ndcg_at_5
value: 25.269000000000002
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 9.153
- type: precision_at_100
value: 1.7319999999999998
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 17.22
- type: precision_at_5
value: 13.629
- type: recall_at_1
value: 11.363
- type: recall_at_10
value: 35.382999999999996
- type: recall_at_100
value: 61.367000000000004
- type: recall_at_1000
value: 79.699
- type: recall_at_3
value: 21.495
- type: recall_at_5
value: 27.42
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.65
- type: map_at_10
value: 20.742
- type: map_at_100
value: 29.614
- type: map_at_1000
value: 31.373
- type: map_at_3
value: 14.667
- type: map_at_5
value: 17.186
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 76.762
- type: mrr_at_100
value: 77.171
- type: mrr_at_1000
value: 77.179
- type: mrr_at_3
value: 75.125
- type: mrr_at_5
value: 76.287
- type: ndcg_at_1
value: 57.62500000000001
- type: ndcg_at_10
value: 42.370999999999995
- type: ndcg_at_100
value: 47.897
- type: ndcg_at_1000
value: 55.393
- type: ndcg_at_3
value: 46.317
- type: ndcg_at_5
value: 43.906
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 33.95
- type: precision_at_100
value: 10.885
- type: precision_at_1000
value: 2.2239999999999998
- type: precision_at_3
value: 49.75
- type: precision_at_5
value: 42.3
- type: recall_at_1
value: 9.65
- type: recall_at_10
value: 26.117
- type: recall_at_100
value: 55.084
- type: recall_at_1000
value: 78.62400000000001
- type: recall_at_3
value: 15.823
- type: recall_at_5
value: 19.652
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.885
- type: f1
value: 42.99567641346983
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.97
- type: map_at_10
value: 80.34599999999999
- type: map_at_100
value: 80.571
- type: map_at_1000
value: 80.584
- type: map_at_3
value: 79.279
- type: map_at_5
value: 79.94
- type: mrr_at_1
value: 76.613
- type: mrr_at_10
value: 85.15700000000001
- type: mrr_at_100
value: 85.249
- type: mrr_at_1000
value: 85.252
- type: mrr_at_3
value: 84.33800000000001
- type: mrr_at_5
value: 84.89
- type: ndcg_at_1
value: 76.613
- type: ndcg_at_10
value: 84.53399999999999
- type: ndcg_at_100
value: 85.359
- type: ndcg_at_1000
value: 85.607
- type: ndcg_at_3
value: 82.76599999999999
- type: ndcg_at_5
value: 83.736
- type: precision_at_1
value: 76.613
- type: precision_at_10
value: 10.206
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 31.913000000000004
- type: precision_at_5
value: 19.769000000000002
- type: recall_at_1
value: 70.97
- type: recall_at_10
value: 92.674
- type: recall_at_100
value: 95.985
- type: recall_at_1000
value: 97.57000000000001
- type: recall_at_3
value: 87.742
- type: recall_at_5
value: 90.28
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.494
- type: map_at_10
value: 36.491
- type: map_at_100
value: 38.550000000000004
- type: map_at_1000
value: 38.726
- type: map_at_3
value: 31.807000000000002
- type: map_at_5
value: 34.299
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.146
- type: mrr_at_100
value: 54.013999999999996
- type: mrr_at_1000
value: 54.044000000000004
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.124
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 44.499
- type: ndcg_at_100
value: 51.629000000000005
- type: ndcg_at_1000
value: 54.367
- type: ndcg_at_3
value: 40.900999999999996
- type: ndcg_at_5
value: 41.737
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.346
- type: precision_at_100
value: 1.974
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 27.366
- type: precision_at_5
value: 19.846
- type: recall_at_1
value: 22.494
- type: recall_at_10
value: 51.156
- type: recall_at_100
value: 77.11200000000001
- type: recall_at_1000
value: 93.44
- type: recall_at_3
value: 36.574
- type: recall_at_5
value: 42.361
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.568999999999996
- type: map_at_10
value: 58.485
- type: map_at_100
value: 59.358999999999995
- type: map_at_1000
value: 59.429
- type: map_at_3
value: 55.217000000000006
- type: map_at_5
value: 57.236
- type: mrr_at_1
value: 77.137
- type: mrr_at_10
value: 82.829
- type: mrr_at_100
value: 83.04599999999999
- type: mrr_at_1000
value: 83.05399999999999
- type: mrr_at_3
value: 81.904
- type: mrr_at_5
value: 82.50800000000001
- type: ndcg_at_1
value: 77.137
- type: ndcg_at_10
value: 67.156
- type: ndcg_at_100
value: 70.298
- type: ndcg_at_1000
value: 71.65700000000001
- type: ndcg_at_3
value: 62.535
- type: ndcg_at_5
value: 65.095
- type: precision_at_1
value: 77.137
- type: precision_at_10
value: 13.911999999999999
- type: precision_at_100
value: 1.6389999999999998
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.572
- type: precision_at_5
value: 25.766
- type: recall_at_1
value: 38.568999999999996
- type: recall_at_10
value: 69.56099999999999
- type: recall_at_100
value: 81.931
- type: recall_at_1000
value: 90.91799999999999
- type: recall_at_3
value: 59.358999999999995
- type: recall_at_5
value: 64.416
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.45600000000002
- type: ap
value: 84.09725115338568
- type: f1
value: 88.41874909080512
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.404999999999998
- type: map_at_10
value: 33.921
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.164
- type: map_at_3
value: 30.043999999999997
- type: map_at_5
value: 32.327
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 34.505
- type: mrr_at_100
value: 35.638999999999996
- type: mrr_at_1000
value: 35.68
- type: mrr_at_3
value: 30.703999999999997
- type: mrr_at_5
value: 32.96
- type: ndcg_at_1
value: 21.963
- type: ndcg_at_10
value: 40.859
- type: ndcg_at_100
value: 46.614
- type: ndcg_at_1000
value: 47.789
- type: ndcg_at_3
value: 33.007999999999996
- type: ndcg_at_5
value: 37.084
- type: precision_at_1
value: 21.963
- type: precision_at_10
value: 6.493
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.155000000000001
- type: precision_at_5
value: 10.544
- type: recall_at_1
value: 21.404999999999998
- type: recall_at_10
value: 62.175000000000004
- type: recall_at_100
value: 88.786
- type: recall_at_1000
value: 97.738
- type: recall_at_3
value: 40.925
- type: recall_at_5
value: 50.722
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.50661194710442
- type: f1
value: 93.30311193153668
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.24669402644778
- type: f1
value: 54.23122108002977
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.61936785474109
- type: f1
value: 70.52644941025565
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.76529926025555
- type: f1
value: 77.26872729322514
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.39450293021839
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.757796879839294
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.62512146657428
- type: mrr
value: 33.84624322066173
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.462
- type: map_at_10
value: 14.947
- type: map_at_100
value: 19.344
- type: map_at_1000
value: 20.933
- type: map_at_3
value: 10.761999999999999
- type: map_at_5
value: 12.744
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.365
- type: mrr_at_100
value: 57.931
- type: mrr_at_1000
value: 57.96
- type: mrr_at_3
value: 54.85
- type: mrr_at_5
value: 56.569
- type: ndcg_at_1
value: 46.129999999999995
- type: ndcg_at_10
value: 38.173
- type: ndcg_at_100
value: 35.983
- type: ndcg_at_1000
value: 44.507000000000005
- type: ndcg_at_3
value: 42.495
- type: ndcg_at_5
value: 41.019
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 28.731
- type: precision_at_100
value: 9.232
- type: precision_at_1000
value: 2.202
- type: precision_at_3
value: 39.628
- type: precision_at_5
value: 35.851
- type: recall_at_1
value: 6.462
- type: recall_at_10
value: 18.968
- type: recall_at_100
value: 37.131
- type: recall_at_1000
value: 67.956
- type: recall_at_3
value: 11.905000000000001
- type: recall_at_5
value: 15.097
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.335
- type: map_at_10
value: 46.611999999999995
- type: map_at_100
value: 47.632000000000005
- type: map_at_1000
value: 47.661
- type: map_at_3
value: 41.876999999999995
- type: map_at_5
value: 44.799
- type: mrr_at_1
value: 34.125
- type: mrr_at_10
value: 49.01
- type: mrr_at_100
value: 49.75
- type: mrr_at_1000
value: 49.768
- type: mrr_at_3
value: 45.153
- type: mrr_at_5
value: 47.589999999999996
- type: ndcg_at_1
value: 34.125
- type: ndcg_at_10
value: 54.777
- type: ndcg_at_100
value: 58.914
- type: ndcg_at_1000
value: 59.521
- type: ndcg_at_3
value: 46.015
- type: ndcg_at_5
value: 50.861000000000004
- type: precision_at_1
value: 34.125
- type: precision_at_10
value: 9.166
- type: precision_at_100
value: 1.149
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 21.147
- type: precision_at_5
value: 15.469
- type: recall_at_1
value: 30.335
- type: recall_at_10
value: 77.194
- type: recall_at_100
value: 94.812
- type: recall_at_1000
value: 99.247
- type: recall_at_3
value: 54.681000000000004
- type: recall_at_5
value: 65.86800000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.62
- type: map_at_10
value: 84.536
- type: map_at_100
value: 85.167
- type: map_at_1000
value: 85.184
- type: map_at_3
value: 81.607
- type: map_at_5
value: 83.423
- type: mrr_at_1
value: 81.36
- type: mrr_at_10
value: 87.506
- type: mrr_at_100
value: 87.601
- type: mrr_at_1000
value: 87.601
- type: mrr_at_3
value: 86.503
- type: mrr_at_5
value: 87.179
- type: ndcg_at_1
value: 81.36
- type: ndcg_at_10
value: 88.319
- type: ndcg_at_100
value: 89.517
- type: ndcg_at_1000
value: 89.60900000000001
- type: ndcg_at_3
value: 85.423
- type: ndcg_at_5
value: 86.976
- type: precision_at_1
value: 81.36
- type: precision_at_10
value: 13.415
- type: precision_at_100
value: 1.529
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.342999999999996
- type: precision_at_5
value: 24.534
- type: recall_at_1
value: 70.62
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 99.624
- type: recall_at_1000
value: 99.991
- type: recall_at_3
value: 87.22
- type: recall_at_5
value: 91.654
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.826438478212744
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.24027467551447
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.997999999999999
- type: map_at_10
value: 14.267
- type: map_at_100
value: 16.843
- type: map_at_1000
value: 17.229
- type: map_at_3
value: 9.834
- type: map_at_5
value: 11.92
- type: mrr_at_1
value: 24.7
- type: mrr_at_10
value: 37.685
- type: mrr_at_100
value: 38.704
- type: mrr_at_1000
value: 38.747
- type: mrr_at_3
value: 34.150000000000006
- type: mrr_at_5
value: 36.075
- type: ndcg_at_1
value: 24.7
- type: ndcg_at_10
value: 23.44
- type: ndcg_at_100
value: 32.617000000000004
- type: ndcg_at_1000
value: 38.628
- type: ndcg_at_3
value: 21.747
- type: ndcg_at_5
value: 19.076
- type: precision_at_1
value: 24.7
- type: precision_at_10
value: 12.47
- type: precision_at_100
value: 2.564
- type: precision_at_1000
value: 0.4
- type: precision_at_3
value: 20.767
- type: precision_at_5
value: 17.06
- type: recall_at_1
value: 4.997999999999999
- type: recall_at_10
value: 25.3
- type: recall_at_100
value: 52.048
- type: recall_at_1000
value: 81.093
- type: recall_at_3
value: 12.642999999999999
- type: recall_at_5
value: 17.312
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.44942006292234
- type: cos_sim_spearman
value: 79.80930790660699
- type: euclidean_pearson
value: 82.93400777494863
- type: euclidean_spearman
value: 80.04664991110705
- type: manhattan_pearson
value: 82.93551681854949
- type: manhattan_spearman
value: 80.03156736837379
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.63574059135726
- type: cos_sim_spearman
value: 76.80552915288186
- type: euclidean_pearson
value: 82.46368529820518
- type: euclidean_spearman
value: 76.60338474719275
- type: manhattan_pearson
value: 82.4558617035968
- type: manhattan_spearman
value: 76.57936082895705
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.24116811084211
- type: cos_sim_spearman
value: 88.10998662068769
- type: euclidean_pearson
value: 87.04961732352689
- type: euclidean_spearman
value: 88.12543945864087
- type: manhattan_pearson
value: 86.9905224528854
- type: manhattan_spearman
value: 88.07827944705546
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.74847296555048
- type: cos_sim_spearman
value: 82.66200957916445
- type: euclidean_pearson
value: 84.48132256004965
- type: euclidean_spearman
value: 82.67915286000596
- type: manhattan_pearson
value: 84.44950477268334
- type: manhattan_spearman
value: 82.63327639173352
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.23056258027053
- type: cos_sim_spearman
value: 88.92791680286955
- type: euclidean_pearson
value: 88.13819235461933
- type: euclidean_spearman
value: 88.87294661361716
- type: manhattan_pearson
value: 88.14212133687899
- type: manhattan_spearman
value: 88.88551854529777
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.64179522732887
- type: cos_sim_spearman
value: 84.25028809903114
- type: euclidean_pearson
value: 83.40175015236979
- type: euclidean_spearman
value: 84.23369296429406
- type: manhattan_pearson
value: 83.43768174261321
- type: manhattan_spearman
value: 84.27855229214734
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.20378955494732
- type: cos_sim_spearman
value: 88.46863559173111
- type: euclidean_pearson
value: 88.8249295811663
- type: euclidean_spearman
value: 88.6312737724905
- type: manhattan_pearson
value: 88.87744466378827
- type: manhattan_spearman
value: 88.82908423767314
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.91342028796086
- type: cos_sim_spearman
value: 69.71495021867864
- type: euclidean_pearson
value: 70.65334330405646
- type: euclidean_spearman
value: 69.4321253472211
- type: manhattan_pearson
value: 70.59743494727465
- type: manhattan_spearman
value: 69.11695509297482
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.42451709766952
- type: cos_sim_spearman
value: 86.07166710670508
- type: euclidean_pearson
value: 86.12711421258899
- type: euclidean_spearman
value: 86.05232086925126
- type: manhattan_pearson
value: 86.15591089932126
- type: manhattan_spearman
value: 86.0890128623439
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.1976344717285
- type: mrr
value: 96.3703145075694
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.511
- type: map_at_10
value: 69.724
- type: map_at_100
value: 70.208
- type: map_at_1000
value: 70.22800000000001
- type: map_at_3
value: 66.986
- type: map_at_5
value: 68.529
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.55
- type: mrr_at_100
value: 70.985
- type: mrr_at_1000
value: 71.004
- type: mrr_at_3
value: 68.611
- type: mrr_at_5
value: 69.728
- type: ndcg_at_1
value: 62.333000000000006
- type: ndcg_at_10
value: 74.265
- type: ndcg_at_100
value: 76.361
- type: ndcg_at_1000
value: 76.82900000000001
- type: ndcg_at_3
value: 69.772
- type: ndcg_at_5
value: 71.94800000000001
- type: precision_at_1
value: 62.333000000000006
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.444000000000003
- type: precision_at_5
value: 18
- type: recall_at_1
value: 59.511
- type: recall_at_10
value: 87.156
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 75.2
- type: recall_at_5
value: 80.661
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81683168316832
- type: cos_sim_ap
value: 95.74716566563774
- type: cos_sim_f1
value: 90.64238745574103
- type: cos_sim_precision
value: 91.7093142272262
- type: cos_sim_recall
value: 89.60000000000001
- type: dot_accuracy
value: 99.69405940594059
- type: dot_ap
value: 91.09013507754594
- type: dot_f1
value: 84.54227113556779
- type: dot_precision
value: 84.58458458458459
- type: dot_recall
value: 84.5
- type: euclidean_accuracy
value: 99.81782178217821
- type: euclidean_ap
value: 95.6324301072609
- type: euclidean_f1
value: 90.58341862845445
- type: euclidean_precision
value: 92.76729559748428
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.81980198019802
- type: manhattan_ap
value: 95.68510494437183
- type: manhattan_f1
value: 90.58945191313342
- type: manhattan_precision
value: 93.79014989293361
- type: manhattan_recall
value: 87.6
- type: max_accuracy
value: 99.81980198019802
- type: max_ap
value: 95.74716566563774
- type: max_f1
value: 90.64238745574103
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.63761899427078
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.572473369697235
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.63000245208579
- type: mrr
value: 54.504193722943725
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.300791939416545
- type: cos_sim_spearman
value: 31.662904057924123
- type: dot_pearson
value: 26.21198530758316
- type: dot_spearman
value: 27.006921548904263
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.197
- type: map_at_10
value: 1.752
- type: map_at_100
value: 10.795
- type: map_at_1000
value: 27.18
- type: map_at_3
value: 0.5890000000000001
- type: map_at_5
value: 0.938
- type: mrr_at_1
value: 74
- type: mrr_at_10
value: 85.833
- type: mrr_at_100
value: 85.833
- type: mrr_at_1000
value: 85.833
- type: mrr_at_3
value: 85.333
- type: mrr_at_5
value: 85.833
- type: ndcg_at_1
value: 69
- type: ndcg_at_10
value: 70.22
- type: ndcg_at_100
value: 55.785
- type: ndcg_at_1000
value: 52.93600000000001
- type: ndcg_at_3
value: 72.084
- type: ndcg_at_5
value: 71.184
- type: precision_at_1
value: 74
- type: precision_at_10
value: 75.2
- type: precision_at_100
value: 57.3
- type: precision_at_1000
value: 23.302
- type: precision_at_3
value: 77.333
- type: precision_at_5
value: 75.6
- type: recall_at_1
value: 0.197
- type: recall_at_10
value: 2.019
- type: recall_at_100
value: 14.257
- type: recall_at_1000
value: 50.922
- type: recall_at_3
value: 0.642
- type: recall_at_5
value: 1.043
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.803
- type: map_at_10
value: 10.407
- type: map_at_100
value: 16.948
- type: map_at_1000
value: 18.424
- type: map_at_3
value: 5.405
- type: map_at_5
value: 6.908
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 50.221000000000004
- type: mrr_at_100
value: 51.388
- type: mrr_at_1000
value: 51.402
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.626
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 25.507
- type: ndcg_at_100
value: 38.296
- type: ndcg_at_1000
value: 49.492000000000004
- type: ndcg_at_3
value: 29.006999999999998
- type: ndcg_at_5
value: 25.979000000000003
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 22.041
- type: precision_at_100
value: 8.02
- type: precision_at_1000
value: 1.567
- type: precision_at_3
value: 28.571
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.803
- type: recall_at_10
value: 16.378
- type: recall_at_100
value: 50.489
- type: recall_at_1000
value: 85.013
- type: recall_at_3
value: 6.505
- type: recall_at_5
value: 9.243
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.55579999999999
- type: ap
value: 14.206982753316227
- type: f1
value: 54.372142814964285
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.57611771363893
- type: f1
value: 56.924172639063144
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 52.82304915719759
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.92716218632653
- type: cos_sim_ap
value: 73.73359122546046
- type: cos_sim_f1
value: 68.42559487116262
- type: cos_sim_precision
value: 64.22124508215691
- type: cos_sim_recall
value: 73.21899736147758
- type: dot_accuracy
value: 80.38981939560112
- type: dot_ap
value: 54.61060862444974
- type: dot_f1
value: 53.45710627400769
- type: dot_precision
value: 44.87638839125761
- type: dot_recall
value: 66.09498680738787
- type: euclidean_accuracy
value: 86.02849138701794
- type: euclidean_ap
value: 73.95673761922404
- type: euclidean_f1
value: 68.6783042394015
- type: euclidean_precision
value: 65.1063829787234
- type: euclidean_recall
value: 72.66490765171504
- type: manhattan_accuracy
value: 85.9808070572808
- type: manhattan_ap
value: 73.9050720058029
- type: manhattan_f1
value: 68.57560618983794
- type: manhattan_precision
value: 63.70839936608558
- type: manhattan_recall
value: 74.24802110817942
- type: max_accuracy
value: 86.02849138701794
- type: max_ap
value: 73.95673761922404
- type: max_f1
value: 68.6783042394015
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.72783017037295
- type: cos_sim_ap
value: 85.52705223340233
- type: cos_sim_f1
value: 77.91659078492079
- type: cos_sim_precision
value: 73.93378032764221
- type: cos_sim_recall
value: 82.35294117647058
- type: dot_accuracy
value: 85.41739434159972
- type: dot_ap
value: 77.17734818118443
- type: dot_f1
value: 71.63473589973144
- type: dot_precision
value: 66.96123719622415
- type: dot_recall
value: 77.00954727440714
- type: euclidean_accuracy
value: 88.68125897465751
- type: euclidean_ap
value: 85.47712213906692
- type: euclidean_f1
value: 77.81419950830664
- type: euclidean_precision
value: 75.37162649733006
- type: euclidean_recall
value: 80.42038805050817
- type: manhattan_accuracy
value: 88.67349710870494
- type: manhattan_ap
value: 85.46506475241955
- type: manhattan_f1
value: 77.87259084890393
- type: manhattan_precision
value: 74.54929577464789
- type: manhattan_recall
value: 81.50600554357868
- type: max_accuracy
value: 88.72783017037295
- type: max_ap
value: 85.52705223340233
- type: max_f1
value: 77.91659078492079
language:
- en
license: mit
---
# gte-large
General Text Embeddings (GTE) model, introduced in [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281).
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and come in three sizes: [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). They are trained on a large-scale corpus of relevance text pairs covering a wide range of domains and scenarios, which makes them applicable to a variety of downstream text-embedding tasks, including **information retrieval**, **semantic textual similarity**, and **text reranking**.
## Metrics
We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |
## Usage
Code example
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
model = AutoModel.from_pretrained("thenlper/gte-large")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
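In this example the first text acts as the query and the remaining three as candidates: the embeddings are mean-pooled over non-padding tokens, L2-normalized, and the candidates are ranked by a scaled dot product, which equals cosine similarity after normalization.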
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('thenlper/gte-large')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
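The sentence-transformers route is usually the simplest, since the pooling configured for the model is applied automatically and `cos_sim` computes the similarity directly.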
### Limitation
This model works exclusively with English text, and any lengthy input will be truncated to a maximum of 512 tokens.
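If you need to embed documents longer than 512 tokens, one common workaround is to split them into 512-token windows and average the window embeddings. The snippet below is a minimal sketch of that approach; the chunk-and-average aggregation is our assumption, not part of the official GTE recipe.
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
model = AutoModel.from_pretrained("thenlper/gte-large")

def embed_long_text(text: str) -> torch.Tensor:
    # Split the input into 512-token windows instead of silently truncating it.
    batch = tokenizer(text, max_length=512, truncation=True, padding=True,
                      return_overflowing_tokens=True, return_tensors="pt")
    batch.pop("overflow_to_sample_mapping")  # bookkeeping key, not a model input
    with torch.no_grad():
        outputs = model(**batch)
    # Mean-pool each window (same pooling as above), then average across windows.
    mask = batch["attention_mask"]
    hidden = outputs.last_hidden_state.masked_fill(~mask[..., None].bool(), 0.0)
    window_embeddings = hidden.sum(dim=1) / mask.sum(dim=1)[..., None]
    return F.normalize(window_embeddings.mean(dim=0, keepdim=True), p=2, dim=1)
```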
### Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@misc{li2023general,
title={Towards General Text Embeddings with Multi-stage Contrastive Learning},
author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang},
year={2023},
eprint={2308.03281},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
petergriger/ppo-SnowballTarget
|
petergriger
| 2023-10-09T14:07:35Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-10-09T14:07:33Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: petergriger/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
muffinnxz/sdxl-lora-index-modern-1
|
muffinnxz
| 2023-10-09T14:05:29Z | 7 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-09T13:23:19Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of index modern style room
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - muffinnxz/sdxl-lora-index-modern-1
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "a photo of index modern style room" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
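This auto-generated card does not include inference code; a minimal sketch using the standard diffusers LoRA-loading API (the sampler settings and output filename here are illustrative assumptions) might look like this:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes the fp16-fixed VAE was used for training, so load it here as well.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights("muffinnxz/sdxl-lora-index-modern-1")  # attach the LoRA weights
pipe.to("cuda")

# Use the instance prompt the adapter was trained on.
image = pipe("a photo of index modern style room", num_inference_steps=30).images[0]
image.save("room.png")
```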
|
RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent3
|
RogerB
| 2023-10-09T13:48:43Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned",
"base_model:finetune:RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T13:42:45Z |
---
base_model: RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent3
This model is a fine-tuned version of [RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned](https://huggingface.co/RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8680
- F1: 0.6647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
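For reference, these hyperparameters map onto the standard `transformers` Trainer configuration roughly as follows (a sketch only; the actual training script is not published, and the `output_dir` is illustrative):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; model and dataset wiring is omitted.
training_args = TrainingArguments(
    output_dir="kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent3",  # illustrative
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=10000000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```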
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8897 | 1.0 | 1013 | 0.6684 | 0.7214 |
| 0.6912 | 2.0 | 2026 | 0.5492 | 0.7809 |
| 0.5979 | 3.0 | 3039 | 0.5216 | 0.7990 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
sankar82/ppo-Huggy
|
sankar82
| 2023-10-09T13:48:12Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-09T13:48:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sankar82/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
MorganDustyProd/FixPlay
|
MorganDustyProd
| 2023-10-09T13:47:41Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-09T13:47:26Z |
---
license: cc-by-nc-sa-4.0
---
|
MorganDustyProd/VictorTsoi
|
MorganDustyProd
| 2023-10-09T13:47:01Z | 0 | 1 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-09T13:46:29Z |
---
license: cc-by-nc-sa-4.0
---
|
MorganDustyProd/UNI
|
MorganDustyProd
| 2023-10-09T13:45:48Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-09T13:44:49Z |
---
license: cc-by-nc-sa-4.0
---
|
vectoriseai/bge-base-en-v1.5
|
vectoriseai
| 2023-10-09T13:44:30Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-10-09T13:33:58Z |
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38.0
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89.0
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 83.0
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
license: mit
language:
- en
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding maps any text to a low-dimensional dense vector that can be used for tasks like retrieval, classification, clustering, and semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 09/15/2023: Release [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
- 09/12/2023: New Release:
  - **New reranker model**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
  - **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without an instruction.
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **LangChain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval\* |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
\*: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, and you can just use the original query directly. In all cases, **no instruction** needs to be added to passages.
\**: Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, then use the bge reranker to re-rank those 100 documents and obtain the final top-3 results, as sketched below.
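A minimal sketch of this retrieve-then-rerank pipeline, assuming a toy corpus and query (both are illustrative placeholders, not part of the official examples):
```python
from FlagEmbedding import FlagModel, FlagReranker
import numpy as np

corpus = ["passage one ...", "passage two ...", "passage three ..."]  # placeholder corpus
query = "what is panda?"

# Stage 1: fast candidate retrieval with the bi-encoder.
model = FlagModel('BAAI/bge-large-en-v1.5',
                  query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = model.encode_queries([query])
p_emb = model.encode(corpus)
scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-scores)[:100]  # keep the top-100 candidates

# Stage 2: accurate re-ranking with the cross-encoder.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidates])
top_3 = [corpus[candidates[i]] for i in np.argsort(-np.asarray(rerank_scores))[:3]]
print(top_3)
```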
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used to calculate similarity directly; it must first be fine-tuned with contrastive learning.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use or fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE model lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate threshold based on the similarity distribution of your own data (such as 0.8, 0.85, or even 0.9); a minimal sketch follows.
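A minimal sketch of threshold-based filtering, assuming a threshold of 0.85 picked from your own score distribution (the sentences are placeholders):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
sentences = ["样例数据-1", "样例数据-2", "样例数据-3"]
emb = model.encode(sentences, normalize_embeddings=True)
sim = emb @ emb.T  # cosine similarity, since the embeddings are normalized

threshold = 0.85   # assumed value; tune it on your own data
similar_pairs = [(i, j) for i in range(len(sentences))
                 for j in range(i + 1, len(sentences)) if sim[i, j] > threshold]
print(similar_pairs)
```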
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
                  use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For s2p (short query to long passage) retrieval tasks, we suggest using encode_queries(), which automatically adds the instruction to each query
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages need no instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable, as in the sketch below.
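A short sketch of GPU selection (the device index `0` is an arbitrary example); the environment variable must be set before the model is created:
```python
import os

# Restrict encoding to GPU 0 only (example index); run this before creating the model.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# Or make all GPUs unavailable and fall back to CPU:
# os.environ["CUDA_VISIBLE_DEVICES"] = ""

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5')
```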
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For s2p (short query to long passage) retrieval tasks,
each short query should start with an instruction (see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
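A short usage sketch (assuming LangChain's standard embedding interface; the texts are placeholders):
```python
# embed_query prepends the query instruction; embed_documents does not.
q_emb = model.embed_query("样例查询")
p_embs = model.embed_documents(["样例文档-1", "样例文档-2"])
```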
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For s2p (short query to long passage) retrieval tasks, add an instruction to each query (no instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
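Since the embeddings above are L2-normalized, cosine similarity reduces to an inner product; a brief follow-up sketch continuing from the block above:
```python
# Cosine similarity between the two example sentences (embeddings are already normalized).
similarity = sentence_embeddings[0] @ sentence_embeddings[1]
print(float(similarity))
```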
### Usage for Reranker
Unlike an embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
print(scores)
```
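Because the raw logits are unbounded, a common convenience (our suggestion, not an official API) is to squash them into (0, 1) with a sigmoid when a probability-like score is useful; this continues from the `scores` tensor above:
```python
# Map unbounded relevance logits to (0, 1); the ranking order is unchanged.
probs = torch.sigmoid(scores)
print(probs)
```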
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.
## Train
### BAAI Embedding
We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can easily fine-tune it following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).
## Citation
If you find our work helpful, please cite us:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
MorganDustyProd/Poligraff
|
MorganDustyProd
| 2023-10-09T13:42:45Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-09T13:42:21Z |
---
license: cc-by-nc-sa-4.0
---
|
lrei/xlm-roberta-base-emolit-multilingual
|
lrei
| 2023-10-09T13:40:11Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"nl",
"fr",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T13:28:10Z |
---
license: mit
language:
- en
- nl
- fr
- it
---
Similar to https://huggingface.co/lrei/roberta-base-emolit,
but trained on https://zenodo.org/record/8420877
(hard labels), which includes the original English text plus automatically translated text in Dutch, French, and Italian.
Code used available at: https://github.com/lrei/emolit_train
### Reported Results (macro-averaged F1)
| Language | F1 |
| -------- | ---- |
| All | 0.55 |
| Dutch | 0.53 |
| English | 0.57 |
| French | 0.54 |
| Italian | 0.54 |
### Cite
Based on
```
@Article{app13137502,
AUTHOR = {Rei, Luis and Mladenić, Dunja},
TITLE = {Detecting Fine-Grained Emotions in Literature},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {13},
ARTICLE-NUMBER = {7502},
URL = {https://www.mdpi.com/2076-3417/13/13/7502},
ISSN = {2076-3417},
DOI = {10.3390/app13137502}
}
```
|
MorganDustyProd/MilanaKhametova
|
MorganDustyProd
| 2023-10-09T13:39:55Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-09T13:39:28Z |
---
license: cc-by-nc-sa-4.0
---
|
MorganDustyProd/IosifStalin
|
MorganDustyProd
| 2023-10-09T13:35:37Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-09T13:35:11Z |
---
license: cc-by-nc-sa-4.0
---
|
newsmediabias/UnBIAS-LLama2-Debiaser-Chat-QLoRA
|
newsmediabias
| 2023-10-09T13:30:23Z | 173 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:newsmediabias/debiased_dataset",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-08T19:15:34Z |
---
license: openrail
datasets:
- newsmediabias/debiased_dataset
language:
- en
---
```
from transformers import AutoTokenizer
import transformers
import torch
model = "newsmediabias/UnBIAS-LLama2-Debiaser-Chat-QLoRA"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
sys_message = "Task: "  # system instruction describing the debiasing task (fill in)
prompt = ""             # user prompt (fill in)
input_text = ""         # text to process (fill in)
sequences = pipeline(
    input_text,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=len(prompt) + 100,
)
res = sequences[0]['generated_text']
```
|
RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent1
|
RogerB
| 2023-10-09T13:26:20Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned",
"base_model:finetune:RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T13:19:50Z |
---
base_model: RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned-kin-sent1
This model is a fine-tuned version of [RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned](https://huggingface.co/RogerB/kinyaRoberta-large-kinte-finetuned-kin-tweet-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8724
- F1: 0.6637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8834 | 1.0 | 1013 | 0.6818 | 0.7192 |
| 0.6926 | 2.0 | 2026 | 0.5393 | 0.7854 |
| 0.5912 | 3.0 | 3039 | 0.5101 | 0.8079 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
vectoriseai/bge-large-en-v1.5
|
vectoriseai
| 2023-10-09T13:20:23Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-10-09T12:27:18Z |
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-large-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8507462686567
- type: ap
value: 38.566457320228245
- type: f1
value: 69.69386648043475
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.416675
- type: ap
value: 89.1928861155922
- type: f1
value: 92.39477019574215
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.175999999999995
- type: f1
value: 47.80712792870253
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.184999999999995
- type: map_at_10
value: 55.654
- type: map_at_100
value: 56.25
- type: map_at_1000
value: 56.255
- type: map_at_3
value: 51.742999999999995
- type: map_at_5
value: 54.129000000000005
- type: mrr_at_1
value: 40.967
- type: mrr_at_10
value: 55.96
- type: mrr_at_100
value: 56.54900000000001
- type: mrr_at_1000
value: 56.554
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.44
- type: ndcg_at_1
value: 40.184999999999995
- type: ndcg_at_10
value: 63.542
- type: ndcg_at_100
value: 65.96499999999999
- type: ndcg_at_1000
value: 66.08699999999999
- type: ndcg_at_3
value: 55.582
- type: ndcg_at_5
value: 59.855000000000004
- type: precision_at_1
value: 40.184999999999995
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.405
- type: recall_at_1
value: 40.184999999999995
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 98.72
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 77.027
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.567077926750066
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.19453389182364
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.46555939623092
- type: mrr
value: 77.82361605768807
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.9554128814735
- type: cos_sim_spearman
value: 84.65373612172036
- type: euclidean_pearson
value: 83.2905059954138
- type: euclidean_spearman
value: 84.52240782811128
- type: manhattan_pearson
value: 82.99533802997436
- type: manhattan_spearman
value: 84.20673798475734
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.78896103896103
- type: f1
value: 87.77189310964883
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.714538337650495
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.90108349284447
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.795
- type: map_at_10
value: 43.669000000000004
- type: map_at_100
value: 45.151
- type: map_at_1000
value: 45.278
- type: map_at_3
value: 40.006
- type: map_at_5
value: 42.059999999999995
- type: mrr_at_1
value: 39.771
- type: mrr_at_10
value: 49.826
- type: mrr_at_100
value: 50.504000000000005
- type: mrr_at_1000
value: 50.549
- type: mrr_at_3
value: 47.115
- type: mrr_at_5
value: 48.832
- type: ndcg_at_1
value: 39.771
- type: ndcg_at_10
value: 50.217999999999996
- type: ndcg_at_100
value: 55.454
- type: ndcg_at_1000
value: 57.37
- type: ndcg_at_3
value: 44.885000000000005
- type: ndcg_at_5
value: 47.419
- type: precision_at_1
value: 39.771
- type: precision_at_10
value: 9.642000000000001
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 15.536
- type: recall_at_1
value: 32.795
- type: recall_at_10
value: 62.580999999999996
- type: recall_at_100
value: 84.438
- type: recall_at_1000
value: 96.492
- type: recall_at_3
value: 47.071000000000005
- type: recall_at_5
value: 54.079
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.671
- type: map_at_10
value: 43.334
- type: map_at_100
value: 44.566
- type: map_at_1000
value: 44.702999999999996
- type: map_at_3
value: 40.343
- type: map_at_5
value: 41.983
- type: mrr_at_1
value: 40.764
- type: mrr_at_10
value: 49.382
- type: mrr_at_100
value: 49.988
- type: mrr_at_1000
value: 50.03300000000001
- type: mrr_at_3
value: 47.293
- type: mrr_at_5
value: 48.51
- type: ndcg_at_1
value: 40.764
- type: ndcg_at_10
value: 49.039
- type: ndcg_at_100
value: 53.259
- type: ndcg_at_1000
value: 55.253
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 46.839999999999996
- type: precision_at_1
value: 40.764
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 21.72
- type: precision_at_5
value: 15.299
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 58.816
- type: recall_at_100
value: 76.654
- type: recall_at_1000
value: 89.05999999999999
- type: recall_at_3
value: 46.743
- type: recall_at_5
value: 51.783
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.328
- type: map_at_10
value: 53.32599999999999
- type: map_at_100
value: 54.37499999999999
- type: map_at_1000
value: 54.429
- type: map_at_3
value: 49.902
- type: map_at_5
value: 52.002
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 56.858
- type: mrr_at_100
value: 57.522
- type: mrr_at_1000
value: 57.54899999999999
- type: mrr_at_3
value: 54.472
- type: mrr_at_5
value: 55.996
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 59.313
- type: ndcg_at_100
value: 63.266999999999996
- type: ndcg_at_1000
value: 64.36
- type: ndcg_at_3
value: 53.815000000000005
- type: ndcg_at_5
value: 56.814
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.53
- type: precision_at_100
value: 1.238
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.054000000000002
- type: precision_at_5
value: 16.589000000000002
- type: recall_at_1
value: 40.328
- type: recall_at_10
value: 73.421
- type: recall_at_100
value: 90.059
- type: recall_at_1000
value: 97.81
- type: recall_at_3
value: 59.009
- type: recall_at_5
value: 66.352
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.424
- type: map_at_10
value: 36.332
- type: map_at_100
value: 37.347
- type: map_at_1000
value: 37.422
- type: map_at_3
value: 33.743
- type: map_at_5
value: 35.176
- type: mrr_at_1
value: 29.153000000000002
- type: mrr_at_10
value: 38.233
- type: mrr_at_100
value: 39.109
- type: mrr_at_1000
value: 39.164
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.169000000000004
- type: ndcg_at_1
value: 29.153000000000002
- type: ndcg_at_10
value: 41.439
- type: ndcg_at_100
value: 46.42
- type: ndcg_at_1000
value: 48.242000000000004
- type: ndcg_at_3
value: 36.362
- type: ndcg_at_5
value: 38.743
- type: precision_at_1
value: 29.153000000000002
- type: precision_at_10
value: 6.315999999999999
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 15.443000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.424
- type: recall_at_10
value: 55.364000000000004
- type: recall_at_100
value: 78.211
- type: recall_at_1000
value: 91.74600000000001
- type: recall_at_3
value: 41.379
- type: recall_at_5
value: 47.14
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.601
- type: map_at_10
value: 27.826
- type: map_at_100
value: 29.017
- type: map_at_1000
value: 29.137
- type: map_at_3
value: 25.125999999999998
- type: map_at_5
value: 26.765
- type: mrr_at_1
value: 24.005000000000003
- type: mrr_at_10
value: 32.716
- type: mrr_at_100
value: 33.631
- type: mrr_at_1000
value: 33.694
- type: mrr_at_3
value: 29.934
- type: mrr_at_5
value: 31.630999999999997
- type: ndcg_at_1
value: 24.005000000000003
- type: ndcg_at_10
value: 33.158
- type: ndcg_at_100
value: 38.739000000000004
- type: ndcg_at_1000
value: 41.495
- type: ndcg_at_3
value: 28.185
- type: ndcg_at_5
value: 30.796
- type: precision_at_1
value: 24.005000000000003
- type: precision_at_10
value: 5.908
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 13.391
- type: precision_at_5
value: 9.876
- type: recall_at_1
value: 19.601
- type: recall_at_10
value: 44.746
- type: recall_at_100
value: 68.82300000000001
- type: recall_at_1000
value: 88.215
- type: recall_at_3
value: 31.239
- type: recall_at_5
value: 37.695
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.130000000000003
- type: map_at_10
value: 40.96
- type: map_at_100
value: 42.282
- type: map_at_1000
value: 42.392
- type: map_at_3
value: 37.889
- type: map_at_5
value: 39.661
- type: mrr_at_1
value: 36.958999999999996
- type: mrr_at_10
value: 46.835
- type: mrr_at_100
value: 47.644
- type: mrr_at_1000
value: 47.688
- type: mrr_at_3
value: 44.562000000000005
- type: mrr_at_5
value: 45.938
- type: ndcg_at_1
value: 36.958999999999996
- type: ndcg_at_10
value: 47.06
- type: ndcg_at_100
value: 52.345
- type: ndcg_at_1000
value: 54.35
- type: ndcg_at_3
value: 42.301
- type: ndcg_at_5
value: 44.635999999999996
- type: precision_at_1
value: 36.958999999999996
- type: precision_at_10
value: 8.479000000000001
- type: precision_at_100
value: 1.284
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 20.244
- type: precision_at_5
value: 14.224999999999998
- type: recall_at_1
value: 30.130000000000003
- type: recall_at_10
value: 59.27
- type: recall_at_100
value: 81.195
- type: recall_at_1000
value: 94.21199999999999
- type: recall_at_3
value: 45.885
- type: recall_at_5
value: 52.016
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.169999999999998
- type: map_at_10
value: 36.451
- type: map_at_100
value: 37.791000000000004
- type: map_at_1000
value: 37.897
- type: map_at_3
value: 33.109
- type: map_at_5
value: 34.937000000000005
- type: mrr_at_1
value: 32.877
- type: mrr_at_10
value: 42.368
- type: mrr_at_100
value: 43.201
- type: mrr_at_1000
value: 43.259
- type: mrr_at_3
value: 39.763999999999996
- type: mrr_at_5
value: 41.260000000000005
- type: ndcg_at_1
value: 32.877
- type: ndcg_at_10
value: 42.659000000000006
- type: ndcg_at_100
value: 48.161
- type: ndcg_at_1000
value: 50.345
- type: ndcg_at_3
value: 37.302
- type: ndcg_at_5
value: 39.722
- type: precision_at_1
value: 32.877
- type: precision_at_10
value: 7.9
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.846
- type: precision_at_5
value: 12.9
- type: recall_at_1
value: 26.169999999999998
- type: recall_at_10
value: 55.35
- type: recall_at_100
value: 78.755
- type: recall_at_1000
value: 93.518
- type: recall_at_3
value: 40.176
- type: recall_at_5
value: 46.589000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.15516666666667
- type: map_at_10
value: 36.65741666666667
- type: map_at_100
value: 37.84991666666666
- type: map_at_1000
value: 37.96316666666667
- type: map_at_3
value: 33.74974999999999
- type: map_at_5
value: 35.3765
- type: mrr_at_1
value: 32.08233333333334
- type: mrr_at_10
value: 41.033833333333334
- type: mrr_at_100
value: 41.84524999999999
- type: mrr_at_1000
value: 41.89983333333333
- type: mrr_at_3
value: 38.62008333333333
- type: mrr_at_5
value: 40.03441666666666
- type: ndcg_at_1
value: 32.08233333333334
- type: ndcg_at_10
value: 42.229
- type: ndcg_at_100
value: 47.26716666666667
- type: ndcg_at_1000
value: 49.43466666666667
- type: ndcg_at_3
value: 37.36408333333333
- type: ndcg_at_5
value: 39.6715
- type: precision_at_1
value: 32.08233333333334
- type: precision_at_10
value: 7.382583333333334
- type: precision_at_100
value: 1.16625
- type: precision_at_1000
value: 0.15408333333333332
- type: precision_at_3
value: 17.218
- type: precision_at_5
value: 12.21875
- type: recall_at_1
value: 27.15516666666667
- type: recall_at_10
value: 54.36683333333333
- type: recall_at_100
value: 76.37183333333333
- type: recall_at_1000
value: 91.26183333333333
- type: recall_at_3
value: 40.769916666666674
- type: recall_at_5
value: 46.702333333333335
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.749
- type: map_at_10
value: 33.001999999999995
- type: map_at_100
value: 33.891
- type: map_at_1000
value: 33.993
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 31.959
- type: mrr_at_1
value: 28.834
- type: mrr_at_10
value: 35.955
- type: mrr_at_100
value: 36.709
- type: mrr_at_1000
value: 36.779
- type: mrr_at_3
value: 33.947
- type: mrr_at_5
value: 35.089
- type: ndcg_at_1
value: 28.834
- type: ndcg_at_10
value: 37.329
- type: ndcg_at_100
value: 41.79
- type: ndcg_at_1000
value: 44.169000000000004
- type: ndcg_at_3
value: 33.184999999999995
- type: ndcg_at_5
value: 35.107
- type: precision_at_1
value: 28.834
- type: precision_at_10
value: 5.7669999999999995
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.213000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 25.749
- type: recall_at_10
value: 47.791
- type: recall_at_100
value: 68.255
- type: recall_at_1000
value: 85.749
- type: recall_at_3
value: 36.199
- type: recall_at_5
value: 41.071999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.777
- type: map_at_10
value: 25.201
- type: map_at_100
value: 26.423999999999996
- type: map_at_1000
value: 26.544
- type: map_at_3
value: 22.869
- type: map_at_5
value: 24.023
- type: mrr_at_1
value: 21.473
- type: mrr_at_10
value: 29.12
- type: mrr_at_100
value: 30.144
- type: mrr_at_1000
value: 30.215999999999998
- type: mrr_at_3
value: 26.933
- type: mrr_at_5
value: 28.051
- type: ndcg_at_1
value: 21.473
- type: ndcg_at_10
value: 30.003
- type: ndcg_at_100
value: 35.766
- type: ndcg_at_1000
value: 38.501000000000005
- type: ndcg_at_3
value: 25.773000000000003
- type: ndcg_at_5
value: 27.462999999999997
- type: precision_at_1
value: 21.473
- type: precision_at_10
value: 5.482
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.205
- type: precision_at_5
value: 8.692
- type: recall_at_1
value: 17.777
- type: recall_at_10
value: 40.582
- type: recall_at_100
value: 66.305
- type: recall_at_1000
value: 85.636
- type: recall_at_3
value: 28.687
- type: recall_at_5
value: 33.089
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.677
- type: map_at_10
value: 36.309000000000005
- type: map_at_100
value: 37.403999999999996
- type: map_at_1000
value: 37.496
- type: map_at_3
value: 33.382
- type: map_at_5
value: 34.98
- type: mrr_at_1
value: 31.343
- type: mrr_at_10
value: 40.549
- type: mrr_at_100
value: 41.342
- type: mrr_at_1000
value: 41.397
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.451
- type: ndcg_at_1
value: 31.343
- type: ndcg_at_10
value: 42.1
- type: ndcg_at_100
value: 47.089999999999996
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 36.836999999999996
- type: ndcg_at_5
value: 39.21
- type: precision_at_1
value: 31.343
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.915
- type: precision_at_5
value: 11.940000000000001
- type: recall_at_1
value: 26.677
- type: recall_at_10
value: 55.54599999999999
- type: recall_at_100
value: 77.094
- type: recall_at_1000
value: 92.01
- type: recall_at_3
value: 41.191
- type: recall_at_5
value: 47.006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.501
- type: map_at_10
value: 33.102
- type: map_at_100
value: 34.676
- type: map_at_1000
value: 34.888000000000005
- type: map_at_3
value: 29.944
- type: map_at_5
value: 31.613999999999997
- type: mrr_at_1
value: 29.447000000000003
- type: mrr_at_10
value: 37.996
- type: mrr_at_100
value: 38.946
- type: mrr_at_1000
value: 38.995000000000005
- type: mrr_at_3
value: 35.079
- type: mrr_at_5
value: 36.69
- type: ndcg_at_1
value: 29.447000000000003
- type: ndcg_at_10
value: 39.232
- type: ndcg_at_100
value: 45.247
- type: ndcg_at_1000
value: 47.613
- type: ndcg_at_3
value: 33.922999999999995
- type: ndcg_at_5
value: 36.284
- type: precision_at_1
value: 29.447000000000003
- type: precision_at_10
value: 7.648000000000001
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 16.008
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 24.501
- type: recall_at_10
value: 51.18899999999999
- type: recall_at_100
value: 78.437
- type: recall_at_1000
value: 92.842
- type: recall_at_3
value: 35.808
- type: recall_at_5
value: 42.197
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.039
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.275
- type: map_at_1000
value: 31.379
- type: map_at_3
value: 27.98
- type: map_at_5
value: 29.358
- type: mrr_at_1
value: 24.03
- type: mrr_at_10
value: 32.568000000000005
- type: mrr_at_100
value: 33.403
- type: mrr_at_1000
value: 33.475
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 31.796000000000003
- type: ndcg_at_1
value: 24.03
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 39.668
- type: ndcg_at_1000
value: 42.296
- type: ndcg_at_3
value: 30.709999999999997
- type: ndcg_at_5
value: 33.024
- type: precision_at_1
value: 24.03
- type: precision_at_10
value: 5.564
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 13.309000000000001
- type: precision_at_5
value: 9.39
- type: recall_at_1
value: 22.039
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 68.23599999999999
- type: recall_at_1000
value: 87.852
- type: recall_at_3
value: 35.852000000000004
- type: recall_at_5
value: 41.410000000000004
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.692999999999998
- type: map_at_10
value: 26.903
- type: map_at_100
value: 28.987000000000002
- type: map_at_1000
value: 29.176999999999996
- type: map_at_3
value: 22.137
- type: map_at_5
value: 24.758
- type: mrr_at_1
value: 35.57
- type: mrr_at_10
value: 47.821999999999996
- type: mrr_at_100
value: 48.608000000000004
- type: mrr_at_1000
value: 48.638999999999996
- type: mrr_at_3
value: 44.452000000000005
- type: mrr_at_5
value: 46.546
- type: ndcg_at_1
value: 35.57
- type: ndcg_at_10
value: 36.567
- type: ndcg_at_100
value: 44.085
- type: ndcg_at_1000
value: 47.24
- type: ndcg_at_3
value: 29.964000000000002
- type: ndcg_at_5
value: 32.511
- type: precision_at_1
value: 35.57
- type: precision_at_10
value: 11.485
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 22.237000000000002
- type: precision_at_5
value: 17.471999999999998
- type: recall_at_1
value: 15.692999999999998
- type: recall_at_10
value: 43.056
- type: recall_at_100
value: 68.628
- type: recall_at_1000
value: 86.075
- type: recall_at_3
value: 26.918999999999997
- type: recall_at_5
value: 34.14
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.53
- type: map_at_10
value: 20.951
- type: map_at_100
value: 30.136000000000003
- type: map_at_1000
value: 31.801000000000002
- type: map_at_3
value: 15.021
- type: map_at_5
value: 17.471999999999998
- type: mrr_at_1
value: 71.0
- type: mrr_at_10
value: 79.176
- type: mrr_at_100
value: 79.418
- type: mrr_at_1000
value: 79.426
- type: mrr_at_3
value: 78.125
- type: mrr_at_5
value: 78.61200000000001
- type: ndcg_at_1
value: 58.5
- type: ndcg_at_10
value: 44.106
- type: ndcg_at_100
value: 49.268
- type: ndcg_at_1000
value: 56.711999999999996
- type: ndcg_at_3
value: 48.934
- type: ndcg_at_5
value: 45.826
- type: precision_at_1
value: 71.0
- type: precision_at_10
value: 35.0
- type: precision_at_100
value: 11.360000000000001
- type: precision_at_1000
value: 2.046
- type: precision_at_3
value: 52.833
- type: precision_at_5
value: 44.15
- type: recall_at_1
value: 9.53
- type: recall_at_10
value: 26.811
- type: recall_at_100
value: 55.916999999999994
- type: recall_at_1000
value: 79.973
- type: recall_at_3
value: 16.413
- type: recall_at_5
value: 19.980999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.519999999999996
- type: f1
value: 46.36601294761231
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.413
- type: map_at_10
value: 83.414
- type: map_at_100
value: 83.621
- type: map_at_1000
value: 83.635
- type: map_at_3
value: 82.337
- type: map_at_5
value: 83.039
- type: mrr_at_1
value: 80.19800000000001
- type: mrr_at_10
value: 87.715
- type: mrr_at_100
value: 87.778
- type: mrr_at_1000
value: 87.779
- type: mrr_at_3
value: 87.106
- type: mrr_at_5
value: 87.555
- type: ndcg_at_1
value: 80.19800000000001
- type: ndcg_at_10
value: 87.182
- type: ndcg_at_100
value: 87.90299999999999
- type: ndcg_at_1000
value: 88.143
- type: ndcg_at_3
value: 85.60600000000001
- type: ndcg_at_5
value: 86.541
- type: precision_at_1
value: 80.19800000000001
- type: precision_at_10
value: 10.531
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 20.429
- type: recall_at_1
value: 74.413
- type: recall_at_10
value: 94.363
- type: recall_at_100
value: 97.165
- type: recall_at_1000
value: 98.668
- type: recall_at_3
value: 90.108
- type: recall_at_5
value: 92.52
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.701
- type: map_at_10
value: 37.122
- type: map_at_100
value: 39.178000000000004
- type: map_at_1000
value: 39.326
- type: map_at_3
value: 32.971000000000004
- type: map_at_5
value: 35.332
- type: mrr_at_1
value: 44.753
- type: mrr_at_10
value: 53.452
- type: mrr_at_100
value: 54.198
- type: mrr_at_1000
value: 54.225
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.464
- type: ndcg_at_1
value: 44.753
- type: ndcg_at_10
value: 45.021
- type: ndcg_at_100
value: 52.028
- type: ndcg_at_1000
value: 54.596000000000004
- type: ndcg_at_3
value: 41.622
- type: ndcg_at_5
value: 42.736000000000004
- type: precision_at_1
value: 44.753
- type: precision_at_10
value: 12.284
- type: precision_at_100
value: 1.955
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 27.828999999999997
- type: precision_at_5
value: 20.061999999999998
- type: recall_at_1
value: 22.701
- type: recall_at_10
value: 51.432
- type: recall_at_100
value: 77.009
- type: recall_at_1000
value: 92.511
- type: recall_at_3
value: 37.919000000000004
- type: recall_at_5
value: 44.131
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.189
- type: map_at_10
value: 66.24600000000001
- type: map_at_100
value: 67.098
- type: map_at_1000
value: 67.149
- type: map_at_3
value: 62.684
- type: map_at_5
value: 64.974
- type: mrr_at_1
value: 80.378
- type: mrr_at_10
value: 86.127
- type: mrr_at_100
value: 86.29299999999999
- type: mrr_at_1000
value: 86.297
- type: mrr_at_3
value: 85.31400000000001
- type: mrr_at_5
value: 85.858
- type: ndcg_at_1
value: 80.378
- type: ndcg_at_10
value: 74.101
- type: ndcg_at_100
value: 76.993
- type: ndcg_at_1000
value: 77.948
- type: ndcg_at_3
value: 69.232
- type: ndcg_at_5
value: 72.04599999999999
- type: precision_at_1
value: 80.378
- type: precision_at_10
value: 15.595999999999998
- type: precision_at_100
value: 1.7840000000000003
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 44.884
- type: precision_at_5
value: 29.145
- type: recall_at_1
value: 40.189
- type: recall_at_10
value: 77.981
- type: recall_at_100
value: 89.21
- type: recall_at_1000
value: 95.48299999999999
- type: recall_at_3
value: 67.326
- type: recall_at_5
value: 72.863
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.84599999999999
- type: ap
value: 89.4710787567357
- type: f1
value: 92.83752676932258
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.132
- type: map_at_10
value: 35.543
- type: map_at_100
value: 36.702
- type: map_at_1000
value: 36.748999999999995
- type: map_at_3
value: 31.737
- type: map_at_5
value: 33.927
- type: mrr_at_1
value: 23.782
- type: mrr_at_10
value: 36.204
- type: mrr_at_100
value: 37.29
- type: mrr_at_1000
value: 37.330999999999996
- type: mrr_at_3
value: 32.458999999999996
- type: mrr_at_5
value: 34.631
- type: ndcg_at_1
value: 23.782
- type: ndcg_at_10
value: 42.492999999999995
- type: ndcg_at_100
value: 47.985
- type: ndcg_at_1000
value: 49.141
- type: ndcg_at_3
value: 34.748000000000005
- type: ndcg_at_5
value: 38.651
- type: precision_at_1
value: 23.782
- type: precision_at_10
value: 6.665
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.776
- type: precision_at_5
value: 10.84
- type: recall_at_1
value: 23.132
- type: recall_at_10
value: 63.794
- type: recall_at_100
value: 89.027
- type: recall_at_1000
value: 97.807
- type: recall_at_3
value: 42.765
- type: recall_at_5
value: 52.11
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.59188326493388
- type: f1
value: 94.3842594786827
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.49384404924761
- type: f1
value: 59.7580539534629
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.56220578345663
- type: f1
value: 75.27228165561478
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.53463349024884
- type: f1
value: 80.4893958236536
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.56100273484962
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.470380028839607
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.06102792457849
- type: mrr
value: 33.30709199672238
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.776999999999999
- type: map_at_10
value: 14.924000000000001
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.538999999999998
- type: map_at_3
value: 10.982
- type: map_at_5
value: 12.679000000000002
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.232000000000006
- type: mrr_at_100
value: 57.818999999999996
- type: mrr_at_1000
value: 57.847
- type: mrr_at_3
value: 54.901999999999994
- type: mrr_at_5
value: 56.481
- type: ndcg_at_1
value: 46.594
- type: ndcg_at_10
value: 38.129000000000005
- type: ndcg_at_100
value: 35.54
- type: ndcg_at_1000
value: 44.172
- type: ndcg_at_3
value: 43.025999999999996
- type: ndcg_at_5
value: 41.052
- type: precision_at_1
value: 47.988
- type: precision_at_10
value: 28.111000000000004
- type: precision_at_100
value: 8.929
- type: precision_at_1000
value: 2.185
- type: precision_at_3
value: 40.144000000000005
- type: precision_at_5
value: 35.232
- type: recall_at_1
value: 6.776999999999999
- type: recall_at_10
value: 19.289
- type: recall_at_100
value: 36.359
- type: recall_at_1000
value: 67.54
- type: recall_at_3
value: 11.869
- type: recall_at_5
value: 14.999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.108000000000004
- type: map_at_10
value: 47.126000000000005
- type: map_at_100
value: 48.171
- type: map_at_1000
value: 48.199
- type: map_at_3
value: 42.734
- type: map_at_5
value: 45.362
- type: mrr_at_1
value: 34.936
- type: mrr_at_10
value: 49.571
- type: mrr_at_100
value: 50.345
- type: mrr_at_1000
value: 50.363
- type: mrr_at_3
value: 45.959
- type: mrr_at_5
value: 48.165
- type: ndcg_at_1
value: 34.936
- type: ndcg_at_10
value: 55.028999999999996
- type: ndcg_at_100
value: 59.244
- type: ndcg_at_1000
value: 59.861
- type: ndcg_at_3
value: 46.872
- type: ndcg_at_5
value: 51.217999999999996
- type: precision_at_1
value: 34.936
- type: precision_at_10
value: 9.099
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.456
- type: precision_at_5
value: 15.411
- type: recall_at_1
value: 31.108000000000004
- type: recall_at_10
value: 76.53999999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.947
- type: recall_at_3
value: 55.572
- type: recall_at_5
value: 65.525
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.56400000000001
- type: map_at_10
value: 85.482
- type: map_at_100
value: 86.114
- type: map_at_1000
value: 86.13
- type: map_at_3
value: 82.607
- type: map_at_5
value: 84.405
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.304
- type: mrr_at_100
value: 88.399
- type: mrr_at_1000
value: 88.399
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.024
- type: ndcg_at_1
value: 82.45
- type: ndcg_at_10
value: 89.06500000000001
- type: ndcg_at_100
value: 90.232
- type: ndcg_at_1000
value: 90.305
- type: ndcg_at_3
value: 86.375
- type: ndcg_at_5
value: 87.85300000000001
- type: precision_at_1
value: 82.45
- type: precision_at_10
value: 13.486999999999998
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.813
- type: precision_at_5
value: 24.773999999999997
- type: recall_at_1
value: 71.56400000000001
- type: recall_at_10
value: 95.812
- type: recall_at_100
value: 99.7
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 87.966
- type: recall_at_5
value: 92.268
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 57.241876648614145
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.66212576446223
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.308
- type: map_at_10
value: 13.803
- type: map_at_100
value: 16.176
- type: map_at_1000
value: 16.561
- type: map_at_3
value: 9.761000000000001
- type: map_at_5
value: 11.802
- type: mrr_at_1
value: 26.200000000000003
- type: mrr_at_10
value: 37.621
- type: mrr_at_100
value: 38.767
- type: mrr_at_1000
value: 38.815
- type: mrr_at_3
value: 34.117
- type: mrr_at_5
value: 36.107
- type: ndcg_at_1
value: 26.200000000000003
- type: ndcg_at_10
value: 22.64
- type: ndcg_at_100
value: 31.567
- type: ndcg_at_1000
value: 37.623
- type: ndcg_at_3
value: 21.435000000000002
- type: ndcg_at_5
value: 18.87
- type: precision_at_1
value: 26.200000000000003
- type: precision_at_10
value: 11.74
- type: precision_at_100
value: 2.465
- type: precision_at_1000
value: 0.391
- type: precision_at_3
value: 20.033
- type: precision_at_5
value: 16.64
- type: recall_at_1
value: 5.308
- type: recall_at_10
value: 23.794999999999998
- type: recall_at_100
value: 50.015
- type: recall_at_1000
value: 79.283
- type: recall_at_3
value: 12.178
- type: recall_at_5
value: 16.882
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.93231134675553
- type: cos_sim_spearman
value: 81.68319292603205
- type: euclidean_pearson
value: 81.8396814380367
- type: euclidean_spearman
value: 81.24641903349945
- type: manhattan_pearson
value: 81.84698799204274
- type: manhattan_spearman
value: 81.24269997904105
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.73241671587446
- type: cos_sim_spearman
value: 79.05091082971826
- type: euclidean_pearson
value: 83.91146869578044
- type: euclidean_spearman
value: 79.87978465370936
- type: manhattan_pearson
value: 83.90888338917678
- type: manhattan_spearman
value: 79.87482848584241
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.14970731146177
- type: cos_sim_spearman
value: 86.37363490084627
- type: euclidean_pearson
value: 83.02154218530433
- type: euclidean_spearman
value: 83.80258761957367
- type: manhattan_pearson
value: 83.01664495119347
- type: manhattan_spearman
value: 83.77567458007952
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.40474139886784
- type: cos_sim_spearman
value: 82.77768789165984
- type: euclidean_pearson
value: 80.7065877443695
- type: euclidean_spearman
value: 81.375940662505
- type: manhattan_pearson
value: 80.6507552270278
- type: manhattan_spearman
value: 81.32782179098741
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.08585968722274
- type: cos_sim_spearman
value: 88.03110031451399
- type: euclidean_pearson
value: 85.74012019602384
- type: euclidean_spearman
value: 86.13592849438209
- type: manhattan_pearson
value: 85.74404842369206
- type: manhattan_spearman
value: 86.14492318960154
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.95069052788875
- type: cos_sim_spearman
value: 86.4867991595147
- type: euclidean_pearson
value: 84.31013325754635
- type: euclidean_spearman
value: 85.01529258006482
- type: manhattan_pearson
value: 84.26995570085374
- type: manhattan_spearman
value: 84.96982104986162
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.54617647971897
- type: cos_sim_spearman
value: 87.49834181751034
- type: euclidean_pearson
value: 86.01015322577122
- type: euclidean_spearman
value: 84.63362652063199
- type: manhattan_pearson
value: 86.13807574475706
- type: manhattan_spearman
value: 84.7772370721132
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.20047755786615
- type: cos_sim_spearman
value: 67.05324077987636
- type: euclidean_pearson
value: 66.91930642976601
- type: euclidean_spearman
value: 65.21491856099105
- type: manhattan_pearson
value: 66.78756851976624
- type: manhattan_spearman
value: 65.12356257740728
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.19852871539686
- type: cos_sim_spearman
value: 87.5161895296395
- type: euclidean_pearson
value: 84.59848645207485
- type: euclidean_spearman
value: 85.26427328757919
- type: manhattan_pearson
value: 84.59747366996524
- type: manhattan_spearman
value: 85.24045855146915
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.63320317811032
- type: mrr
value: 96.26242947321379
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.928000000000004
- type: map_at_10
value: 70.112
- type: map_at_100
value: 70.59299999999999
- type: map_at_1000
value: 70.623
- type: map_at_3
value: 66.846
- type: map_at_5
value: 68.447
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 71.212
- type: mrr_at_100
value: 71.616
- type: mrr_at_1000
value: 71.64500000000001
- type: mrr_at_3
value: 68.77799999999999
- type: mrr_at_5
value: 70.094
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 74.607
- type: ndcg_at_100
value: 76.416
- type: ndcg_at_1000
value: 77.102
- type: ndcg_at_3
value: 69.126
- type: ndcg_at_5
value: 71.41300000000001
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 9.933
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.556
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 60.928000000000004
- type: recall_at_10
value: 87.322
- type: recall_at_100
value: 94.833
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.428
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86237623762376
- type: cos_sim_ap
value: 96.72586477206649
- type: cos_sim_f1
value: 93.01858362631845
- type: cos_sim_precision
value: 93.4409687184662
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.78019801980199
- type: dot_ap
value: 93.72748205246228
- type: dot_f1
value: 89.04109589041096
- type: dot_precision
value: 87.16475095785441
- type: dot_recall
value: 91.0
- type: euclidean_accuracy
value: 99.85445544554456
- type: euclidean_ap
value: 96.6661459876145
- type: euclidean_f1
value: 92.58337481333997
- type: euclidean_precision
value: 92.17046580773042
- type: euclidean_recall
value: 93.0
- type: manhattan_accuracy
value: 99.85445544554456
- type: manhattan_ap
value: 96.6883549244056
- type: manhattan_f1
value: 92.57598405580468
- type: manhattan_precision
value: 92.25422045680239
- type: manhattan_recall
value: 92.9
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.72586477206649
- type: max_f1
value: 93.01858362631845
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.39930057069995
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96398659903402
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.946944700355395
- type: mrr
value: 56.97151398438164
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.541657650692905
- type: cos_sim_spearman
value: 31.605804192286303
- type: dot_pearson
value: 28.26905996736398
- type: dot_spearman
value: 27.864801765851187
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.8870000000000002
- type: map_at_100
value: 9.78
- type: map_at_1000
value: 22.514
- type: map_at_3
value: 0.6669999999999999
- type: map_at_5
value: 1.077
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 89.86699999999999
- type: mrr_at_100
value: 89.86699999999999
- type: mrr_at_1000
value: 89.86699999999999
- type: mrr_at_3
value: 89.667
- type: mrr_at_5
value: 89.667
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 74.818
- type: ndcg_at_100
value: 53.715999999999994
- type: ndcg_at_1000
value: 47.082
- type: ndcg_at_3
value: 82.134
- type: ndcg_at_5
value: 79.81899999999999
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 54.48
- type: precision_at_1000
value: 20.518
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 2.072
- type: recall_at_100
value: 13.013
- type: recall_at_1000
value: 43.462
- type: recall_at_3
value: 0.695
- type: recall_at_5
value: 1.139
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.328
- type: map_at_10
value: 9.795
- type: map_at_100
value: 15.801000000000002
- type: map_at_1000
value: 17.23
- type: map_at_3
value: 4.734
- type: map_at_5
value: 6.644
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 46.902
- type: mrr_at_100
value: 47.495
- type: mrr_at_1000
value: 47.495
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 44.218
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 24.806
- type: ndcg_at_100
value: 36.419000000000004
- type: ndcg_at_1000
value: 47.272999999999996
- type: ndcg_at_3
value: 25.666
- type: ndcg_at_5
value: 25.448999999999998
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 23.061
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.484
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.328
- type: recall_at_10
value: 16.524
- type: recall_at_100
value: 47.179
- type: recall_at_1000
value: 81.22200000000001
- type: recall_at_3
value: 5.745
- type: recall_at_5
value: 9.339
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9142
- type: ap
value: 14.335574772555415
- type: f1
value: 54.62839595194111
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.94340690435768
- type: f1
value: 60.286487936731916
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.26597708987974
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.48882398521786
- type: cos_sim_ap
value: 79.04326607602204
- type: cos_sim_f1
value: 71.64566826860633
- type: cos_sim_precision
value: 70.55512918905092
- type: cos_sim_recall
value: 72.77044854881267
- type: dot_accuracy
value: 84.19264469213805
- type: dot_ap
value: 67.96360043562528
- type: dot_f1
value: 64.06418393006827
- type: dot_precision
value: 58.64941898706424
- type: dot_recall
value: 70.58047493403694
- type: euclidean_accuracy
value: 87.45902127913214
- type: euclidean_ap
value: 78.9742237648272
- type: euclidean_f1
value: 71.5553235908142
- type: euclidean_precision
value: 70.77955601445535
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.41729749061214
- type: manhattan_ap
value: 78.90073137580596
- type: manhattan_f1
value: 71.3942611553533
- type: manhattan_precision
value: 68.52705653967483
- type: manhattan_recall
value: 74.51187335092348
- type: max_accuracy
value: 87.48882398521786
- type: max_ap
value: 79.04326607602204
- type: max_f1
value: 71.64566826860633
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.68125897465751
- type: cos_sim_ap
value: 85.6003454431979
- type: cos_sim_f1
value: 77.6957163958641
- type: cos_sim_precision
value: 73.0110366307807
- type: cos_sim_recall
value: 83.02279026793964
- type: dot_accuracy
value: 87.7672992587418
- type: dot_ap
value: 82.4971301112899
- type: dot_f1
value: 75.90528233151184
- type: dot_precision
value: 72.0370626469368
- type: dot_recall
value: 80.21250384970742
- type: euclidean_accuracy
value: 88.4503434625684
- type: euclidean_ap
value: 84.91949884748384
- type: euclidean_f1
value: 76.92365018444684
- type: euclidean_precision
value: 74.53245721712759
- type: euclidean_recall
value: 79.47336002463813
- type: manhattan_accuracy
value: 88.47556952691427
- type: manhattan_ap
value: 84.8963689101517
- type: manhattan_f1
value: 76.85901249256395
- type: manhattan_precision
value: 74.31693989071039
- type: manhattan_recall
value: 79.58115183246073
- type: max_accuracy
value: 88.68125897465751
- type: max_ap
value: 85.6003454431979
- type: max_f1
value: 77.6957163958641
license: mit
language:
- en
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href="#model-list">Model List</a> |
<a href="#frequently-asked-questions">FAQ</a> |
<a href="#usage">Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
</p>
</h4>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 09/15/2023: Release [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
- 09/12/2023: New Release:
  - **New reranker models**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
  - **Updated embedding models**: release `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instructions.
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support for adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` models (short for BAAI General Embedding), which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Links | Description | query instruction for retrieval\* |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
\*: If you need to search for relevant passages for a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, and the original query can be used directly. In all cases, **no instruction** needs to be added to passages.
\**: Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, then use the bge reranker to re-rank those 100 documents and obtain the final top-3 results, as sketched below.
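A minimal sketch of that two-stage flow, assuming a toy corpus (the corpus, query, and top-k sizes are placeholders; the model names come from the table above):
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["passage about pandas ...", "passage about bamboo ...", "an unrelated passage ..."]  # placeholder corpus
query = "what do giant pandas eat?"

# Stage 1: fast candidate retrieval with the bi-encoder.
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-scores)[:100]  # keep up to the top 100 candidates

# Stage 2: accurate re-ranking with the cross-encoder.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidates])
order = np.argsort(rerank_scores)[::-1][:3]
top_3 = [corpus[i] for i in candidates[order]]
```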
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used directly to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high enough, it is recommended to use or fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
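For instance (the scores and the threshold are illustrative, not recommended universal values):

```python
import numpy as np

# Cosine similarities from a bge model and a data-driven threshold
scores = np.array([0.91, 0.74, 0.88, 0.63])
threshold = 0.85  # pick this from your own score distribution
print(np.flatnonzero(scores >= threshold))  # pairs kept as "similar": [0 2]
```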
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for alternative installation methods.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query
# the corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need an instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs,
or set it to an empty string to make all GPUs unavailable.
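For example (a minimal sketch; the variable must be set before the model is created):

```python
import os

# Restrict encoding to GPU 0; an empty string would force CPU-only encoding
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5')
```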
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model as follows: first, pass your input through the transformer model; then, take the last hidden state of the first token (i.e., `[CLS]`) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add an instruction to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
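Continuing from the snippet above: because the embeddings are L2-normalized, their inner product already equals cosine similarity.

```python
# Continues from the previous block: sentence_embeddings is L2-normalized,
# so the matrix product yields cosine similarities (diagonal entries are 1.0)
similarity = sentence_embeddings @ sentence_embeddings.T
print(similarity)
```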
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
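If a bounded score is more convenient downstream, one option (our suggestion here, not part of the library) is to map the raw logit through a sigmoid:

```python
import math

def to_unit_interval(score: float) -> float:
    # The reranker emits an unbounded logit; a sigmoid maps it into (0, 1)
    return 1.0 / (1.0 + math.exp(-score))

print(to_unit_interval(3.2))   # ~0.96
print(to_unit_interval(-1.5))  # ~0.18
```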
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embedding, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).
## Citation
If you find our work helpful, please cite us:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
nitinbhayana/Llama-2-7b-chat-hf-adapter-keyword-category-brand-v1
|
nitinbhayana
| 2023-10-09T13:16:54Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-10-09T07:44:44Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
The card provides the following prompt-construction helper:

```python
def generate_prompt(keyword):
    # Define the roles and markers
    B_SYS, E_SYS = "<<SYS>>", "<</SYS>>"
    B_INST, E_INST = "[INST]", "[/INST]"
    catlist = ['Baby Products', 'Bags, Wallets and Luggage', 'Beauty', 'Books', 'Car & Motorbike', 'Clothing & Accessories', 'Computers & Accessories', 'Electronics', 'Garden & Outdoors', 'Gift Cards', 'Grocery & Gourmet Foods', 'Health & Personal Care', 'Home & Kitchen', 'Home Improvement', 'Industrial & Scientific', 'Jewellery', 'Kindle Store', 'Movies & TV Shows', 'Music', 'Musical Instruments', 'Office Products', 'Pet Supplies', 'Shoes & Handbags', 'Software', 'Sports, Fitness & Outdoors', 'Toys & Games', 'Video Games', 'Watches']
    # catlist = ['hair shampoo', 'other category']
    # Format the prompt template
    prompt = f"""{B_INST} {B_SYS}\nYou are a helpful assistant that provides accurate and concise responses. Do not hallucinate.\n{E_SYS}
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Analyze the following keyword searched on amazon with intent of shopping. Identify the product category from the list {catlist}. Extract the brand from keyword related to brand loyalty intent. Output in JSON with keyword, product category, brand as keys.
### Input:
{keyword.strip()}
{E_INST}\n\n"""
    return prompt
```
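A hypothetical invocation (the keyword is illustrative):

```python
# Print the fully formatted instruction prompt for one shopping query
print(generate_prompt("nike running shoes"))
```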
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
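Official usage code has not been provided; the sketch below is one plausible way to load the base chat model and attach this LoRA adapter with PEFT, reusing the `generate_prompt` helper above. The dtype, device map, and generation settings are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "nitinbhayana/Llama-2-7b-chat-hf-adapter-keyword-category-brand-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

prompt = generate_prompt("nike running shoes")  # helper defined earlier in this card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```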
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
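For reference, a sketch of the equivalent `transformers` quantization config assembled from the values listed above (reusing it to reload the base model is an assumption):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```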
### Framework versions
- PEFT 0.6.0.dev0
|
247labs/Llama-2-7b-Verse-Bot
|
247labs
| 2023-10-09T12:56:11Z | 39 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"facebook",
"meta",
"pytorch",
"llama-2",
"text-generation",
"en",
"arxiv:2307.09288",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:quantized:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] |
text-generation
| 2023-10-09T12:53:42Z |
---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 7B Chat
arxiv: 2307.09288
base_model: meta-llama/Llama-2-7b-chat-hf
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B Chat - GGUF
- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta Llama 2's Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF)
* [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-7b-Chat-GGUF and below it, a specific filename to download, such as: llama-2-7b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-7b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration (version specifiers are quoted so the shell does not treat >= as a redirection)
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGUF", model_file="llama-2-7b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
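To get chat-tuned behavior rather than raw completion, wrap the message in the Llama-2-Chat template from the prompt-template section above. A minimal sketch (the question and generation length are illustrative):

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7b-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",
    model_type="llama",
)

# Build the Llama-2-Chat prompt: [INST] <<SYS>> system <</SYS>> user [/INST]
system = "You are a helpful, respectful and honest assistant."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\nWhat is a GGUF file? [/INST]"
print(llm(prompt, max_new_tokens=200))
```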
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta Llama 2's Llama 2 7B Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
<!-- original-model-card end -->
|
yeseul16/flan-t5-large-financial-phrasebank-lora
|
yeseul16
| 2023-10-09T12:56:03Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | 2023-10-09T12:55:56Z |
---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
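A minimal loading sketch (not part of the original card) showing how the configuration above maps onto the `transformers`/`peft` APIs; the repo ids come from the card header, everything else is an assumption:
```python
# Hedged sketch: recreate the 8-bit config listed above and attach the adapter.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,               # load_in_8bit: True
    llm_int8_threshold=6.0,          # llm_int8_threshold: 6.0
    llm_int8_has_fp16_weight=False,  # llm_int8_has_fp16_weight: False
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "fedbor/13bLLAma2_completeV2_ep1_cut2048_augm")
```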
### Framework versions
- PEFT 0.6.0.dev0
|
kerwin7/q-Taxi-v3
|
kerwin7
| 2023-10-09T12:46:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-09T12:46:02Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="kerwin7/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
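`load_from_hub` is the helper from the Hugging Face Deep RL course notebook; a possible implementation (an assumption, not part of this card) simply unpickles the model dict from the Hub:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table, env_id, ...) and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```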
|
kerwin7/q-FrozenLake-v1-4x4-noSlippery
|
kerwin7
| 2023-10-09T12:43:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-09T12:43:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="kerwin7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vonewman/distilbert-base-uncased-finetuned-emotion
|
vonewman
| 2023-10-09T12:21:16Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-12T21:50:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.921
name: Accuracy
- type: f1
value: 0.9212108560221193
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Accuracy: 0.921
- F1: 0.9212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
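For reference, a hedged reconstruction of these settings as `TrainingArguments` (the `output_dir` and any defaults not listed above are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the defaults
    num_train_epochs=2,
)
```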
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8094 | 1.0 | 250 | 0.3157 | 0.9055 | 0.9009 |
| 0.2462 | 2.0 | 500 | 0.2217 | 0.921 | 0.9212 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned-kin-sent3
|
RogerB
| 2023-10-09T12:18:52Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned",
"base_model:finetune:RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T12:05:49Z |
---
base_model: RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned-kin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned-kin-sent3
This model is a fine-tuned version of [RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned](https://huggingface.co/RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9070
- F1: 0.6423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
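The F1 values reported below could be produced by a `compute_metrics` hook along these lines (a sketch; the averaging mode is an assumption):
```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="weighted")}  # averaging assumed
```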
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9723 | 1.0 | 1013 | 0.7844 | 0.6695 |
| 0.8283 | 2.0 | 2026 | 0.7035 | 0.7217 |
| 0.7637 | 3.0 | 3039 | 0.6766 | 0.7352 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
waldie/Mistral-11B-Airoboros-RP-v1-8bpw-h8-exl2
|
waldie
| 2023-10-09T12:01:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-09T09:31:27Z |
An 8.0 bpw, 8-bit head EXL2 quant of [Undi95's](https://huggingface.co/Undi95) [Mistral-11B-Airoboros-RP-v1](https://huggingface.co/Undi95/Mistral-11B-Airoboros-RP-v1), produced with exllamav2's `convert.py`:
```
python3 convert.py \
-i /input/Undi95_Mistral-11B-Airoboros-RP-v1/ \
-c /input/wikitext/0000.parquet \
-o /output/temp/ \
-cf /output/8bpw/ \
-b 8.0 \
-hb 8
```
|
RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned-kin-sent2
|
RogerB
| 2023-10-09T11:58:58Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned",
"base_model:finetune:RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T11:47:58Z |
---
base_model: RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned-kin-sent2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned-kin-sent2
This model is a fine-tuned version of [RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned](https://huggingface.co/RogerB/xlm-roberta-base-finetuned-kinyarwanda-kinre-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9031
- F1: 0.6365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9883 | 1.0 | 1013 | 0.8077 | 0.6508 |
| 0.854 | 2.0 | 2026 | 0.7312 | 0.6994 |
| 0.7908 | 3.0 | 3039 | 0.7085 | 0.7199 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
RogerB/xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned-kin-sent3
|
RogerB
| 2023-10-09T11:56:44Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned",
"base_model:finetune:RogerB/xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-06T17:26:05Z |
---
base_model: RogerB/xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned-kin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned-kin-sent3
This model is a fine-tuned version of [RogerB/xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned](https://huggingface.co/RogerB/xlm-roberta-base-finetuned-kinyarwanda-kin-finetuned-kin-tweets-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9069
- F1: 0.6306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9786 | 1.0 | 1013 | 0.7879 | 0.6651 |
| 0.8286 | 2.0 | 2026 | 0.6928 | 0.7207 |
| 0.7596 | 3.0 | 3039 | 0.6700 | 0.7367 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
szdodo/taxi
|
szdodo
| 2023-10-09T11:49:49Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-09T11:43:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="szdodo/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
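Once loaded, the agent can be evaluated by acting greedily on the Q-table; a minimal sketch continuing from the snippet above (the `qtable` key follows the Deep RL course convention and is an assumption):
```python
import numpy as np

qtable = np.array(model["qtable"])  # "qtable" key assumed

state = env.reset()  # classic gym API; gymnasium returns (state, info)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # should land near the reported mean_reward of 7.56
```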
|
szdodo/frozenlake
|
szdodo
| 2023-10-09T11:40:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-09T11:40:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: frozenlake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="szdodo/frozenlake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
iandennismiller/mistral-v0.1-7b-GGUF
|
iandennismiller
| 2023-10-09T11:38:01Z | 23 | 0 | null |
[
"gguf",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2023-09-27T14:52:53Z |
---
base_model: https://huggingface.co/advanced-stack/MistralAI-v0.1-GGUF
inference: false
license: cc-by-nc-4.0
model_creator: mistral.ai
model_name: Mistral v0.1
model_type: llama
prompt_template: '{prompt}'
quantized_by: iandennismiller
pipeline_tag: text-generation
tags:
- mistral
---
# Mistral 7b
## Support for `calm`
These models support the [calm](https://github.com/iandennismiller/calm) language model runner, which automatically uses the right prompts, templates, context size, etc. The particular quants in this repo were selected with calm in mind.
## From https://mistral.ai
> Mistral-7B-v0.1 is a small, yet powerful model adaptable to many use-cases. Mistral 7B is better than Llama 2 13B on all benchmarks, has natural coding abilities, and 8k sequence length. It’s released under Apache 2.0 licence. We made it easy to deploy on any cloud, and of course on your gaming GPU
More info: https://mistral.ai/news/announcing-mistral-7b/
|
maywell/Synatra-V0.1-7B-Instruct-GGUF
|
maywell
| 2023-10-09T11:35:43Z | 6 | 2 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-09T10:55:06Z |
---
license: cc-by-nc-4.0
---
This model follows the cc-by-nc-4.0 license **STRICTLY**.
|
kandinsky-community/kandinsky-2-2-prior
|
kandinsky-community
| 2023-10-09T11:33:28Z | 25,670 | 50 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"kandinsky",
"license:apache-2.0",
"diffusers:KandinskyV22PriorPipeline",
"region:us"
] |
text-to-image
| 2023-06-09T13:37:11Z |
---
license: apache-2.0
tags:
- text-to-image
- kandinsky
inference: false
---
# Kandinsky 2.2
Kandinsky inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.2 is available in diffusers!
```shell
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "portrait of a young women, blue eyes, cinematic"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale =1.0, height=768, width=768).images[0]
image.save("portrait.png")
```

### Text Guided Image-to-Image Generation
```python
from PIL import Image
import requests
from io import BytesIO
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((768, 512))
```

```python
from diffusers import AutoPipelineForImage2Image
import torch
pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, image=original_image, strength=0.3, height=768, width=768).images[0]
image.save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(**prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

### Text Guided Inpainting Generation
```python
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np
pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "a hat"
init_image = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1
out = pipe(
prompt=prompt,
image=init_image,
mask_image=mask,
height=768,
width=768,
num_inference_steps=150,
)
image = out.images[0]
image.save("cat_with_hat.png")
```

__<font color=red>Breaking change on the mask input:</font>__
We introduced a breaking change for Kandinsky inpainting pipeline in the following pull request: https://github.com/huggingface/diffusers/pull/4207. Previously we accepted a mask format where black pixels represent the masked-out area. We have changed to use white pixels to represent masks instead in order to have a unified mask format across all our pipelines.
Please upgrade your inpainting code to follow the above. If you are using Kandinsky Inpaint in production, you now need to change the mask as follows:
```python
# For PIL input
import PIL.ImageOps
mask = PIL.ImageOps.invert(mask)
# For PyTorch and Numpy input
mask = 1 - mask
```
### Text-to-Image Generation with ControlNet Conditioning
```python
import torch
import numpy as np
from transformers import pipeline
from diffusers.utils import load_image
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline
# let's take an image and extract its depth map.
def make_hint(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
hint = detected_map.permute(2, 0, 1)
return hint
img = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))
# We can use the `depth-estimation` pipeline from transformers to process the image and retrieve its depth map.
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
# Now, we load the prior pipeline and the text-to-image controlnet pipeline
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior = pipe_prior.to("cuda")
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
# We pass the prompt and negative prompt through the prior to generate image embeddings
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
image_emb, zero_image_emb = pipe_prior(
prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator
).to_tuple()
# Now we can pass the image embeddings and the depth image we extracted to the controlnet pipeline. With Kandinsky 2.2, only prior pipelines accept `prompt` input. You do not need to pass the prompt to the controlnet pipeline.
images = pipe(
image_embeds=image_emb,
negative_image_embeds=zero_image_emb,
hint=hint,
num_inference_steps=50,
generator=generator,
height=768,
width=768,
).images
images[0].save("robot_cat.png")
```


### Image-to-Image Generation with ControlNet Conditioning
```python
import torch
import numpy as np
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image
from transformers import pipeline
img = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinskyv22/cat.png"
).resize((768, 768))
def make_hint(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
hint = detected_map.permute(2, 0, 1)
return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior = pipe_prior.to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
# run prior pipeline
img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, generator=generator)
negative_emb = pipe_prior(prompt=negative_prior_prompt, image=img, strength=1, generator=generator)
# run controlnet img2img pipeline
images = pipe(
image=img,
strength=0.5,
image_embeds=img_emb.image_embeds,
negative_image_embeds=negative_emb.image_embeds,
hint=hint,
num_inference_steps=50,
generator=generator,
height=768,
width=768,
).images
images[0].save("robot_cat.png")
```
Here is the output. Compared with the output from our text-to-image ControlNet example, it keeps far more of the cat's facial details from the original image while still working them into the robot style we asked for.

## Model Architecture
### Overview
Kandinsky 2.2 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [CLIP-ViT-G model](https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K). The trained image prior model is then used to generate CLIP image embeddings for input text prompts. Both the input text prompt and its CLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) and then fine-tuned on a separately collected dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) gathered from open sources.
The main change in Kandinsky 2.2 is the adoption of CLIP-ViT-G as the image encoder, which significantly increases the model's capability to generate more aesthetic pictures and better understand text, thus enhancing its overall performance.
Because of this switch of the CLIP model, the image prior model was retrained and the Text2Image diffusion model was fine-tuned for 2000 iterations. Kandinsky 2.2 was trained on data of various resolutions, from 512 x 512 to 1536 x 1536, and with various aspect ratios. As a result, Kandinsky 2.2 can generate 1024 x 1024 outputs with any aspect ratio.
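In practice this means non-square outputs need no special handling; a quick sketch (the prompt and resolution are illustrative, and dimensions are commonly kept at multiples of 64):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")
# 4:3 landscape output, leaning on the mixed-aspect-ratio training described above
image = pipe("a wide panoramic mountain landscape", height=768, width=1024).images[0]
image.save("panorama.png")
```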
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset, in zero-shot mode. The table below presents FID.
FID metric values for generative models on COCO_30k
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kandinsky22,
title = {kandinsky 2.2},
author = {Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, Denis Dimitrov},
year = {2023},
howpublished = {},
}
```
|
kandinsky-community/kandinsky-2-1
|
kandinsky-community
| 2023-10-09T11:33:20Z | 143,893 | 38 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"kandinsky",
"license:apache-2.0",
"diffusers:KandinskyPipeline",
"region:us"
] |
text-to-image
| 2023-05-24T09:52:07Z |
---
license: apache-2.0
prior:
- kandinsky-community/kandinsky-2-1-prior
tags:
- text-to-image
- kandinsky
inference: false
---
# Kandinsky 2.1
Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.1 is available in diffusers!
```shell
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, height=768, width=768).images[0]
image.save("cheeseburger_monster.png")
```

### Text Guided Image-to-Image Generation
```python
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from io import BytesIO
from PIL import Image
import os
pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))
image = pipe(prompt=prompt, image=original_image, strength=0.3).images[0]
image.save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyPriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(prompt, **prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

## Model Architecture
### Overview
Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompt and its mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on the basis of 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). The use of 170M pairs is due to the fact that we kept the UNet diffusion block from Kandinsky 2.0, which allowed us not to train it from scratch. Further, at the fine-tuning stage, a separately collected dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) gathered from open sources was used.
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset, in zero-shot mode. The table below presents FID.
FID metric values for generative models on COCO_30k
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kandinsky21,
title = {kandinsky 2.1},
author = {Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, Denis Dimitrov},
year = {2023},
howpublished = {},
}
```
|
kandinsky-community/kandinsky-2-2-decoder
|
kandinsky-community
| 2023-10-09T11:32:52Z | 14,105 | 60 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"kandinsky",
"license:apache-2.0",
"diffusers:KandinskyV22Pipeline",
"region:us"
] |
text-to-image
| 2023-06-09T11:17:35Z |
---
license: apache-2.0
prior:
- kandinsky-community/kandinsky-2-2-prior
tags:
- text-to-image
- kandinsky
inference: false
---
# Kandinsky 2.2
Kandinsky inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.2 is available in diffusers!
```shell
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "portrait of a young women, blue eyes, cinematic"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale =1.0, height=768, width=768).images[0]
image.save("portrait.png")
```

### Text Guided Image-to-Image Generation
```python
from PIL import Image
import requests
from io import BytesIO
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((768, 512))
```

```python
from diffusers import AutoPipelineForImage2Image
import torch
pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, image=original_image, strength=0.3, height=768, width=768).images[0]
image.save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(**prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

## Model Architecture
### Overview
Kandinsky 2.2 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [CLIP-ViT-G model](https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K). The trained image prior model is then used to generate CLIP image embeddings for input text prompts. Both the input text prompt and its CLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) and then fine-tuned on a separately collected dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) gathered from open sources.
The main change in Kandinsky 2.2 is the adoption of CLIP-ViT-G as the image encoder, which significantly increases the model's capability to generate more aesthetic pictures and better understand text, thus enhancing its overall performance.
Because of this switch of the CLIP model, the image prior model was retrained and the Text2Image diffusion model was fine-tuned for 2000 iterations. Kandinsky 2.2 was trained on data of various resolutions, from 512 x 512 to 1536 x 1536, and with various aspect ratios. As a result, Kandinsky 2.2 can generate 1024 x 1024 outputs with any aspect ratio.
### Evaluation
We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset, in zero-shot mode. The table below presents FID.
FID metric values for generative models on COCO_30k
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kandinsky22,
title = {kandinsky 2.2},
author = {Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, Denis Dimitrov},
year = {2023},
howpublished = {},
}
```
|
ifaouibadi/Llama-2-7b-Verse-Chat
|
ifaouibadi
| 2023-10-09T11:32:41Z | 38 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"facebook",
"meta",
"pytorch",
"llama-2",
"text-generation",
"en",
"arxiv:2307.09288",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:quantized:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] |
text-generation
| 2023-10-09T11:32:41Z |
---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 7B Chat
arxiv: 2307.09288
base_model: meta-llama/Llama-2-7b-chat-hf
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B Chat - GGUF
- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta Llama 2's Llama 2 7B Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF)
* [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
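For programmatic use, the template can be filled in with a small helper; a hedged sketch (the helper and the shortened system prompt are ours, not part of the card):
```python
# Wrap a user message in the Llama-2-Chat template shown above.
SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant."  # shortened

def build_prompt(user_message: str) -> str:
    return f"[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n{user_message}[/INST]"

print(build_prompt("What is GGUF?"))
```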
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
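As a sanity check, the Q4_K figure can be reproduced by hand; a small sketch (the fp16 super-block scale/min overhead is an assumption about the llama.cpp layout):
```python
# 8 blocks x 32 weights = 256 weights per super-block
weight_bits = 256 * 4          # 4-bit quantised weights
scale_min_bits = 8 * (6 + 6)   # 6-bit scale + 6-bit min per block
super_bits = 2 * 16            # fp16 super-block scale and min (assumed)
print((weight_bits + scale_min_bits + super_bits) / 256)  # -> 4.5 bpw
```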
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-7b-Chat-GGUF and below it, a specific filename to download, such as: llama-2-7b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-7b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGUF", model_file="llama-2-7b-chat.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
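Building on the simple example, here is a sketch that wraps a user message in the Llama-2 chat template shown in the `llama.cpp` command above; the system message and question are placeholders:
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7b-Chat-GGUF",
    model_file="llama-2-7b-chat.q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

# Wrap the user message in the Llama-2 chat template
system = "You are a helpful, respectful and honest assistant."
user = "Explain what a GGUF file is in one sentence."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user}[/INST]"

print(llm(prompt, max_new_tokens=128, temperature=0.7))
```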
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
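For reference, a minimal sketch of the llama-cpp-python route via LangChain, assuming `llama-cpp-python` and `langchain` are installed and the GGUF file sits in the working directory:
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llama-2-7b-chat.q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
)
print(llm("AI is going to"))
```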
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta Llama 2's Llama 2 7B Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific format needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). A sketch of this formatting is shown below.
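A hedged sketch of that formatting for a single-turn prompt — the helper name is hypothetical, and the linked `chat_completion` reference remains the authoritative implementation:
```python
def format_llama2_prompt(system: str, user: str) -> str:
    # strip() avoids the double-space issue mentioned above
    return f"[INST] <<SYS>>\n{system.strip()}\n<</SYS>>\n\n{user.strip()} [/INST]"

print(format_llama2_prompt("You are a helpful assistant.", "What is GQA?"))
```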
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
<!-- original-model-card end -->
|
CodeNLP/pdn2_v08_kpwr_ner_n82
|
CodeNLP
| 2023-10-09T11:30:18Z | 10 | 1 |
transformers
|
[
"transformers",
"bert",
"feature-extraction",
"ner",
"poldeepner2",
"token-classification",
"pl",
"dataset:clarin-pl/kpwr-ner",
"license:mit",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-09T09:58:03Z |
---
license: mit
datasets:
- clarin-pl/kpwr-ner
language:
- pl
pipeline_tag: token-classification
tags:
- ner
- poldeepner2
---
## About
A transformer-based named entity recognition model for Polish.
The model was trained on the training part of the KPWr corpus to recognize
82 categories of named entities.
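A minimal inference sketch, assuming the checkpoint loads directly with the `transformers` token-classification pipeline (the model was built for PolDeepNer2, which may apply additional label post-processing); the example sentence is a placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CodeNLP/pdn2_v08_kpwr_ner_n82",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("Adam Mickiewicz urodził się w Zaosiu koło Nowogródka."))
```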
## Evaluation summary
```
precision recall f1-score support
micro avg 0.7803 0.8033 0.7916 17711
macro avg 0.7921 0.8033 0.7932 17711
```
Detailed results can be found in the test_results.txt file.
## Author
Michał Marcińczuk <marcinczuk@gmail.com>
## License
[MIT](https://choosealicense.com/licenses/mit/)
|
softaken/exchange-server-backup-and-restore
|
softaken
| 2023-10-09T11:28:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-09T11:25:34Z |
Softaken has introduced an Exchange Server Backup & Restore tool for backing up multiple mailboxes, including primary, archive, and public folders. The tool has been tested by experts and professionals. Its workflow is smooth enough that users without technical training can operate it end to end, and it performs a fast, secure conversion to keep your data safe during the backup process. The software suits both official and personal use, a demo version is available to explore its functionality in more depth, and full assistance is available from the technical support team. Thanks to its versatile features, the software has been adopted by many users.
We have highlighted some of these features in the points below.
• It can save backups of Exchange mailboxes into PST files.
• It automatically shows a complete report of your backup process.
• The resulting PST files can be viewed in all MS Outlook editions without complications.
• Mailboxes can be restored in MSG, EML, and other formats.
• A plus point is that it fully maintains data integrity.
• Outlook is not required to work with Exchange backup or restore.
Visit Here:- https://www.softaken.com/exchange-server-backup-restore
|
Zamoranesis/delirium_roberta
|
Zamoranesis
| 2023-10-09T11:23:52Z | 137 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"healthcare",
"medical",
"pharma",
"surgery",
"base_model:allenai/biomed_roberta_base",
"base_model:finetune:allenai/biomed_roberta_base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-09T10:57:18Z |
---
base_model: allenai/biomed_roberta_base
tags:
- healthcare
- medical
- pharma
- surgery
model-index:
- name: delirium_roberta
results: []
widget:
- text: The patient has a clinical history of herniated disc, glioblastoma operated on last year and will undergo temporal malignant neoplasty surgery. The patient's diagnosis is malignant <mask> of temporal lobe
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# delirium_roberta
This model is a fine-tuned version of [allenai/biomed_roberta_base](https://huggingface.co/allenai/biomed_roberta_base).
It achieves the following results on the evaluation set:
- Loss: 0.3709
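A minimal fill-mask sketch, reusing the clinical mask example from the model card's widget:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Zamoranesis/delirium_roberta")
text = "The patient's diagnosis is malignant <mask> of temporal lobe"
for pred in fill(text):
    print(pred["token_str"], pred["score"])
```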
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2088 | 0.4 | 100 | 0.8023 |
| 0.8075 | 0.8 | 200 | 0.7029 |
| 0.7404 | 1.2 | 300 | 0.6575 |
| 0.6826 | 1.6 | 400 | 0.6096 |
| 0.6578 | 2.0 | 500 | 0.5995 |
| 0.6525 | 2.4 | 600 | 0.5834 |
| 0.6223 | 2.8 | 700 | 0.5650 |
| 0.6 | 3.2 | 800 | 0.5464 |
| 0.5807 | 3.6 | 900 | 0.5312 |
| 0.5963 | 4.0 | 1000 | 0.5233 |
| 0.584 | 4.4 | 1100 | 0.5154 |
| 0.5508 | 4.8 | 1200 | 0.5049 |
| 0.5609 | 5.2 | 1300 | 0.4960 |
| 0.5397 | 5.6 | 1400 | 0.4851 |
| 0.5401 | 6.0 | 1500 | 0.4805 |
| 0.513 | 6.4 | 1600 | 0.4690 |
| 0.5247 | 6.8 | 1700 | 0.4647 |
| 0.5228 | 7.2 | 1800 | 0.4607 |
| 0.5142 | 7.6 | 1900 | 0.4534 |
| 0.5055 | 8.0 | 2000 | 0.4428 |
| 0.4942 | 8.4 | 2100 | 0.4338 |
| 0.4895 | 8.8 | 2200 | 0.4336 |
| 0.4874 | 9.2 | 2300 | 0.4221 |
| 0.4744 | 9.6 | 2400 | 0.4234 |
| 0.4743 | 10.0 | 2500 | 0.4139 |
| 0.4816 | 10.4 | 2600 | 0.4090 |
| 0.4733 | 10.8 | 2700 | 0.4077 |
| 0.4419 | 11.2 | 2800 | 0.4035 |
| 0.4552 | 11.6 | 2900 | 0.3989 |
| 0.4467 | 12.0 | 3000 | 0.3913 |
| 0.45 | 12.4 | 3100 | 0.3884 |
| 0.4551 | 12.8 | 3200 | 0.3864 |
| 0.4247 | 13.2 | 3300 | 0.3786 |
| 0.4432 | 13.6 | 3400 | 0.3874 |
| 0.4086 | 14.0 | 3500 | 0.3776 |
| 0.4308 | 14.4 | 3600 | 0.3711 |
| 0.4293 | 14.8 | 3700 | 0.3763 |
| 0.4235 | 15.2 | 3800 | 0.3733 |
| 0.4138 | 15.6 | 3900 | 0.3758 |
| 0.4156 | 16.0 | 4000 | 0.3709 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
shubhamgantayat/gpt2-wet-strength-model
|
shubhamgantayat
| 2023-10-09T11:17:52Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-09T11:12:27Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wet-strength-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wet-strength-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
hwangsoyoon/20231009
|
hwangsoyoon
| 2023-10-09T11:09:56Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2023-10-09T10:50:25Z |
---
license: unknown
---
```python
from huggingface_hub import notebook_login
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Log in to the Hugging Face Hub (opens a token prompt in notebooks)
notebook_login()

# Load the base model and tokenizer referenced by the PEFT config
peft_model_id = "ybelkada/flan-t5-large-financial-phrasebank-lora"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()

# Optionally push the adapted model to the Hub (requires write access to the repo)
# model.push_to_hub("ybelkada/flan-t5-large-financial-phrasebank-lora", use_auth_token=True)

# Run a sample sentiment prediction
input_text = "In January-September 2009 , the Group 's net interest income increased to EUR 112.4 mn from EUR 74.3 mn in January-September 2008 ."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
print("input sentence: ", input_text)
print(" output prediction: ", tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
```
|
imdatta0/mistral-qsn
|
imdatta0
| 2023-10-09T11:06:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-09T11:06:25Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
jluckyboyj/hoa-7b-test-exam-12-1
|
jluckyboyj
| 2023-10-09T11:03:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-09T10:53:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
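For reference, a sketch of how these settings map onto `transformers.BitsAndBytesConfig` — an illustration of the listed values, not code taken from the original training run:
```python
from transformers import BitsAndBytesConfig

# 8-bit quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```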
### Framework versions
- PEFT 0.5.0
|
soyee/classifier
|
soyee
| 2023-10-09T10:58:06Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T10:51:20Z |
---
base_model: klue/roberta-base
tags:
- generated_from_trainer
model-index:
- name: classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
seorin/flan-t5
|
seorin
| 2023-10-09T10:56:57Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | 2023-10-09T10:44:50Z |
---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Vaddd/Pyramids
|
Vaddd
| 2023-10-09T10:51:09Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-10-09T10:51:06Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Vaddd/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5
|
quastrinos
| 2023-10-09T10:46:16Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta-v2",
"multiple-choice",
"generated_from_keras_callback",
"base_model:quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v4",
"base_model:finetune:quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v4",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-10-09T10:45:19Z |
---
license: mit
base_model: quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v4
tags:
- generated_from_keras_callback
model-index:
- name: race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v5
This model is a fine-tuned version of [quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v4](https://huggingface.co/quastrinos/race-openbook-finetuned-deberta-v3-large-mcqa-TPU-v4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6490
- Validation Loss: 0.9673
- Train Map3: 0.7783
- Train Lr: 5.0733553e-11
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': 0.001, 'clipnorm': 1, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'CosineDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 312, 'alpha': 5e-09, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_bfloat16
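A sketch of how that optimizer configuration translates into Keras code — reconstructed from the dictionary above, not taken from the original training script:
```python
import tensorflow as tf

# Cosine decay schedule with the values from the optimizer config above
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=2e-06,
    decay_steps=312,
    alpha=5e-09,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    weight_decay=0.001,
    clipnorm=1,
)
```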
### Training results
| Train Loss | Validation Loss | Train Map3 | Train Lr | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 0.6490 | 0.9673 | 0.7783 | 5.0733553e-11 | 0 |
### Framework versions
- Transformers 4.35.0.dev0
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
fearvel/CutifiedAnimeCharacterDesign_v2
|
fearvel
| 2023-10-09T10:45:02Z | 3 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"StableDiffusionPipeline",
"stable-diffusion-diffusers",
"region:us"
] |
text-to-image
| 2023-10-09T10:22:45Z |
---
tags:
- stable-diffusion
- text-to-image
- diffusers
- StableDiffusionPipeline
- stable-diffusion-diffusers
---
## Model

|
fearvel/CutifiedAnimeCharacterDesign_v3
|
fearvel
| 2023-10-09T10:44:15Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"StableDiffusionPipeline",
"stable-diffusion-diffusers",
"region:us"
] |
text-to-image
| 2023-10-09T10:22:50Z |
---
tags:
- stable-diffusion
- text-to-image
- diffusers
- StableDiffusionPipeline
- stable-diffusion-diffusers
---
## Model

|
RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned-kin-sent3
|
RogerB
| 2023-10-09T10:43:21Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned",
"base_model:finetune:RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T10:37:27Z |
---
base_model: RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned-kin-sent3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned-kin-sent3
This model is a fine-tuned version of [RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned](https://huggingface.co/RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8827
- F1: 0.6589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8904 | 1.0 | 1013 | 0.6720 | 0.7155 |
| 0.7007 | 2.0 | 2026 | 0.5533 | 0.7772 |
| 0.6036 | 3.0 | 3039 | 0.5293 | 0.7958 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
olonok/olonok_qa_model
|
olonok
| 2023-10-09T10:42:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-09T09:58:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: olonok_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# olonok_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4438 |
| 2.7937 | 2.0 | 500 | 1.8824 |
| 2.7937 | 3.0 | 750 | 1.8131 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mrabhi0505/h2ogpt-16k-codellama-7b-trained-model1
|
mrabhi0505
| 2023-10-09T10:41:46Z | 7 | 1 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-10-04T20:54:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
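As an illustration, the listed values correspond to the following 4-bit NF4 `BitsAndBytesConfig` — a reconstruction, not the original training code:
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```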
### Framework versions
- PEFT 0.5.0
|
AnjanaSivan/my-pet-cat-zxc
|
AnjanaSivan
| 2023-10-09T10:40:41Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-09T10:36:41Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-zxc Dreambooth model trained by AnjanaSivan following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: ISSAT-153
Sample pictures of this concept:

|
JaeYoonee/Finetune
|
JaeYoonee
| 2023-10-09T10:39:50Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | 2023-10-09T10:09:23Z |
---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Trupthi/my-pet-dog
|
Trupthi
| 2023-10-09T10:39:43Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-09T08:48:59Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Trupthi following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: Anurag-HYD-98
Sample pictures of this concept:

|
Wimplex/pythia-1b-ft-math
|
Wimplex
| 2023-10-09T10:39:24Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:finetune:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | 2023-10-09T09:01:11Z |
---
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- generated_from_trainer
model-index:
- name: pythia-1b-ft-math
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-1b-ft-math
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
desarrolloasesoreslocales/SetFitPrueba
|
desarrolloasesoreslocales
| 2023-10-09T10:37:54Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-10-03T10:24:27Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# desarrolloasesoreslocales/SetFitPrueba
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("desarrolloasesoreslocales/SetFitPrueba")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
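For completeness, a minimal training sketch of the two-step procedure described above; the base Sentence Transformer and the toy dataset are assumptions, since the card does not state what was used:
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset; replace with your own labeled examples
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# paraphrase-mpnet-base-v2 is the common SetFit default, assumed here
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds, num_iterations=20)
trainer.train()
```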
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
legacy107/adapter-flan-t5-large-bottleneck-adapter-covidqa2-100
|
legacy107
| 2023-10-09T10:37:22Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"t5",
"dataset:minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-90_10",
"region:us"
] | null | 2023-10-09T09:59:57Z |
---
tags:
- t5
- adapter-transformers
datasets:
- minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-90_10
---
# Adapter `legacy107/adapter-flan-t5-large-bottleneck-adapter-covidqa2-100` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-90_10](https://huggingface.co/datasets/minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-90_10/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```bash
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("legacy107/adapter-flan-t5-large-bottleneck-adapter-covidqa2-100", source="hf", set_active=True)
```
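Once the adapter is active, inference could look like the following sketch — the question is a placeholder, and whether `generate` works out of the box depends on the head saved with the adapter:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
question = "What is COVID-19?"  # hypothetical input
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```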
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned-kin-sent2
|
RogerB
| 2023-10-09T10:31:24Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned",
"base_model:finetune:RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T10:24:44Z |
---
base_model: RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned-kin-sent2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned-kin-sent2
This model is a fine-tuned version of [RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned](https://huggingface.co/RogerB/kinyaRoberta-large-kinte-finetuned-kinre-tweet-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9014
- F1: 0.6530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1000000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.886 | 1.0 | 1013 | 0.6714 | 0.7189 |
| 0.6984 | 2.0 | 2026 | 0.5605 | 0.7775 |
| 0.6106 | 3.0 | 3039 | 0.5315 | 0.7992 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Ridealist/distilbert-base-uncased-distilled-clinc
|
Ridealist
| 2023-10-09T10:25:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-09T02:50:17Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9419354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1002
- Accuracy: 0.9419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9029 | 1.0 | 318 | 0.5766 | 0.7216 |
| 0.4487 | 2.0 | 636 | 0.2855 | 0.8755 |
| 0.2535 | 3.0 | 954 | 0.1780 | 0.9287 |
| 0.1767 | 4.0 | 1272 | 0.1384 | 0.9319 |
| 0.142 | 5.0 | 1590 | 0.1212 | 0.9339 |
| 0.1245 | 6.0 | 1908 | 0.1115 | 0.9397 |
| 0.1143 | 7.0 | 2226 | 0.1058 | 0.9416 |
| 0.108 | 8.0 | 2544 | 0.1025 | 0.9423 |
| 0.1039 | 9.0 | 2862 | 0.1009 | 0.9423 |
| 0.102 | 10.0 | 3180 | 0.1002 | 0.9419 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.14.1
|
vishanki/my-pet-dog
|
vishanki
| 2023-10-09T10:15:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-09T10:02:58Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by vishanki following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
DarkCosmos/distilgpt2-finetuned-wikitext2
|
DarkCosmos
| 2023-10-09T10:05:41Z | 222 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-09T09:22:55Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7563 | 1.0 | 2334 | 3.6668 |
| 3.6417 | 2.0 | 4668 | 3.6485 |
| 3.5955 | 3.0 | 7002 | 3.6434 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.13.0
- Datasets 1.18.3
- Tokenizers 0.14.1
|