modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 00:41:42) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (555 classes) | tags (list, 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 00:40:24) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
yangwang825/ssast-audioset-librispeech-16-16 | yangwang825 | 2023-08-20T13:53:05Z | 162 | 1 | transformers | ["transformers", "pytorch", "audio-spectrogram-transformer", "feature-extraction", "audio-classification", "endpoints_compatible", "region:us"] | audio-classification | 2023-08-20T10:15:46Z |
---
pipeline_tag: audio-classification
---
|
smcmurtrey/Nous-Hermes-Llama2-13b-oasst1 | smcmurtrey | 2023-08-20T13:46:07Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-20T10:34:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
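For reference, a minimal sketch (not part of the original card) of the same settings expressed as a `transformers.BitsAndBytesConfig` when reloading the base model; the model name below is a placeholder, since the card does not state the base checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above (8-bit loading; the 4-bit
# fields are defaults that only take effect in 4-bit mode).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# "base-model-id" is a placeholder, not taken from the card.
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```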
### Framework versions
- PEFT 0.4.0
|
alokedeep/xlm-roberta-base-finetuned-panx-de-fr | alokedeep | 2023-08-20T13:41:29Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-08-20T13:25:55Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1603
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
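For orientation, a minimal sketch (not from the card) of how these hyperparameters map onto `transformers.TrainingArguments`; `output_dir` and `evaluation_strategy` are illustrative assumptions, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de-fr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)
```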
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2865 | 1.0 | 715 | 0.1777 | 0.8240 |
| 0.1463 | 2.0 | 1430 | 0.1603 | 0.8420 |
| 0.0937 | 3.0 | 2145 | 0.1603 | 0.8595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
satanicmangoes/ppo-LunarLander-v2 | satanicmangoes | 2023-08-20T13:31:50Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-20T13:31:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.36 +/- 27.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming, not taken from the repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
# The filename below is assumed; check the repository's files if it differs.
checkpoint = load_from_hub("satanicmangoes/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Edmon02/distilbert-base-uncased-distilled-clinc | Edmon02 | 2023-08-20T13:19:56Z | 103 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-08-20T13:05:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9480645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2931
- Accuracy: 0.9481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
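The model name indicates it was trained by knowledge distillation, which the card does not describe; below is a generic sketch of the usual soft-target distillation objective (the teacher model, temperature, and loss weighting are assumptions, not taken from the card):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Generic soft-target distillation objective; temperature and alpha are illustrative."""
    # KL divergence between softened student and teacher distributions
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Standard cross-entropy against the ground-truth intent labels
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```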
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.7836 | 0.7290 |
| 2.1522 | 2.0 | 636 | 0.8985 | 0.8613 |
| 2.1522 | 3.0 | 954 | 0.5248 | 0.9165 |
| 0.813 | 4.0 | 1272 | 0.3889 | 0.9394 |
| 0.3827 | 5.0 | 1590 | 0.3362 | 0.9426 |
| 0.3827 | 6.0 | 1908 | 0.3144 | 0.9461 |
| 0.2719 | 7.0 | 2226 | 0.3053 | 0.9481 |
| 0.2367 | 8.0 | 2544 | 0.2967 | 0.9477 |
| 0.2367 | 9.0 | 2862 | 0.2948 | 0.9474 |
| 0.223 | 10.0 | 3180 | 0.2931 | 0.9481 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
VicBeltran/poca-SoccerTwos | VicBeltran | 2023-08-20T13:15:40Z | 5 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us"] | reinforcement-learning | 2023-08-20T01:00:20Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VicBeltran/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Maxph2211/dqn-SpaceInvadersNoFrameskip-v4 | Maxph2211 | 2023-08-20T13:13:55Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-20T13:13:25Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Maxph2211 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Maxph2211 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Maxph2211
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
FZH1996/fed-lora | FZH1996 | 2023-08-20T13:00:16Z | 0 | 0 | null | ["arxiv:2106.09685", "arxiv:1907.11692", "arxiv:2006.03654", "arxiv:1902.00751", "arxiv:2101.00190", "region:us"] | null | 2023-08-17T08:16:23Z |
# LoRA: Low-Rank Adaptation of Large Language Models
*(For the radio communication technique, see [LoRa](https://lora-alliance.org/).)*
This repo contains the source code of the Python package `loralib` and several examples of how to integrate it with PyTorch models, such as those in HuggingFace.
We only support PyTorch for now.
See our paper for a detailed description of LoRA.
**LoRA: Low-Rank Adaptation of Large Language Models** <br>
*Edward J. Hu\*, Yelong Shen\*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen* <br>
Paper: https://arxiv.org/abs/2106.09685 <br>
*Update 2/2023: LoRA is now supported by the [State-of-the-art Parameter-Efficient Fine-Tuning (PEFT)](https://github.com/huggingface/peft) library by HuggingFace.*
LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights.
This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment, all without introducing inference latency.
LoRA also outperforms several other adaptation methods including adapter, prefix-tuning, and fine-tuning.
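As a rough illustration of where the savings come from (this recap is ours, not part of the original README): for a pre-trained weight $W_0 \in \mathbb{R}^{d \times k}$, LoRA freezes $W_0$ and learns a low-rank update

$$W = W_0 + \Delta W = W_0 + B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),$$

so the trainable parameters per adapted matrix drop from $dk$ to $r(d + k)$; for example, with $d = k = 1024$ and $r = 16$ that is roughly 33K instead of about 1.05M.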
We obtain results comparable or superior to full fine-tuning on the GLUE benchmark using [RoBERTa (Liu et al., 2019)](https://arxiv.org/abs/1907.11692) base and large and [DeBERTa (He et al., 2020)](https://arxiv.org/abs/2006.03654) XXL 1.5B, while only training and storing a fraction of the parameters. Click the numbers below to download the RoBERTa and DeBERTa LoRA checkpoints.
| | | RoBERTa base <br> Fine-tune | RoBERTa base <br> LoRA | DeBERTa XXL <br> Fine-tune | DeBERTa XXL <br> LoRA |
|---|-------------------------|----------------|--------------------------|-----------------|-----------------|
| | # of Trainable Params. | 125M | 0.8M | 1.5B | 4.7M |
| | MNLI (m-Acc/mm-Acc) | <b>87.6</b> | [<b>87.5</b>±.3/86.9±.3](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_mnli.bin) |91.7/<b>91.9</b>| [<b>91.9</b>±.1/<b>91.9</b>±.2](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_mnli.bin) |
| | SST2 (Acc) | 94.8 | [<b>95.1</b>±.2](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_sst2.bin) | <b>97.2</b> | [96.9±.2](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_sst2.bin) |
| | MRPC (Acc) | <b>90.2</b> | [<b>89.7</b>±.7](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_mrpc.bin) | 92.0 | [<b>92.6</b>±.6](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_mrpc.bin) |
| | CoLA (Matthew's Corr) | <b>63.6</b> | [<b>63.4</b>±1.2](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_cola.bin) | <b>72.0</b> | [<b>72.4</b>±1.1](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_cola.bin) |
| | QNLI (Acc) | 92.8 | [<b>93.3</b>±.3](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_qnli.bin) | <b>96.0</b> | [<b>96.0</b>±.1](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_qnli.bin) |
| | QQP (Acc) | <b>91.9</b> | [90.8±.1](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_qqp.bin) | 92.7 | [<b>92.9</b>±.1](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_qqp.bin) |
| | RTE (Acc) | 78.7 | [<b>86.6</b>±.7](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_rte.bin) | 93.9 | [<b>94.9</b>±.4](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_rte.bin) |
| | STSB (Pearson/Spearman Corr) | 91.2 | [<b>91.5</b>±.2/<b>91.3</b>±.2](https://github.com/microsoft/LoRA/releases/download/RoBERTa-base/roberta_base_lora_stsb.bin) |<b>92.9</b>/92.6| [<b>93.0</b>±.2/<b>92.9</b>±.3](https://github.com/microsoft/LoRA/releases/download/DeBERTa/deberta_v2_xxlarge_lora_stsb.bin) |
| | Average | 86.40 | <b>87.24</b> | 91.06 | <b>91.32</b> |
<i>Note: You still need the original pre-trained checkpoint from [HuggingFace](https://huggingface.co/) to use the LoRA checkpoints.</i>
Fine-tuning numbers are taken from [Liu et al. (2019)](https://arxiv.org/abs/1907.11692) and [He et al. (2020)](https://arxiv.org/abs/2006.03654). We include confidence intervals on results from our experiments. Please follow the instructions in `examples/NLU/` to reproduce our results.
On GPT-2, LoRA compares favorably to both full finetuning and other efficient tuning methods, such as [adapter (Houlsby et al., 2019)](https://arxiv.org/abs/1902.00751) and [prefix tuning (Li and Liang, 2021)](https://arxiv.org/abs/2101.00190). We evaluated on E2E NLG Challenge, DART, and WebNLG:
| | Method | # of Trainable Params | E2E (BLEU) | DART (BLEU) | WebNLG (BLEU-U/S/A) |
|---|---------------------|-----------------------|--------------|--------------|--------------------------------|
| | GPT-2 M (Fine-Tune) | 354.92M | 68.2 | 46.0 | 30.4/<b>63.2</b>/47.6 |
| | GPT-2 M (Adapter) | 0.37M | 66.3 | 42.4 | 45.1/54.5/50.2 |
| | GPT-2 M (Prefix) | 0.35M | 69.7 | 45.7 | 44.1/63.1/54.4 |
| | GPT-2 M (LoRA) | 0.35M |<b>70.4</b>±.1|<b>47.1</b>±.2| <b>46.7</b>±.4/62.1±.2/<b>55.3</b>±.2 |
| | GPT-2 L (Fine-Tune) | 774.03M | 68.5 | 46.5 | 41.7/<b>64.6</b>/54.2 |
| | GPT-2 L (Adapter) | 0.88M | 69.1±.1 | 45.7±.1 | <b>49.8</b>±.0/61.1±.0/56.0±.0 |
| | GPT-2 L (Prefix) | 0.77M | 70.3 | 46.5 | 47.0/64.2/56.4 |
| | GPT-2 L (LoRA) | 0.77M |<b>70.4</b>±.1|<b>47.5</b>±.1| 48.4±.3/<b>64.0</b>±.3/<b>57.0</b>±.1 |
Non-LoRA baselines, except for adapter on GPT-2 large, are taken from [Li and Liang (2021)](https://arxiv.org/abs/2101.00190). We include confidence intervals on results from our experiments.
Download the GPT-2 LoRA checkpoints:
* [GPT-2 Medium E2E](https://github.com/microsoft/LoRA/releases/download/GPT-2/gpt2_md_lora_e2e.pt) (1.5 MB)
* [GPT-2 Medium DART](https://github.com/microsoft/LoRA/releases/download/GPT-2/gpt2_md_lora_dart.pt) (1.5 MB)
* [GPT-2 Medium WebNLG](https://github.com/microsoft/LoRA/releases/download/GPT-2/gpt2_md_lora_webnlg.pt) (1.5 MB)
* [GPT-2 Large E2E](https://github.com/microsoft/LoRA/releases/download/GPT-2/gpt2_lg_lora_e2e.pt) (2.3 MB)
* [GPT-2 Large DART](https://github.com/microsoft/LoRA/releases/download/GPT-2/gpt2_lg_lora_dart.pt) (2.3 MB)
* [GPT-2 Large WebNLG](https://github.com/microsoft/LoRA/releases/download/GPT-2/gpt2_lg_lora_webnlg.pt) (2.3 MB)
Please follow the instructions in `examples/NLG/` to reproduce our results.
## Repository Overview
<i>(The initial release of this repo has been archived in the branch "snapshot-9-15-2021")</i>
There are several directories in this repo:
* [loralib/](loralib) contains the source code for the package `loralib`, which needs to be installed to run the examples we provide;
* [examples/NLG/](examples/NLG) contains an example implementation of LoRA in GPT-2 using our package, which can be used to reproduce the result in our paper;
* [examples/NLU/](examples/NLU) contains an example implementation of LoRA in RoBERTa and DeBERTa using our package, which produces competitive results on the GLUE benchmark;
* See how we use `loralib` in [GPT-2](examples/NLG/src/model.py), [RoBERTa](examples/NLU/src/transformers/models/roberta/modeling_roberta.py), and [DeBERTa v2](examples/NLU/src/transformers/models/deberta_v2/modeling_deberta_v2.py)
## Quickstart
1. Installing `loralib` is simple:
```
pip install loralib
# Alternatively
# pip install git+https://github.com/microsoft/LoRA
```
2. You can choose to adapt some layers by replacing them with counterparts implemented in `loralib`. We only support `nn.Linear`, `nn.Embedding`, and `nn.Conv2d` for now. We also support a `MergedLinear` for cases where a single `nn.Linear` represents more than one layer, such as in some implementations of the attention `qkv` projection (see Additional Notes for more).
```
# ===== Before =====
# layer = nn.Linear(in_features, out_features)
# ===== After ======
import loralib as lora
# Add a pair of low-rank adaptation matrices with rank r=16
layer = lora.Linear(in_features, out_features, r=16)
```
3. Before the training loop begins, mark only LoRA parameters as trainable.
```
import loralib as lora
model = BigModel()
# This sets requires_grad to False for all parameters without the string "lora_" in their names
lora.mark_only_lora_as_trainable(model)
# Training loop
for batch in dataloader:
...
```
4. When saving a checkpoint, generate a `state_dict` that only contains LoRA parameters.
```
# ===== Before =====
# torch.save(model.state_dict(), checkpoint_path)
# ===== After =====
torch.save(lora.lora_state_dict(model), checkpoint_path)
```
5. When loading a checkpoint using `load_state_dict`, be sure to set `strict=False`.
```
# Load the pretrained checkpoint first
model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
# Then load the LoRA checkpoint
model.load_state_dict(torch.load('ckpt_lora.pt'), strict=False)
```
#### Now training can proceed as usual.
## Additional Notes
1. While we focus on a simple yet effective setup in our examples, namely adapting only the `q` and `v` projections in a Transformer, LoRA can be applied to any subset of pre-trained weights. We encourage you to explore different configurations, such as adapting the embedding layer by replacing `nn.Embedding` with `lora.Embedding` and/or adapting the MLP layers. It's very likely that the optimal configuration varies for different model architectures and tasks.
2. Some Transformer implementations use a single `nn.Linear` for the query, key, and value projection matrices. If one wishes to constrain the rank of the updates to the individual matrices, one has to either break it up into three separate matrices or use `lora.MergedLinear`. Make sure to modify the checkpoint accordingly if you choose to break up the layer.
```
# ===== Before =====
# qkv_proj = nn.Linear(d_model, 3*d_model)
# ===== After =====
# Break it up (remember to modify the pretrained checkpoint accordingly)
q_proj = lora.Linear(d_model, d_model, r=8)
k_proj = nn.Linear(d_model, d_model)
v_proj = lora.Linear(d_model, d_model, r=8)
# Alternatively, use lora.MergedLinear (recommended)
qkv_proj = lora.MergedLinear(d_model, 3*d_model, r=8, enable_lora=[True, False, True])
```
3. Training bias vectors in tandem with LoRA might be a cost-efficient way to squeeze out extra task performance (if you tune the learning rate carefully). While we did not study its effect thoroughly in our paper, we make it easy to try in `lora`. You can mark some biases as trainable by passing "all" or "lora_only" to `bias=` when calling `mark_only_lora_as_trainable`. Remember to pass the corresponding `bias=` argument to `lora_state_dict` when saving a checkpoint.
```
# ===== Before =====
# lora.mark_only_lora_as_trainable(model) # Not training any bias vectors
# ===== After =====
# Training all bias vectors associated with modules we apply LoRA to
lora.mark_only_lora_as_trainable(model, bias='lora_only')
# Alternatively, we can train *all* bias vectors in the model, including LayerNorm biases
lora.mark_only_lora_as_trainable(model, bias='all')
# When saving a checkpoint, use the same bias= ('all' or 'lora_only')
torch.save(lora.lora_state_dict(model, bias='all'), checkpoint_path)
```
4. Calling `model.eval()` will trigger the merging of LoRA parameters with the corresponding pretrained ones, which eliminates additional latency for subsequent forward passes. Calling `model.train()` again will undo the merge. This can be disabled by passing `merge_weights=False` to LoRA layers.
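A small sketch of this behaviour (our example; `merge_weights` defaults to `True` for `loralib` layers):
```
import loralib as lora

layer = lora.Linear(1024, 1024, r=16)
layer.eval()    # folds the low-rank update into the frozen weight: no extra inference latency
layer.train()   # un-merges, so the LoRA matrices are trained separately again

# Keep the LoRA path separate even in eval mode
layer_no_merge = lora.Linear(1024, 1024, r=16, merge_weights=False)
```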
## Contact
Please contact us or post an issue if you have any questions.
For questions related to the package `loralib`:
* Edward Hu (edward@edwardjhu.com)
* Phillip Wallis (phwallis@microsoft.com)
* Weizhu Chen (wzchen@microsoft.com)
The GPT-2 example:
* Phillip Wallis (phwallis@microsoft.com)
* Yelong Shen (yeshe@microsoft.com)
The RoBERTa/DeBERTa example:
* Lu Wang (luw@microsoft.com)
## Acknowledgements
We thank in alphabetical order Jianfeng Gao, Jade Huang, Jiayuan Huang, Lisa Xiang Li, Xiaodong Liu, Yabin Liu, Benjamin Van Durme, Luis Vargas, Haoran Wei, Peter Welinder, and Greg Yang for providing valuable feedback.
## Citation
```
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
|
nagupv/Stable13B_contextLLMExam_18kv2_15k3k_f0 | nagupv | 2023-08-20T12:29:02Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-20T12:28:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
nolanoAI/lordcoder-v0-14-9B | nolanoAI | 2023-08-20T12:29:01Z | 13 | 0 | transformers | ["transformers", "pytorch", "lordcoder", "text-generation", "custom_code", "license:bigcode-openrail-m", "autotrain_compatible", "region:us"] | text-generation | 2023-08-18T09:52:27Z |
---
license: bigcode-openrail-m
---
## LoRDCoder v0 14.9B
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained("nolanoAI/lordcoder-v0-14-9B", trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained("nolanoAI/lordcoder-v0-14-9B", trust_remote_code=True)

inputs = {k: v.to(device) for k, v in tokenizer('# PyTorch CNN on MNIST\nimport torch\n', return_tensors='pt').items()}
generated_ids = model.generate(
    **inputs,
    use_cache=True,
    max_new_tokens=500,
    temperature=0.1,
    top_p=0.95,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode and print the generated completion
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
|
nolanoAI/lordcoder-v0-13-2B | nolanoAI | 2023-08-20T12:27:49Z | 18 | 0 | transformers | ["transformers", "pytorch", "lordcoder", "text-generation", "custom_code", "license:bigcode-openrail-m", "autotrain_compatible", "region:us"] | text-generation | 2023-08-18T09:19:26Z |
---
license: bigcode-openrail-m
---
## LoRDCoder v0 13.2B
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained("nolanoAI/lordcoder-v0-13-2B", trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained("nolanoAI/lordcoder-v0-13-2B", trust_remote_code=True)

inputs = {k: v.to(device) for k, v in tokenizer('# PyTorch CNN on MNIST\nimport torch\n', return_tensors='pt').items()}
generated_ids = model.generate(
    **inputs,
    use_cache=True,
    max_new_tokens=500,
    temperature=0.1,
    top_p=0.95,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode and print the generated completion
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
|
nolanoAI/lordcoder-v0-12-6B | nolanoAI | 2023-08-20T12:27:41Z | 16 | 0 | transformers | ["transformers", "pytorch", "lordcoder", "text-generation", "custom_code", "license:bigcode-openrail-m", "autotrain_compatible", "region:us"] | text-generation | 2023-08-18T09:06:18Z |
---
license: bigcode-openrail-m
---
## LoRDCoder v0 12.6B
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained("nolanoAI/lordcoder-v0-12-6B", trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained("nolanoAI/lordcoder-v0-12-6B", trust_remote_code=True)

inputs = {k: v.to(device) for k, v in tokenizer('# PyTorch CNN on MNIST\nimport torch\n', return_tensors='pt').items()}
generated_ids = model.generate(
    **inputs,
    use_cache=True,
    max_new_tokens=500,
    temperature=0.1,
    top_p=0.95,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode and print the generated completion
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
|
edwardjjj/a2c-PandaReachDense-v3 | edwardjjj | 2023-08-20T12:23:08Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-20T12:21:06Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming, not taken from the repository):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
# The filename below is assumed; check the repository's files if it differs.
checkpoint = load_from_hub("edwardjjj/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
mkuntz/a2c-PandaReachDense-v2 | mkuntz | 2023-08-20T12:18:50Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us"] | reinforcement-learning | 2023-02-15T21:59:56Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.75 +/- 2.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming, not taken from the repository):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
# The filename below is assumed; check the repository's files if it differs.
checkpoint = load_from_hub("mkuntz/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
kawinduwijewardhane/text-summarization-AI | kawinduwijewardhane | 2023-08-20T12:12:45Z | 0 | 0 | transformers | ["transformers", "summarization", "endpoints_compatible", "region:us"] | summarization | 2023-08-20T12:12:05Z |
---
library_name: transformers
pipeline_tag: summarization
---
|
GuillermoSC/Whisper_SM_EN_GS | GuillermoSC | 2023-08-20T12:07:37Z | 21 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "dataset:speechcolab/gigaspeech", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-06-11T18:56:15Z |
---
language:
- en
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- speechcolab/gigaspeech
model-index:
- name: Whisper_SM_EN_GS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_SM_EN_GS
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the gigaspeech dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 40
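For orientation, a minimal sketch (not from the card) of how these values map onto `transformers.Seq2SeqTrainingArguments`; the effective batch size of 9 comes from 3 × 3 gradient accumulation, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; unlisted arguments stay at defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="Whisper_SM_EN_GS",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=3,  # 3 x 3 = 9 effective train batch size
    warmup_steps=50,
    max_steps=40,
    lr_scheduler_type="linear",
    seed=42,
)
```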
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
alexeynoskov/Reinforce-Pixelcopter-PLE-v0 | alexeynoskov | 2023-08-20T11:57:36Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-08-14T13:52:53Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 68.30 +/- 54.85
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rhythmsaparia/llama2_finetuned_chatbot | rhythmsaparia | 2023-08-20T11:50:10Z | 0 | 0 | null | ["tensorboard", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "region:us"] | null | 2023-08-19T10:41:36Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2_finetuned_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_finetuned_chatbot
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jncraton/bge-small-en-ct2-int8 | jncraton | 2023-08-20T11:47:29Z | 13 | 0 | transformers | ["transformers", "mteb", "sentence transformers", "sentence-similarity", "en", "license:mit", "model-index", "endpoints_compatible", "region:us"] | sentence-similarity | 2023-08-20T11:37:33Z |
---
tags:
- mteb
- sentence transformers
model-index:
- name: bge-small-en
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.34328358208955
- type: ap
value: 37.59947775195661
- type: f1
value: 68.548415491933
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.04527499999999
- type: ap
value: 89.60696356772135
- type: f1
value: 93.03361469382438
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.08
- type: f1
value: 45.66249835363254
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.205999999999996
- type: map_at_10
value: 50.782000000000004
- type: map_at_100
value: 51.547
- type: map_at_1000
value: 51.554
- type: map_at_3
value: 46.515
- type: map_at_5
value: 49.296
- type: mrr_at_1
value: 35.632999999999996
- type: mrr_at_10
value: 50.958999999999996
- type: mrr_at_100
value: 51.724000000000004
- type: mrr_at_1000
value: 51.731
- type: mrr_at_3
value: 46.669
- type: mrr_at_5
value: 49.439
- type: ndcg_at_1
value: 35.205999999999996
- type: ndcg_at_10
value: 58.835
- type: ndcg_at_100
value: 62.095
- type: ndcg_at_1000
value: 62.255
- type: ndcg_at_3
value: 50.255
- type: ndcg_at_5
value: 55.296
- type: precision_at_1
value: 35.205999999999996
- type: precision_at_10
value: 8.421
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.365
- type: precision_at_5
value: 14.680000000000001
- type: recall_at_1
value: 35.205999999999996
- type: recall_at_10
value: 84.211
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.095
- type: recall_at_5
value: 73.4
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.52644476278646
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.973045724188964
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.28285314871488
- type: mrr
value: 74.52743701358659
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 80.09041909160327
- type: cos_sim_spearman
value: 79.96266537706944
- type: euclidean_pearson
value: 79.50774978162241
- type: euclidean_spearman
value: 79.9144715078551
- type: manhattan_pearson
value: 79.2062139879302
- type: manhattan_spearman
value: 79.35000081468212
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.31493506493506
- type: f1
value: 85.2704557977762
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.6837242810816
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.38881249555897
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.884999999999998
- type: map_at_10
value: 39.574
- type: map_at_100
value: 40.993
- type: map_at_1000
value: 41.129
- type: map_at_3
value: 36.089
- type: map_at_5
value: 38.191
- type: mrr_at_1
value: 34.477999999999994
- type: mrr_at_10
value: 45.411
- type: mrr_at_100
value: 46.089999999999996
- type: mrr_at_1000
value: 46.147
- type: mrr_at_3
value: 42.346000000000004
- type: mrr_at_5
value: 44.292
- type: ndcg_at_1
value: 34.477999999999994
- type: ndcg_at_10
value: 46.123999999999995
- type: ndcg_at_100
value: 51.349999999999994
- type: ndcg_at_1000
value: 53.578
- type: ndcg_at_3
value: 40.824
- type: ndcg_at_5
value: 43.571
- type: precision_at_1
value: 34.477999999999994
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 19.742
- type: precision_at_5
value: 14.421000000000001
- type: recall_at_1
value: 27.884999999999998
- type: recall_at_10
value: 59.087
- type: recall_at_100
value: 80.609
- type: recall_at_1000
value: 95.054
- type: recall_at_3
value: 44.082
- type: recall_at_5
value: 51.593999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.639
- type: map_at_10
value: 40.047
- type: map_at_100
value: 41.302
- type: map_at_1000
value: 41.425
- type: map_at_3
value: 37.406
- type: map_at_5
value: 38.934000000000005
- type: mrr_at_1
value: 37.707
- type: mrr_at_10
value: 46.082
- type: mrr_at_100
value: 46.745
- type: mrr_at_1000
value: 46.786
- type: mrr_at_3
value: 43.980999999999995
- type: mrr_at_5
value: 45.287
- type: ndcg_at_1
value: 37.707
- type: ndcg_at_10
value: 45.525
- type: ndcg_at_100
value: 49.976
- type: ndcg_at_1000
value: 51.94499999999999
- type: ndcg_at_3
value: 41.704
- type: ndcg_at_5
value: 43.596000000000004
- type: precision_at_1
value: 37.707
- type: precision_at_10
value: 8.465
- type: precision_at_100
value: 1.375
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 19.979
- type: precision_at_5
value: 14.115
- type: recall_at_1
value: 30.639
- type: recall_at_10
value: 54.775
- type: recall_at_100
value: 73.678
- type: recall_at_1000
value: 86.142
- type: recall_at_3
value: 43.230000000000004
- type: recall_at_5
value: 48.622
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.038
- type: map_at_10
value: 49.922
- type: map_at_100
value: 51.032
- type: map_at_1000
value: 51.085
- type: map_at_3
value: 46.664
- type: map_at_5
value: 48.588
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 53.566
- type: mrr_at_100
value: 54.318999999999996
- type: mrr_at_1000
value: 54.348
- type: mrr_at_3
value: 51.066
- type: mrr_at_5
value: 52.649
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 55.676
- type: ndcg_at_100
value: 60.126000000000005
- type: ndcg_at_1000
value: 61.208
- type: ndcg_at_3
value: 50.20400000000001
- type: ndcg_at_5
value: 53.038
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 8.953
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.256999999999998
- type: precision_at_5
value: 15.524
- type: recall_at_1
value: 38.038
- type: recall_at_10
value: 69.15
- type: recall_at_100
value: 88.31599999999999
- type: recall_at_1000
value: 95.993
- type: recall_at_3
value: 54.663
- type: recall_at_5
value: 61.373
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.872
- type: map_at_10
value: 32.912
- type: map_at_100
value: 33.972
- type: map_at_1000
value: 34.046
- type: map_at_3
value: 30.361
- type: map_at_5
value: 31.704
- type: mrr_at_1
value: 26.779999999999998
- type: mrr_at_10
value: 34.812
- type: mrr_at_100
value: 35.754999999999995
- type: mrr_at_1000
value: 35.809000000000005
- type: mrr_at_3
value: 32.335
- type: mrr_at_5
value: 33.64
- type: ndcg_at_1
value: 26.779999999999998
- type: ndcg_at_10
value: 37.623
- type: ndcg_at_100
value: 42.924
- type: ndcg_at_1000
value: 44.856
- type: ndcg_at_3
value: 32.574
- type: ndcg_at_5
value: 34.842
- type: precision_at_1
value: 26.779999999999998
- type: precision_at_10
value: 5.729
- type: precision_at_100
value: 0.886
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.559
- type: precision_at_5
value: 9.469
- type: recall_at_1
value: 24.872
- type: recall_at_10
value: 50.400999999999996
- type: recall_at_100
value: 74.954
- type: recall_at_1000
value: 89.56
- type: recall_at_3
value: 36.726
- type: recall_at_5
value: 42.138999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.803
- type: map_at_10
value: 24.348
- type: map_at_100
value: 25.56
- type: map_at_1000
value: 25.668000000000003
- type: map_at_3
value: 21.811
- type: map_at_5
value: 23.287
- type: mrr_at_1
value: 20.771
- type: mrr_at_10
value: 28.961
- type: mrr_at_100
value: 29.979
- type: mrr_at_1000
value: 30.046
- type: mrr_at_3
value: 26.555
- type: mrr_at_5
value: 28.060000000000002
- type: ndcg_at_1
value: 20.771
- type: ndcg_at_10
value: 29.335
- type: ndcg_at_100
value: 35.188
- type: ndcg_at_1000
value: 37.812
- type: ndcg_at_3
value: 24.83
- type: ndcg_at_5
value: 27.119
- type: precision_at_1
value: 20.771
- type: precision_at_10
value: 5.4350000000000005
- type: precision_at_100
value: 0.9480000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.982
- type: precision_at_5
value: 8.831
- type: recall_at_1
value: 16.803
- type: recall_at_10
value: 40.039
- type: recall_at_100
value: 65.83200000000001
- type: recall_at_1000
value: 84.478
- type: recall_at_3
value: 27.682000000000002
- type: recall_at_5
value: 33.535
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.345
- type: map_at_10
value: 37.757000000000005
- type: map_at_100
value: 39.141
- type: map_at_1000
value: 39.262
- type: map_at_3
value: 35.183
- type: map_at_5
value: 36.592
- type: mrr_at_1
value: 34.649
- type: mrr_at_10
value: 43.586999999999996
- type: mrr_at_100
value: 44.481
- type: mrr_at_1000
value: 44.542
- type: mrr_at_3
value: 41.29
- type: mrr_at_5
value: 42.642
- type: ndcg_at_1
value: 34.649
- type: ndcg_at_10
value: 43.161
- type: ndcg_at_100
value: 48.734
- type: ndcg_at_1000
value: 51.046
- type: ndcg_at_3
value: 39.118
- type: ndcg_at_5
value: 41.022
- type: precision_at_1
value: 34.649
- type: precision_at_10
value: 7.603
- type: precision_at_100
value: 1.209
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 18.319
- type: precision_at_5
value: 12.839
- type: recall_at_1
value: 28.345
- type: recall_at_10
value: 53.367
- type: recall_at_100
value: 76.453
- type: recall_at_1000
value: 91.82000000000001
- type: recall_at_3
value: 41.636
- type: recall_at_5
value: 46.760000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.419
- type: map_at_10
value: 31.716
- type: map_at_100
value: 33.152
- type: map_at_1000
value: 33.267
- type: map_at_3
value: 28.74
- type: map_at_5
value: 30.48
- type: mrr_at_1
value: 28.310999999999996
- type: mrr_at_10
value: 37.039
- type: mrr_at_100
value: 38.09
- type: mrr_at_1000
value: 38.145
- type: mrr_at_3
value: 34.437
- type: mrr_at_5
value: 36.024
- type: ndcg_at_1
value: 28.310999999999996
- type: ndcg_at_10
value: 37.41
- type: ndcg_at_100
value: 43.647999999999996
- type: ndcg_at_1000
value: 46.007
- type: ndcg_at_3
value: 32.509
- type: ndcg_at_5
value: 34.943999999999996
- type: precision_at_1
value: 28.310999999999996
- type: precision_at_10
value: 6.963
- type: precision_at_100
value: 1.1860000000000002
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 15.867999999999999
- type: precision_at_5
value: 11.507000000000001
- type: recall_at_1
value: 22.419
- type: recall_at_10
value: 49.28
- type: recall_at_100
value: 75.802
- type: recall_at_1000
value: 92.032
- type: recall_at_3
value: 35.399
- type: recall_at_5
value: 42.027
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.669249999999998
- type: map_at_10
value: 33.332583333333325
- type: map_at_100
value: 34.557833333333335
- type: map_at_1000
value: 34.67141666666666
- type: map_at_3
value: 30.663166666666662
- type: map_at_5
value: 32.14883333333333
- type: mrr_at_1
value: 29.193833333333334
- type: mrr_at_10
value: 37.47625
- type: mrr_at_100
value: 38.3545
- type: mrr_at_1000
value: 38.413166666666676
- type: mrr_at_3
value: 35.06741666666667
- type: mrr_at_5
value: 36.450666666666656
- type: ndcg_at_1
value: 29.193833333333334
- type: ndcg_at_10
value: 38.505416666666676
- type: ndcg_at_100
value: 43.81125
- type: ndcg_at_1000
value: 46.09558333333333
- type: ndcg_at_3
value: 33.90916666666667
- type: ndcg_at_5
value: 36.07666666666666
- type: precision_at_1
value: 29.193833333333334
- type: precision_at_10
value: 6.7251666666666665
- type: precision_at_100
value: 1.1058333333333332
- type: precision_at_1000
value: 0.14833333333333332
- type: precision_at_3
value: 15.554166666666665
- type: precision_at_5
value: 11.079250000000002
- type: recall_at_1
value: 24.669249999999998
- type: recall_at_10
value: 49.75583333333332
- type: recall_at_100
value: 73.06908333333332
- type: recall_at_1000
value: 88.91316666666667
- type: recall_at_3
value: 36.913250000000005
- type: recall_at_5
value: 42.48641666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.044999999999998
- type: map_at_10
value: 30.349999999999998
- type: map_at_100
value: 31.273
- type: map_at_1000
value: 31.362000000000002
- type: map_at_3
value: 28.508
- type: map_at_5
value: 29.369
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.12
- type: mrr_at_100
value: 33.904
- type: mrr_at_1000
value: 33.967000000000006
- type: mrr_at_3
value: 31.365
- type: mrr_at_5
value: 32.124
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.214
- type: ndcg_at_100
value: 38.681
- type: ndcg_at_1000
value: 40.926
- type: ndcg_at_3
value: 30.725
- type: ndcg_at_5
value: 31.967000000000002
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.215
- type: precision_at_100
value: 0.807
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 8.712
- type: recall_at_1
value: 24.044999999999998
- type: recall_at_10
value: 43.456
- type: recall_at_100
value: 63.675000000000004
- type: recall_at_1000
value: 80.05499999999999
- type: recall_at_3
value: 33.561
- type: recall_at_5
value: 36.767
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.672
- type: map_at_10
value: 22.641
- type: map_at_100
value: 23.75
- type: map_at_1000
value: 23.877000000000002
- type: map_at_3
value: 20.219
- type: map_at_5
value: 21.648
- type: mrr_at_1
value: 18.823
- type: mrr_at_10
value: 26.101999999999997
- type: mrr_at_100
value: 27.038
- type: mrr_at_1000
value: 27.118
- type: mrr_at_3
value: 23.669
- type: mrr_at_5
value: 25.173000000000002
- type: ndcg_at_1
value: 18.823
- type: ndcg_at_10
value: 27.176000000000002
- type: ndcg_at_100
value: 32.42
- type: ndcg_at_1000
value: 35.413
- type: ndcg_at_3
value: 22.756999999999998
- type: ndcg_at_5
value: 25.032
- type: precision_at_1
value: 18.823
- type: precision_at_10
value: 5.034000000000001
- type: precision_at_100
value: 0.895
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 10.771
- type: precision_at_5
value: 8.1
- type: recall_at_1
value: 15.672
- type: recall_at_10
value: 37.296
- type: recall_at_100
value: 60.863
- type: recall_at_1000
value: 82.234
- type: recall_at_3
value: 25.330000000000002
- type: recall_at_5
value: 30.964000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.633
- type: map_at_10
value: 32.858
- type: map_at_100
value: 34.038000000000004
- type: map_at_1000
value: 34.141
- type: map_at_3
value: 30.209000000000003
- type: map_at_5
value: 31.567
- type: mrr_at_1
value: 28.358
- type: mrr_at_10
value: 36.433
- type: mrr_at_100
value: 37.352000000000004
- type: mrr_at_1000
value: 37.41
- type: mrr_at_3
value: 34.033
- type: mrr_at_5
value: 35.246
- type: ndcg_at_1
value: 28.358
- type: ndcg_at_10
value: 37.973
- type: ndcg_at_100
value: 43.411
- type: ndcg_at_1000
value: 45.747
- type: ndcg_at_3
value: 32.934999999999995
- type: ndcg_at_5
value: 35.013
- type: precision_at_1
value: 28.358
- type: precision_at_10
value: 6.418
- type: precision_at_100
value: 1.02
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.677000000000001
- type: precision_at_5
value: 10.335999999999999
- type: recall_at_1
value: 24.633
- type: recall_at_10
value: 50.048
- type: recall_at_100
value: 73.821
- type: recall_at_1000
value: 90.046
- type: recall_at_3
value: 36.284
- type: recall_at_5
value: 41.370000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.133
- type: map_at_10
value: 31.491999999999997
- type: map_at_100
value: 33.062000000000005
- type: map_at_1000
value: 33.256
- type: map_at_3
value: 28.886
- type: map_at_5
value: 30.262
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.144
- type: mrr_at_100
value: 37.14
- type: mrr_at_1000
value: 37.191
- type: mrr_at_3
value: 33.762
- type: mrr_at_5
value: 34.997
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 36.951
- type: ndcg_at_100
value: 43.287
- type: ndcg_at_1000
value: 45.777
- type: ndcg_at_3
value: 32.786
- type: ndcg_at_5
value: 34.65
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.22899999999999998
- type: precision_at_3
value: 15.481
- type: precision_at_5
value: 11.186
- type: recall_at_1
value: 23.133
- type: recall_at_10
value: 47.285
- type: recall_at_100
value: 76.176
- type: recall_at_1000
value: 92.176
- type: recall_at_3
value: 35.223
- type: recall_at_5
value: 40.142
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.547
- type: map_at_10
value: 26.374
- type: map_at_100
value: 27.419
- type: map_at_1000
value: 27.539
- type: map_at_3
value: 23.882
- type: map_at_5
value: 25.163999999999998
- type: mrr_at_1
value: 21.442
- type: mrr_at_10
value: 28.458
- type: mrr_at_100
value: 29.360999999999997
- type: mrr_at_1000
value: 29.448999999999998
- type: mrr_at_3
value: 25.97
- type: mrr_at_5
value: 27.273999999999997
- type: ndcg_at_1
value: 21.442
- type: ndcg_at_10
value: 30.897000000000002
- type: ndcg_at_100
value: 35.99
- type: ndcg_at_1000
value: 38.832
- type: ndcg_at_3
value: 25.944
- type: ndcg_at_5
value: 28.126
- type: precision_at_1
value: 21.442
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8109999999999999
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.911
- type: recall_at_1
value: 19.547
- type: recall_at_10
value: 42.886
- type: recall_at_100
value: 66.64999999999999
- type: recall_at_1000
value: 87.368
- type: recall_at_3
value: 29.143
- type: recall_at_5
value: 34.544000000000004
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.572
- type: map_at_10
value: 25.312
- type: map_at_100
value: 27.062
- type: map_at_1000
value: 27.253
- type: map_at_3
value: 21.601
- type: map_at_5
value: 23.473
- type: mrr_at_1
value: 34.984
- type: mrr_at_10
value: 46.406
- type: mrr_at_100
value: 47.179
- type: mrr_at_1000
value: 47.21
- type: mrr_at_3
value: 43.485
- type: mrr_at_5
value: 45.322
- type: ndcg_at_1
value: 34.984
- type: ndcg_at_10
value: 34.344
- type: ndcg_at_100
value: 41.015
- type: ndcg_at_1000
value: 44.366
- type: ndcg_at_3
value: 29.119
- type: ndcg_at_5
value: 30.825999999999997
- type: precision_at_1
value: 34.984
- type: precision_at_10
value: 10.358
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 21.368000000000002
- type: precision_at_5
value: 15.948
- type: recall_at_1
value: 15.572
- type: recall_at_10
value: 39.367999999999995
- type: recall_at_100
value: 62.183
- type: recall_at_1000
value: 80.92200000000001
- type: recall_at_3
value: 26.131999999999998
- type: recall_at_5
value: 31.635999999999996
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.848
- type: map_at_10
value: 19.25
- type: map_at_100
value: 27.193
- type: map_at_1000
value: 28.721999999999998
- type: map_at_3
value: 13.968
- type: map_at_5
value: 16.283
- type: mrr_at_1
value: 68.75
- type: mrr_at_10
value: 76.25
- type: mrr_at_100
value: 76.534
- type: mrr_at_1000
value: 76.53999999999999
- type: mrr_at_3
value: 74.667
- type: mrr_at_5
value: 75.86699999999999
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 41.426
- type: ndcg_at_100
value: 45.660000000000004
- type: ndcg_at_1000
value: 53.02
- type: ndcg_at_3
value: 46.581
- type: ndcg_at_5
value: 43.836999999999996
- type: precision_at_1
value: 68.75
- type: precision_at_10
value: 32.800000000000004
- type: precision_at_100
value: 10.440000000000001
- type: precision_at_1000
value: 1.9980000000000002
- type: precision_at_3
value: 49.667
- type: precision_at_5
value: 42.25
- type: recall_at_1
value: 8.848
- type: recall_at_10
value: 24.467
- type: recall_at_100
value: 51.344
- type: recall_at_1000
value: 75.235
- type: recall_at_3
value: 15.329
- type: recall_at_5
value: 18.892999999999997
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.95
- type: f1
value: 43.44563593360779
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 78.036
- type: map_at_10
value: 85.639
- type: map_at_100
value: 85.815
- type: map_at_1000
value: 85.829
- type: map_at_3
value: 84.795
- type: map_at_5
value: 85.336
- type: mrr_at_1
value: 84.353
- type: mrr_at_10
value: 90.582
- type: mrr_at_100
value: 90.617
- type: mrr_at_1000
value: 90.617
- type: mrr_at_3
value: 90.132
- type: mrr_at_5
value: 90.447
- type: ndcg_at_1
value: 84.353
- type: ndcg_at_10
value: 89.003
- type: ndcg_at_100
value: 89.60000000000001
- type: ndcg_at_1000
value: 89.836
- type: ndcg_at_3
value: 87.81400000000001
- type: ndcg_at_5
value: 88.478
- type: precision_at_1
value: 84.353
- type: precision_at_10
value: 10.482
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.257999999999996
- type: precision_at_5
value: 20.465
- type: recall_at_1
value: 78.036
- type: recall_at_10
value: 94.517
- type: recall_at_100
value: 96.828
- type: recall_at_1000
value: 98.261
- type: recall_at_3
value: 91.12
- type: recall_at_5
value: 92.946
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.191
- type: map_at_10
value: 32.369
- type: map_at_100
value: 34.123999999999995
- type: map_at_1000
value: 34.317
- type: map_at_3
value: 28.71
- type: map_at_5
value: 30.607
- type: mrr_at_1
value: 40.894999999999996
- type: mrr_at_10
value: 48.842
- type: mrr_at_100
value: 49.599
- type: mrr_at_1000
value: 49.647000000000006
- type: mrr_at_3
value: 46.785
- type: mrr_at_5
value: 47.672
- type: ndcg_at_1
value: 40.894999999999996
- type: ndcg_at_10
value: 39.872
- type: ndcg_at_100
value: 46.126
- type: ndcg_at_1000
value: 49.476
- type: ndcg_at_3
value: 37.153000000000006
- type: ndcg_at_5
value: 37.433
- type: precision_at_1
value: 40.894999999999996
- type: precision_at_10
value: 10.818
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 25.051000000000002
- type: precision_at_5
value: 17.531
- type: recall_at_1
value: 20.191
- type: recall_at_10
value: 45.768
- type: recall_at_100
value: 68.82000000000001
- type: recall_at_1000
value: 89.133
- type: recall_at_3
value: 33.296
- type: recall_at_5
value: 38.022
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.257
- type: map_at_10
value: 61.467000000000006
- type: map_at_100
value: 62.364
- type: map_at_1000
value: 62.424
- type: map_at_3
value: 58.228
- type: map_at_5
value: 60.283
- type: mrr_at_1
value: 78.515
- type: mrr_at_10
value: 84.191
- type: mrr_at_100
value: 84.378
- type: mrr_at_1000
value: 84.385
- type: mrr_at_3
value: 83.284
- type: mrr_at_5
value: 83.856
- type: ndcg_at_1
value: 78.515
- type: ndcg_at_10
value: 69.78999999999999
- type: ndcg_at_100
value: 72.886
- type: ndcg_at_1000
value: 74.015
- type: ndcg_at_3
value: 65.23
- type: ndcg_at_5
value: 67.80199999999999
- type: precision_at_1
value: 78.515
- type: precision_at_10
value: 14.519000000000002
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.702
- type: precision_at_5
value: 27.046999999999997
- type: recall_at_1
value: 39.257
- type: recall_at_10
value: 72.59299999999999
- type: recall_at_100
value: 84.679
- type: recall_at_1000
value: 92.12
- type: recall_at_3
value: 62.552
- type: recall_at_5
value: 67.616
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.5152
- type: ap
value: 87.64584669595709
- type: f1
value: 91.50605576428437
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.926000000000002
- type: map_at_10
value: 34.049
- type: map_at_100
value: 35.213
- type: map_at_1000
value: 35.265
- type: map_at_3
value: 30.309
- type: map_at_5
value: 32.407000000000004
- type: mrr_at_1
value: 22.55
- type: mrr_at_10
value: 34.657
- type: mrr_at_100
value: 35.760999999999996
- type: mrr_at_1000
value: 35.807
- type: mrr_at_3
value: 30.989
- type: mrr_at_5
value: 33.039
- type: ndcg_at_1
value: 22.55
- type: ndcg_at_10
value: 40.842
- type: ndcg_at_100
value: 46.436
- type: ndcg_at_1000
value: 47.721999999999994
- type: ndcg_at_3
value: 33.209
- type: ndcg_at_5
value: 36.943
- type: precision_at_1
value: 22.55
- type: precision_at_10
value: 6.447
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.136000000000001
- type: precision_at_5
value: 10.381
- type: recall_at_1
value: 21.926000000000002
- type: recall_at_10
value: 61.724999999999994
- type: recall_at_100
value: 87.604
- type: recall_at_1000
value: 97.421
- type: recall_at_3
value: 40.944
- type: recall_at_5
value: 49.915
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.54765161878704
- type: f1
value: 93.3298945415573
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.71591427268582
- type: f1
value: 59.32113870474471
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.83053127101547
- type: f1
value: 73.60757944876475
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.72562205783457
- type: f1
value: 78.63761662505502
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.37935633767996
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.55270546130387
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.462692753143834
- type: mrr
value: 31.497569753511563
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.646
- type: map_at_10
value: 12.498
- type: map_at_100
value: 15.486
- type: map_at_1000
value: 16.805999999999997
- type: map_at_3
value: 9.325
- type: map_at_5
value: 10.751
- type: mrr_at_1
value: 43.034
- type: mrr_at_10
value: 52.662
- type: mrr_at_100
value: 53.189
- type: mrr_at_1000
value: 53.25
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 51.92
- type: ndcg_at_1
value: 41.796
- type: ndcg_at_10
value: 33.477000000000004
- type: ndcg_at_100
value: 29.996000000000002
- type: ndcg_at_1000
value: 38.864
- type: ndcg_at_3
value: 38.940000000000005
- type: ndcg_at_5
value: 36.689
- type: precision_at_1
value: 43.034
- type: precision_at_10
value: 24.799
- type: precision_at_100
value: 7.432999999999999
- type: precision_at_1000
value: 1.9929999999999999
- type: precision_at_3
value: 36.842000000000006
- type: precision_at_5
value: 32.135999999999996
- type: recall_at_1
value: 5.646
- type: recall_at_10
value: 15.963
- type: recall_at_100
value: 29.492
- type: recall_at_1000
value: 61.711000000000006
- type: recall_at_3
value: 10.585
- type: recall_at_5
value: 12.753999999999998
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.602
- type: map_at_10
value: 41.545
- type: map_at_100
value: 42.644999999999996
- type: map_at_1000
value: 42.685
- type: map_at_3
value: 37.261
- type: map_at_5
value: 39.706
- type: mrr_at_1
value: 31.141000000000002
- type: mrr_at_10
value: 44.139
- type: mrr_at_100
value: 44.997
- type: mrr_at_1000
value: 45.025999999999996
- type: mrr_at_3
value: 40.503
- type: mrr_at_5
value: 42.64
- type: ndcg_at_1
value: 31.141000000000002
- type: ndcg_at_10
value: 48.995
- type: ndcg_at_100
value: 53.788000000000004
- type: ndcg_at_1000
value: 54.730000000000004
- type: ndcg_at_3
value: 40.844
- type: ndcg_at_5
value: 44.955
- type: precision_at_1
value: 31.141000000000002
- type: precision_at_10
value: 8.233
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 18.579
- type: precision_at_5
value: 13.533999999999999
- type: recall_at_1
value: 27.602
- type: recall_at_10
value: 69.216
- type: recall_at_100
value: 90.252
- type: recall_at_1000
value: 97.27
- type: recall_at_3
value: 47.987
- type: recall_at_5
value: 57.438
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.949
- type: map_at_10
value: 84.89999999999999
- type: map_at_100
value: 85.531
- type: map_at_1000
value: 85.548
- type: map_at_3
value: 82.027
- type: map_at_5
value: 83.853
- type: mrr_at_1
value: 81.69999999999999
- type: mrr_at_10
value: 87.813
- type: mrr_at_100
value: 87.917
- type: mrr_at_1000
value: 87.91799999999999
- type: mrr_at_3
value: 86.938
- type: mrr_at_5
value: 87.53999999999999
- type: ndcg_at_1
value: 81.75
- type: ndcg_at_10
value: 88.55499999999999
- type: ndcg_at_100
value: 89.765
- type: ndcg_at_1000
value: 89.871
- type: ndcg_at_3
value: 85.905
- type: ndcg_at_5
value: 87.41
- type: precision_at_1
value: 81.75
- type: precision_at_10
value: 13.403
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.597
- type: precision_at_5
value: 24.69
- type: recall_at_1
value: 70.949
- type: recall_at_10
value: 95.423
- type: recall_at_100
value: 99.509
- type: recall_at_1000
value: 99.982
- type: recall_at_3
value: 87.717
- type: recall_at_5
value: 92.032
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 51.76962893449579
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.32897690686379
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.478
- type: map_at_10
value: 11.994
- type: map_at_100
value: 13.977
- type: map_at_1000
value: 14.295
- type: map_at_3
value: 8.408999999999999
- type: map_at_5
value: 10.024
- type: mrr_at_1
value: 22.1
- type: mrr_at_10
value: 33.526
- type: mrr_at_100
value: 34.577000000000005
- type: mrr_at_1000
value: 34.632000000000005
- type: mrr_at_3
value: 30.217
- type: mrr_at_5
value: 31.962000000000003
- type: ndcg_at_1
value: 22.1
- type: ndcg_at_10
value: 20.191
- type: ndcg_at_100
value: 27.954
- type: ndcg_at_1000
value: 33.491
- type: ndcg_at_3
value: 18.787000000000003
- type: ndcg_at_5
value: 16.378999999999998
- type: precision_at_1
value: 22.1
- type: precision_at_10
value: 10.69
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.35200000000000004
- type: precision_at_3
value: 17.732999999999997
- type: precision_at_5
value: 14.499999999999998
- type: recall_at_1
value: 4.478
- type: recall_at_10
value: 21.657
- type: recall_at_100
value: 44.54
- type: recall_at_1000
value: 71.542
- type: recall_at_3
value: 10.778
- type: recall_at_5
value: 14.687
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.82325259156718
- type: cos_sim_spearman
value: 79.2463589100662
- type: euclidean_pearson
value: 80.48318380496771
- type: euclidean_spearman
value: 79.34451935199979
- type: manhattan_pearson
value: 80.39041824178759
- type: manhattan_spearman
value: 79.23002892700211
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.74130231431258
- type: cos_sim_spearman
value: 78.36856568042397
- type: euclidean_pearson
value: 82.48301631890303
- type: euclidean_spearman
value: 78.28376980722732
- type: manhattan_pearson
value: 82.43552075450525
- type: manhattan_spearman
value: 78.22702443947126
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.96138619461459
- type: cos_sim_spearman
value: 81.85436343502379
- type: euclidean_pearson
value: 81.82895226665367
- type: euclidean_spearman
value: 82.22707349602916
- type: manhattan_pearson
value: 81.66303369445873
- type: manhattan_spearman
value: 82.05030197179455
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.05481244198648
- type: cos_sim_spearman
value: 80.85052504637808
- type: euclidean_pearson
value: 80.86728419744497
- type: euclidean_spearman
value: 81.033786401512
- type: manhattan_pearson
value: 80.90107531061103
- type: manhattan_spearman
value: 81.11374116827795
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.615220756399
- type: cos_sim_spearman
value: 86.46858500002092
- type: euclidean_pearson
value: 86.08307800247586
- type: euclidean_spearman
value: 86.72691443870013
- type: manhattan_pearson
value: 85.96155594487269
- type: manhattan_spearman
value: 86.605909505275
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.14363913634436
- type: cos_sim_spearman
value: 84.48430226487102
- type: euclidean_pearson
value: 83.75303424801902
- type: euclidean_spearman
value: 84.56762380734538
- type: manhattan_pearson
value: 83.6135447165928
- type: manhattan_spearman
value: 84.39898212616731
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.09909252554525
- type: cos_sim_spearman
value: 85.70951402743276
- type: euclidean_pearson
value: 87.1991936239908
- type: euclidean_spearman
value: 86.07745840612071
- type: manhattan_pearson
value: 87.25039137549952
- type: manhattan_spearman
value: 85.99938746659761
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.529332093413615
- type: cos_sim_spearman
value: 65.38177340147439
- type: euclidean_pearson
value: 66.35278011412136
- type: euclidean_spearman
value: 65.47147267032997
- type: manhattan_pearson
value: 66.71804682408693
- type: manhattan_spearman
value: 65.67406521423597
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.45802942885662
- type: cos_sim_spearman
value: 84.8853341842566
- type: euclidean_pearson
value: 84.60915021096707
- type: euclidean_spearman
value: 85.11181242913666
- type: manhattan_pearson
value: 84.38600521210364
- type: manhattan_spearman
value: 84.89045417981723
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.92793380635129
- type: mrr
value: 95.85834191226348
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.74400000000001
- type: map_at_10
value: 65.455
- type: map_at_100
value: 66.106
- type: map_at_1000
value: 66.129
- type: map_at_3
value: 62.719
- type: map_at_5
value: 64.441
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 66.776
- type: mrr_at_100
value: 67.363
- type: mrr_at_1000
value: 67.384
- type: mrr_at_3
value: 64.889
- type: mrr_at_5
value: 66.122
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 69.904
- type: ndcg_at_100
value: 72.807
- type: ndcg_at_1000
value: 73.423
- type: ndcg_at_3
value: 65.405
- type: ndcg_at_5
value: 67.86999999999999
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 17
- type: recall_at_1
value: 55.74400000000001
- type: recall_at_10
value: 82.122
- type: recall_at_100
value: 95.167
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 70.14399999999999
- type: recall_at_5
value: 76.417
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86534653465347
- type: cos_sim_ap
value: 96.54142419791388
- type: cos_sim_f1
value: 93.07535641547861
- type: cos_sim_precision
value: 94.81327800829875
- type: cos_sim_recall
value: 91.4
- type: dot_accuracy
value: 99.86435643564356
- type: dot_ap
value: 96.53682260449868
- type: dot_f1
value: 92.98515104966718
- type: dot_precision
value: 95.27806925498426
- type: dot_recall
value: 90.8
- type: euclidean_accuracy
value: 99.86336633663366
- type: euclidean_ap
value: 96.5228676185697
- type: euclidean_f1
value: 92.9735234215886
- type: euclidean_precision
value: 94.70954356846472
- type: euclidean_recall
value: 91.3
- type: manhattan_accuracy
value: 99.85841584158416
- type: manhattan_ap
value: 96.50392760934032
- type: manhattan_f1
value: 92.84642321160581
- type: manhattan_precision
value: 92.8928928928929
- type: manhattan_recall
value: 92.80000000000001
- type: max_accuracy
value: 99.86534653465347
- type: max_ap
value: 96.54142419791388
- type: max_f1
value: 93.07535641547861
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 61.08285408766616
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.640675309010604
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.20333913710715
- type: mrr
value: 54.088813555725324
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.79465221925075
- type: cos_sim_spearman
value: 30.530816059163634
- type: dot_pearson
value: 31.364837244718043
- type: dot_spearman
value: 30.79726823684003
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.735
- type: map_at_100
value: 8.978
- type: map_at_1000
value: 20.851
- type: map_at_3
value: 0.613
- type: map_at_5
value: 0.964
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 92.867
- type: mrr_at_100
value: 92.867
- type: mrr_at_1000
value: 92.867
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 92.667
- type: ndcg_at_1
value: 82
- type: ndcg_at_10
value: 73.164
- type: ndcg_at_100
value: 51.878
- type: ndcg_at_1000
value: 44.864
- type: ndcg_at_3
value: 79.184
- type: ndcg_at_5
value: 76.39
- type: precision_at_1
value: 88
- type: precision_at_10
value: 76.2
- type: precision_at_100
value: 52.459999999999994
- type: precision_at_1000
value: 19.692
- type: precision_at_3
value: 82.667
- type: precision_at_5
value: 80
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 1.942
- type: recall_at_100
value: 12.342
- type: recall_at_1000
value: 41.42
- type: recall_at_3
value: 0.637
- type: recall_at_5
value: 1.034
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.567
- type: map_at_10
value: 13.116
- type: map_at_100
value: 19.39
- type: map_at_1000
value: 20.988
- type: map_at_3
value: 7.109
- type: map_at_5
value: 9.950000000000001
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 57.404999999999994
- type: mrr_at_100
value: 58.021
- type: mrr_at_1000
value: 58.021
- type: mrr_at_3
value: 54.762
- type: mrr_at_5
value: 56.19
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 30.359
- type: ndcg_at_100
value: 41.284
- type: ndcg_at_1000
value: 52.30200000000001
- type: ndcg_at_3
value: 36.744
- type: ndcg_at_5
value: 34.326
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 26.122
- type: precision_at_100
value: 8.082
- type: precision_at_1000
value: 1.559
- type: precision_at_3
value: 40.136
- type: precision_at_5
value: 35.510000000000005
- type: recall_at_1
value: 3.567
- type: recall_at_10
value: 19.045
- type: recall_at_100
value: 49.979
- type: recall_at_1000
value: 84.206
- type: recall_at_3
value: 8.52
- type: recall_at_5
value: 13.103000000000002
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 68.8394
- type: ap
value: 13.454399712443099
- type: f1
value: 53.04963076364322
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.546123372948514
- type: f1
value: 60.86952793277713
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.10042955060234
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.03308100375514
- type: cos_sim_ap
value: 71.08284605869684
- type: cos_sim_f1
value: 65.42539436255494
- type: cos_sim_precision
value: 64.14807302231237
- type: cos_sim_recall
value: 66.75461741424802
- type: dot_accuracy
value: 84.68736961316088
- type: dot_ap
value: 69.20524036530992
- type: dot_f1
value: 63.54893953365829
- type: dot_precision
value: 63.45698500394633
- type: dot_recall
value: 63.641160949868066
- type: euclidean_accuracy
value: 85.07480479227513
- type: euclidean_ap
value: 71.14592761009864
- type: euclidean_f1
value: 65.43814432989691
- type: euclidean_precision
value: 63.95465994962216
- type: euclidean_recall
value: 66.99208443271768
- type: manhattan_accuracy
value: 85.06288370984085
- type: manhattan_ap
value: 71.07289742593868
- type: manhattan_f1
value: 65.37585421412301
- type: manhattan_precision
value: 62.816147859922175
- type: manhattan_recall
value: 68.15303430079156
- type: max_accuracy
value: 85.07480479227513
- type: max_ap
value: 71.14592761009864
- type: max_f1
value: 65.43814432989691
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.79058485659952
- type: cos_sim_ap
value: 83.7183187008759
- type: cos_sim_f1
value: 75.86921142180798
- type: cos_sim_precision
value: 73.00683371298405
- type: cos_sim_recall
value: 78.96519864490298
- type: dot_accuracy
value: 87.0085768618776
- type: dot_ap
value: 81.87467488474279
- type: dot_f1
value: 74.04188363990559
- type: dot_precision
value: 72.10507114191901
- type: dot_recall
value: 76.08561749307053
- type: euclidean_accuracy
value: 87.8332751193387
- type: euclidean_ap
value: 83.83585648120315
- type: euclidean_f1
value: 76.02582177042369
- type: euclidean_precision
value: 73.36388371759989
- type: euclidean_recall
value: 78.88820449645827
- type: manhattan_accuracy
value: 87.87208444910156
- type: manhattan_ap
value: 83.8101950642973
- type: manhattan_f1
value: 75.90454195535027
- type: manhattan_precision
value: 72.44419564761039
- type: manhattan_recall
value: 79.71204188481676
- type: max_accuracy
value: 87.87208444910156
- type: max_ap
value: 83.83585648120315
- type: max_f1
value: 76.02582177042369
license: mit
language:
- en
pipeline_tag: sentence-similarity
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our Github repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [**this**](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Released base-scale and small-scale models, **the best performance among models of the same size 🤗**
- 08/02/2023: Released `bge-large-*` (short for BAAI General Embedding) models, **ranking 1st on the MTEB and C-MTEB benchmarks!**
- 08/01/2023: We released the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Description | query instruction for retrieval\* |
|:-------------------------------|:--------:| :--------:| :--------:|
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | rank **2nd** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | Chinese | This model is trained without instruction, and rank **2nd** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | a base-scale model but has similar ability with `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
\*: If you need to search for **long** relevant passages with a **short** query (s2p retrieval task), you need to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
## Usage
Here are some examples of using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences = ["样例数据-1", "样例数据-2"]
model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
embeddings_1 = model.encode(sentences)
embeddings_2 = model.encode(sentences)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p(short query to long passage) retrieval task, please use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
FlagModel uses all available GPUs when encoding; set `os.environ["CUDA_VISIBLE_DEVICES"]` to choose which GPUs to use.
#### Using Sentence-Transformers
Using this model is also easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences = ["样例数据-1", "样例数据-2"]
model = SentenceTransformer('BAAI/bge-large-zh')
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-small-en"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model_norm = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
```
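Once constructed, the wrapper follows LangChain's generic `Embeddings` interface. A minimal usage sketch (the example texts are made up):
```python
# embed a query and a couple of documents
query_vec = model_norm.embed_query("what is a panda?")
doc_vecs = model_norm.embed_documents([
    "The giant panda is a bear species endemic to China.",
    "Paris is the capital of France.",
])
# with normalize_embeddings=True the dot product is the cosine similarity
scores = [sum(q * d for q, d in zip(query_vec, doc)) for doc in doc_vecs]
print(scores)
```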
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh')
model = AutoModel.from_pretrained('BAAI/bge-large-zh')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**bge-large-en**](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | **63.98** | **53.9** | **46.98** | 85.8 | **59.48** | 81.56 | 32.06 | **76.21** |
| [**bge-base-en**](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [**bge-small-en**](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 384 | 512 | 56.53 | 42.69 | 41.81 | 82.41 | 58.44 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 384 | 512 | 56.26 | 41.95 | 42.35 | 82.37 | 58.04 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 768 | 512 | 56.00 | 41.88 | 41.1 | 82.54 | 53.14 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 768 | 512 | 55.27 | 33.63 | 40.21 | 85.18 | 53.09 | 81.14 | 31.39 | 69.81 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**bge-large-zh**](https://huggingface.co/BAAI/bge-large-zh) | 1024 | **64.20** | **71.53** | **53.23** | **78.94** | 72.26 | **65.11** | 48.39 |
| [**bge-large-zh-noinstruct**](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 50.98 | 76.77 | **72.49** | 64.91 | **50.01** |
| [**BAAI/bge-base-zh**](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 52.05 | 77.5 | 70.98 | 64.91 | 47.63 |
| [**BAAI/bge-small-zh**](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 46.87 | 70.35 | 67.78 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 |56.91 | 48.15 | 63.99 | 70.28 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 |54.75 | 48.64 | 64.3 | 71.22 | 59.66 | 48.88 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 40.61 | 69.56 | 67.38 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 39.41 | 66.62 | 65.29 | 49.25 | 44.39 |
| [text2vec](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 41.71 | 67.41 | 65.18 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 41.98 | 70.86 | 63.42 | 49.16 | 30.02 |
## Train
This section introduces how we trained the general embedding model.
The training scripts are in [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md),
and we provide some examples for [pre-training](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/pretrain/README.md) and [fine-tuning](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/README.md).
**1. RetroMAE Pre-train**
We pre-train the model following the [RetroMAE](https://github.com/staoxiao/RetroMAE) method,
which shows promising improvement on retrieval tasks ([paper](https://aclanthology.org/2022.emnlp-main.35.pdf)).
The pre-training was conducted on 24 A100 (40G) GPUs with a batch size of 720.
In RetroMAE, the mask ratios of the encoder and decoder are 0.3 and 0.5, respectively.
We used the AdamW optimizer with a learning rate of 2e-5.
**Pre-training data**:
- English:
- [Pile](https://pile.eleuther.ai/)
- [wikipedia](https://huggingface.co/datasets/wikipedia)
- [msmarco](https://huggingface.co/datasets/Tevatron/msmarco-passage-corpus)
- Chinese:
- [wudao](https://github.com/BAAI-WuDao/Data)
**2. Finetune**
We fine-tune the model using a contrastive objective.
The format of the input data is a triple `(query, positive, negative)`.
Besides the negative in the triple, we also adopt an in-batch negatives strategy.
We employ cross-device negative sharing to share negatives among different GPUs,
which can dramatically **increase the number of negatives**.
We trained our model on 48 A100 (40G) GPUs with a large batch size of 32,768 (so there are **65,535** negatives for each query in a batch).
We used the AdamW optimizer with a learning rate of 1e-5.
The temperature for the contrastive loss is 0.01.
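For illustration only, here is a minimal sketch of the in-batch contrastive (InfoNCE-style) loss described above; it is a simplified stand-in for the actual training code in the repository and omits the hard negatives and cross-device gathering:
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb, p_emb, temperature=0.01):
    # q_emb, p_emb: (batch, dim) L2-normalized query / positive-passage embeddings
    scores = q_emb @ p_emb.T / temperature            # (batch, batch) similarity matrix
    labels = torch.arange(q_emb.size(0), device=q_emb.device)
    # each query's positive sits on the diagonal; every other passage in the batch is a negative
    return F.cross_entropy(scores, labels)
```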
Besides, during training we add an instruction to the query for the s2p (short query to long passage) retrieval task (nothing is added to passages).
For English, the instruction is `Represent this sentence for searching relevant passages: `;
for Chinese, the instruction is `为这个句子生成表示以用于检索相关文章:`.
In evaluation, the instruction should be added to queries for retrieval tasks, but not for other tasks.
Note that the instruction is not needed for passages.
The finetune script is available in this repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
You can easily finetune your model with it.
**Training data**:
- For English, we collected 230M text pairs from [wikipedia](https://huggingface.co/datasets/wikipedia), [cc-net](https://github.com/facebookresearch/cc_net), and so on.
- For Chinese, we collected 120M text pairs from [wudao](https://github.com/BAAI-WuDao/Data), [simclue](https://github.com/CLUEbenchmark/SimCLUE), and so on.
**The data collection is to be released in the future.**
We will continually update the embedding models and training codes,
hoping to promote the development of the embedding model community.
## License
FlagEmbedding is licensed under [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
thinkermode/jennaortega-sdxl-db
|
thinkermode
| 2023-08-20T11:38:46Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-20T11:38:43Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: jennaortega
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
Agneev/distilhubert-finetuned-gtzan
|
Agneev
| 2023-08-20T11:03:55Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-17T14:34:53Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5746
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
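A minimal inference sketch with the audio-classification pipeline (the audio file path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="Agneev/distilhubert-finetuned-gtzan")
print(classifier("example_clip.wav"))  # hypothetical path to a local audio file
```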
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0253 | 0.99 | 28 | 1.8206 | 0.38 |
| 1.3127 | 1.98 | 56 | 1.1930 | 0.64 |
| 0.9726 | 2.97 | 84 | 0.9269 | 0.69 |
| 1.2272 | 4.0 | 113 | 1.1682 | 0.66 |
| 0.6441 | 4.99 | 141 | 0.9781 | 0.71 |
| 0.5447 | 5.98 | 169 | 0.8603 | 0.74 |
| 0.3067 | 6.97 | 197 | 0.6313 | 0.86 |
| 0.1481 | 8.0 | 226 | 0.5746 | 0.89 |
| 0.0599 | 8.99 | 254 | 0.7602 | 0.84 |
| 0.0306 | 9.91 | 280 | 0.8119 | 0.81 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Muhammadreza/mann-e-dark-fantasy
|
Muhammadreza
| 2023-08-20T11:02:38Z | 8 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-20T10:50:00Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mann-e_dark_fantasy Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
beaugogh/Llama2-7b-openorca-mc-v1
|
beaugogh
| 2023-08-20T10:56:58Z | 1,500 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-20T10:51:52Z |
---
license: apache-2.0
---
Llama2-7b finetuned on a 10k subset of OpenOrca focusing on multiple choice questions.
|
abdelhamidmalki/a2c-PandaReachDense-v3
|
abdelhamidmalki
| 2023-08-20T10:42:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T10:37:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
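A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's file list for the actual name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# hypothetical filename; look up the actual .zip in the repo files
checkpoint = load_from_hub(
    repo_id="abdelhamidmalki/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```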
|
BanUrsus/Reinforce-CartPole-v1
|
BanUrsus
| 2023-08-20T10:34:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T10:33:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 482.50 +/- 52.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hub-bla/ppo-Huggy
|
hub-bla
| 2023-08-20T10:32:03Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-20T10:16:39Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hub-bla/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
rafay/q-Taxi-v3
|
rafay
| 2023-08-20T10:30:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T10:30:32Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="rafay/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
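Once loaded, a short greedy rollout sketch; it assumes the pickled dict exposes a `qtable` key (as in the Deep RL Course template) and the Gymnasium step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))   # greedy action from the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```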
|
bhavyagiri/roberta-base-finetuned-imdb-spoilers
|
bhavyagiri
| 2023-08-20T10:20:16Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"imdb",
"spoilers",
"en",
"dataset:bhavyagiri/imdb-spoiler",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-18T09:01:48Z |
---
license: mit
datasets:
- bhavyagiri/imdb-spoiler
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
tags:
- text-classification
- pytorch
- roberta
- imdb
- spoilers
widget:
- text: Jack Ryan is so amazing
---
This model was trained from [roberta-base](https://huggingface.co/roberta-base) on the [imdb-spoiler](https://huggingface.co/datasets/bhavyagiri/imdb-spoiler) dataset for classification.
[imdb-spoiler](https://huggingface.co/datasets/bhavyagiri/imdb-spoiler) is a subset of a [larger dataset](https://www.kaggle.com/datasets/rmisra/imdb-spoiler-dataset) for classifying whether a movie review is a spoiler or not.
The model was trained using `AutoModelForSequenceClassification.from_pretrained` for 3 epochs with a learning rate of 2e-5 and weight decay of 0.01.
Evaluation using the dataset validation split gives:
- F1 0.773021
- Accuracy 0.783275
Labels:
- 0 - Not Spoiler
- 1 - Spoiler
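A minimal inference sketch with the transformers pipeline (output label names depend on the model config, so map them to the ids above):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="bhavyagiri/roberta-base-finetuned-imdb-spoilers")
print(clf("Jack Ryan is so amazing"))  # e.g. [{'label': '...', 'score': 0.97}]
```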
|
Verdiola/Tosho
|
Verdiola
| 2023-08-20T10:13:19Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-19T17:24:33Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
datasets:
- eli5
model-index:
- name: Tosho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tosho
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8718 | 1.0 | 1135 | 3.7708 |
| 3.7688 | 2.0 | 2270 | 3.7576 |
| 3.7376 | 3.0 | 3405 | 3.7548 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Adrianosoprano/LunarLander-v2
|
Adrianosoprano
| 2023-08-20T10:11:36Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T09:46:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.72 +/- 19.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
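A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's file list for the actual name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# hypothetical filename; look up the actual .zip in the repo files
checkpoint = load_from_hub(
    repo_id="Adrianosoprano/LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```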
|
anubhav100rao/flipkart-grid-asi
|
anubhav100rao
| 2023-08-20T10:04:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-20T09:50:36Z |
# StyleForge
This project is built to revolutionize the fashion and e-commerce industries, allowing users to choose and generate the images they need by passing in their requirements as text prompts.
You can chat over the images as often as you like, allowing flexible modification.
## Authors
- [@anubhav](https://www.github.com/anubhav100rao)
- [@isha](https://www.github.com/isharawat)
- [@sachin](https://www.github.com/sachin-raghuwanshi)
## Badges
Badges addressing the dependencies and licenses for the project are shown below.
Badges added from [shields.io](https://shields.io/):
[](https://choosealicense.com/licenses/mit/)
[](https://opensource.org/licenses/)
[](http://www.gnu.org/licenses/agpl-3.0)
## Contributing
Contributions are always welcome!
See `contributing.md` for ways to get started.
Please adhere to this project's `code of conduct`.
## Deployment
This project is deployed on Hugging Face Spaces using the free tier:
```
https://huggingface.co/spaces/Sambhavnoobcoder/StyleForge
```
The hardware used to run this project is:
```
- CPU Basic
- 2 vCPU
- 16 GB RAM
```
## Screenshots
Some prompt samples and their respective outputs:
<img width="1246" alt="Screenshot 2023-08-19 at 1 39 28 PM" src="https://github.com/sambhavnoobcoder/StyleForge/assets/94298612/b5b4befa-3d12-47cd-812d-ef70f26189a8">
<img width="455" alt="Screenshot 2023-08-19 at 6 11 25 PM" src="https://github.com/sambhavnoobcoder/StyleForge/assets/94298612/713fb884-f2d8-4482-9798-f70290e23220">
<img width="1316" alt="Screenshot 2023-08-19 at 2 30 12 PM" src="https://github.com/sambhavnoobcoder/StyleForge/assets/94298612/c9eee045-417b-4a79-80c4-b450f2a7898c">
<img width="900" alt="Screenshot 2023-08-19 at 5 52 06 PM" src="https://github.com/sambhavnoobcoder/StyleForge/assets/94298612/6c128826-b776-45d7-a59d-007450cc98da">
## Note:
- This is not a deterministic model, so it can generate different outputs for the same prompt. Try this to generate more items matching the same description.
- The model is highly resource-intensive and needs a GPU for good inference. Since the hosted inference actually runs on CPU, prompts can take longer than expected to run; kindly wait patiently for the generation to finish.
|
jmig-costa/attributes_concept_t5_xl
|
jmig-costa
| 2023-08-20T09:45:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T09:45:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Maxph2211/q-Taxi-v3
|
Maxph2211
| 2023-08-20T09:41:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T09:41:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Maxph2211/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
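Note that `load_from_hub` above is the helper defined in the Deep RL course notebook rather than a library import. A self-contained stand-in, plus a greedy rollout, could look roughly like this (a sketch: the `"qtable"` key and the use of Gymnasium are assumptions about the pickle contents and the environment setup):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled model dict from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Maxph2211/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```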
|
dkimds/ppo-Pyramids-Training
|
dkimds
| 2023-08-20T09:39:00Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-20T09:38:58Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dkimds/ppo-Pyramids-Training
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pasto2003/ppo-SnowballTarget
|
pasto2003
| 2023-08-20T09:09:41Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-20T09:09:37Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pasto2003/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lycnight/roberta-large-peft-p-tuning
|
lycnight
| 2023-08-20T08:56:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T08:56:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
nagupv/Stable13B_contextLLMExam_18kv2_f1
|
nagupv
| 2023-08-20T08:53:12Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T08:52:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
polejowska/deta-cd45rb-4ah-4l
|
polejowska
| 2023-08-20T08:51:48Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deta",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb_nan_xywh",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-19T07:03:39Z |
---
tags:
- generated_from_trainer
datasets:
- cd45rb_nan_xywh
model-index:
- name: deta-cd45rb-4ah-4l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deta-cd45rb-4ah-4l
This model is a fine-tuned version of [jozhang97/deta-swin-large](https://huggingface.co/jozhang97/deta-swin-large) on the cd45rb_nan_xywh dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.8838 | 1.0 | 4606 | 5.5818 |
| 4.3326 | 2.0 | 9212 | 5.5327 |
| 4.3118 | 3.0 | 13818 | 5.3539 |
| 4.1978 | 4.0 | 18424 | 5.1552 |
| 4.0235 | 5.0 | 23030 | 4.9679 |
| 3.9038 | 6.0 | 27636 | 4.7824 |
| 3.8628 | 7.0 | 32242 | 4.5919 |
| 3.7967 | 8.0 | 36848 | 4.4415 |
| 3.7638 | 9.0 | 41454 | 4.4006 |
| 3.7538 | 10.0 | 46060 | 4.3723 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DaniyalMufti/Reinforce-Cartpole-V1
|
DaniyalMufti
| 2023-08-20T08:49:30Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T08:49:21Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-V1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Cchychen/marian-finetuned-kde4-en-to-fr
|
Cchychen
| 2023-08-20T08:40:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-19T12:20:40Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88529894542656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
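As an illustration, the model can be called through the standard `transformers` translation pipeline (a minimal sketch; the input sentence is just an example UI string):

```python
from transformers import pipeline

translator = pipeline("translation", model="Cchychen/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```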
|
ruisp/hubert-base-ls960-finetuned-gtzan
|
ruisp
| 2023-08-20T08:08:06Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-20T05:27:13Z |
---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-finetuned-gtzan
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7810
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9364 | 1.0 | 450 | 1.2781 | 0.61 |
| 1.0205 | 2.0 | 900 | 1.2654 | 0.63 |
| 0.7681 | 3.0 | 1350 | 1.6762 | 0.62 |
| 0.6968 | 4.0 | 1800 | 0.9113 | 0.78 |
| 0.0467 | 5.0 | 2250 | 1.0105 | 0.82 |
| 0.1238 | 6.0 | 2700 | 0.7810 | 0.87 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
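As an illustrative usage sketch (standard `transformers` audio-classification pipeline; the audio path is a placeholder for any music clip):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="ruisp/hubert-base-ls960-finetuned-gtzan")
predictions = classifier("path/to/track.wav")  # placeholder path to a music clip
print(predictions[0])  # top predicted GTZAN genre with its score
```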
|
KrisPi/Wizard-Coder-0.66-Redmond-Hermes-0.33-ct2fast
|
KrisPi
| 2023-08-20T08:07:25Z | 4 | 3 |
transformers
|
[
"transformers",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-08-17T23:10:27Z |
---
license: openrail
---
**This model is a merge between 66% of Wizard Coder and 33% of Redmond Hermes Coder (which is a Wizard Coder fine-tune):**
https://huggingface.co/NousResearch/Redmond-Hermes-Coder
https://huggingface.co/WizardLM/WizardCoder-15B-V1.0
The merge was done with the most basic value average of the model weights.
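For illustration, a per-tensor value average of two checkpoints can be written as follows (a sketch only: it assumes single-file `pytorch_model.bin` checkpoints for brevity, whereas the real 15B models are sharded, and it uses a 0.66/0.34 split to approximate the 66%/33% ratio above):

```python
# Minimal weighted-average merge of two checkpoints (illustrative, not the actual merge script).
import torch

alpha = 0.66  # share of Wizard Coder in the merge
wizard = torch.load("WizardCoder-15B-V1.0/pytorch_model.bin", map_location="cpu")
hermes = torch.load("Redmond-Hermes-Coder/pytorch_model.bin", map_location="cpu")

merged = {}
for name, tensor in wizard.items():
    if name in hermes and hermes[name].shape == tensor.shape:
        merged[name] = alpha * tensor.float() + (1.0 - alpha) * hermes[name].float()
    else:
        merged[name] = tensor  # keep the Wizard Coder weight where the models differ

torch.save(merged, "wizard-0.66-hermes-0.33/pytorch_model.bin")
```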
Using CTranslate2 for quantization and inference achieves as much as 37 tokens/s on an RTX 3090 GPU.
Inference is done through text-generation-webui:
Add the code from this pull request and re-run the install against requirements.txt: https://github.com/oobabooga/text-generation-webui/pull/2828
One extra change is needed in the code: `reply = apply_extensions('output', reply)` becomes `reply = apply_extensions('output', reply, state)`.
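Outside of text-generation-webui, a bare-bones CTranslate2 generation call looks roughly like this (a sketch: the converted model directory name and the example instruction are placeholders; parameter names follow the `ctranslate2` Python API):

```python
import ctranslate2
from transformers import AutoTokenizer

# Placeholder path to the int8_float16 CTranslate2 conversion of the merged model.
generator = ctranslate2.Generator("wizard-hermes-ct2-int8_float16", device="cuda")
tokenizer = AutoTokenizer.from_pretrained("WizardLM/WizardCoder-15B-V1.0")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:"
)
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch(
    [tokens],
    max_length=512,
    sampling_temperature=1.31,  # custom preset from this card
    sampling_topk=72,
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```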
The idea was to recover some of the coding ability lost in the fine-tune while retaining at least basic capabilities to summarize text and work with context. This experiment also focused on using CT2 for its speed.
**I believe the presented approach is the best available compromise between speed, coding accuracy, and a little general LLM use.**
**Please note that CT2 8bit quant seems to have better HumanEval scores than load-in-8bit**
The community now mostly focuses on making non-coding models code, as making coding models more general seems near impossible.
However, my daily use is focused on DevOps questions, summarizing content, and script development. Further development will be around intent analysis for integration with TODO lists and calendars, extracting actions and notes from my voice transcriptions. This model doesn't work well enough on those tasks, so next time I will attempt actual fine-tunes of Wizard Coder or just run two models at the same time. I hope to fit under 24 GB VRAM, which would also mean evaluating 4-bit quantization.
My initial testing checked whether the model finds:
Overflow: `"what is mistake in following C++ code: int a = 1e9+7; int b = 1e9+9; int c = a*b; cout << c;"`
Out of bounds: `"what is bug in the following C++ code: int a = 100; vector <int> b(a); b[a] = 20; cout << b[a] << '\n';"`
and whether it proposes using "docker update" for `"how to stop docker container so it doesnt start every reboot"`
I ran those prompts in a loop with different presets and ended up picking this one:
`['temperature'] = 1.31`
`['top_p'] = 0.29`
`['top_k'] = 72`
`['repetition_penalty'] = 1.09`
Testing with the above prompts showed that Hermes Coder CT2 was not able to answer correctly most of the time, while Wizard Coder and this merge did. The merged model retains the ability to use "### Input:" in the prompt and is more sensitive to non-coding instructions (Wizard Coder almost completely disregards them).
At the bottom you can see EvalPlus benchmarks of the three models mentioned; they all performed similarly with the default preset. I'm not sure whether I'm running the benchmark incorrectly or whether these quants don't work properly with the default preset, but the custom preset considerably improved the results.
**I would greatly appreciate it if anyone could confirm how good this model is with the proposed preset, as the result really positively surprised me (it seems better than any other Wizard Coder 8-bit quant).**
**CT2 int8_float16 merge, custom preset:**
`Base`
`{'pass@1': 0.47560975609756095}`
`Base + Extra`
`{'pass@1': 0.45121951219512196}`
**For summarization I propose following prompt:**
`Below is an instruction that describes a task. Write a response that appropriately completes the request.`
`### Instruction:`
`Please provide a concise, summary for each topic presented in the input below. Ensure clarity, coherence, and avoid redundant information.`
`### Input:`
`[CONTENT TO SUMMARIZE]`
`### Response:The summary for each topic presented in the input is as follows:`
**Optionally iterate over the output with following prompt:**
`Below is an instruction that describes a task. Write a response that appropriately completes the request.`
`### Instruction:`
`Rewrite summary from Input. Fix typos, add missing spaces. Ensure clarity, coherence, and remove redundant information.`
`### Input:`
`[OUTPUT FROM PREVIOUS PROMPT]`
`### Response:`
**HumanEval** run using: https://github.com/my-other-github-account/llm-humaneval-benchmarks/
and
`sudo docker run -v $(pwd):/app ganler/evalplus:latest --dataset humaneval --samples results/{model_name}.jsonl`
**Custom preset:**
`['temperature'] = 1.31`
`['top_p'] = 0.29`
`['top_k'] = 72`
`['repetition_penalty'] = 1.09`
**CT2 int8_float16 merge, custom preset:**
`Base`
`{'pass@1': 0.47560975609756095}`
`Base + Extra`
`{'pass@1': 0.45121951219512196}`
**one of the worse reruns:**
{'pass@1': 0.4573170731707317}
Base + Extra
{'pass@1': 0.4146341463414634}
**CT2 int8_float16 Wizard Coder:**
`Base`
`{'pass@1': 0.43902439024390244}`
`Base + Extra`
`{'pass@1': 0.3597560975609756}`
**Retry:**
`Base`
`{'pass@1': 0.42073170731707316}`
`Base + Extra`
`{'pass@1': 0.3475609756097561}`
**Full-weight Wizard Coder loaded with --load-in-8bit, custom preset:**
`Base`
`{'pass@1': 0.3475609756097561}`
`Base + Extra`
`{'pass@1': 0.3170731707317073}`
---
**Default llm-humaneval-benchmarks preset:**
`['temperature'] = 1`
`['top_p'] = 1`
`['top_k'] = 0`
`['repetition_penalty'] = 1`
**CT2 int8_float16 - this model:**
`Base`
`{'pass@1': 0.4634146341463415}`
`Base + Extra`
`{'pass@1': 0.4024390243902439}`
**CT2 int8_float16 Redmond Hermes Coder:**
`Base`
`{'pass@1': 0.4695121951219512}`
`Base + Extra`
`{'pass@1': 0.4146341463414634}`
**CT2 int8_float16 Wizard Coder:**
`Base`
`{'pass@1': 0.4695121951219512}`
`Base + Extra`
`{'pass@1': 0.3902439024390244}`
**Full-weight Wizard Coder loaded with --load-in-8bit, default preset:**
`Base`
`{'pass@1': 0.43902439024390244}`
`Base + Extra`
`{'pass@1': 0.3719512195121951}`
**Full-weight merged model loaded with --load-in-8bit, default preset:**
Base
{'pass@1': 0.43902439024390244}
Base + Extra
{'pass@1': 0.3902439024390244}
**Full-weight Hermes Coder model loaded with --load-in-8bit, default preset:**
Base
{'pass@1': 0.4451219512195122}
Base + Extra
{'pass@1': 0.4146341463414634}
--------------
|
Yeatee/ppo-LunarLander-v2
|
Yeatee
| 2023-08-20T07:59:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-19T07:20:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.47 +/- 17.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
IAMNawaf/SA-Hisroty-9
|
IAMNawaf
| 2023-08-20T07:22:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"ar",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-18T23:38:22Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Naseej-SA-History-QA
results: []
language:
- ar
library_name: transformers
pipeline_tag: question-answering
widget:
- text: "من تولى الإمارة بعـد وفـاة سـعود بـن محمـد بـن مقـرن"
context: "بعـد وفـاة سـعود بـن محمـد بـن مقـرن تولـى الإمـارة زيـد بـن مرخـان بـن وطبـان، وكان الأكبـر سـناً مـن آل سـعود، ولكـن حكمـه لـم يمتـد طويـ ًا لكبـر سـنه، ممـا دعـا مقـرن بـن محمـد بـن مقـرن إلـى انتـزاع الإمـارة منـه، لكـن الأمـور لـم تسـتمر طويـ ًا لمقـرن، وذلـك عندمـا حـاول الغـدر بزيـد بـن مرخـان الـذي كان يحكـم قبلـه، ممـا دعـا محمـد بـن سـعود ومقـرن بـن عبداللـه إلـى قتلـه، وكان ذلـك سـنة 1139 هــ 1727/ م.\n\nبعـد ذلـك عـاد إلـى الإمـارة زيـد بـن مرخـان، إلا أنـه عندمـا هجـم علـى إمـارة العيينـة سـعت - بعـد ذلـك - إلـى التحايـل عليـه وطلبـت التفـاوض معـه، وعندمـا ذهـب تم قتلـه، وبعـد قتـل زيـد بـن مرخـان تولـى محمـد بـن سـعود بـن مقـرن الإمـارة فـي الدرعيـة سـنة 1139 هــ 1727/ م ، وظـل حكمـه حتـى سـنة 1179 هـ 1765/ م."
example_title: "تاريخ المملكة العربية السعودية"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Naseej-SA-History-QA
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0791
## Model description
The Naseej-SA-History-QA model is a fine-tuned version of the aubmindlab/bert-base-arabertv02 pre-trained BERT model.
It has been tailored and optimized for question answering tasks related to the history of Saudi Arabia.
The model is designed to comprehend historical context and provide accurate answers to questions in Arabic language.
## Intended uses & limitations
The Naseej-SA-History-QA model is intended to be used for answering historical questions specifically related to the history of Saudi Arabia. It can be employed in educational and research settings to assist students, scholars, and researchers in obtaining information about Saudi Arabian history. The model can also be utilized in various NLP applications where historical context is a key factor, such as building educational platforms, historical archives, and language translation tools.
The model's performance is contingent upon the quality and accuracy of the training and evaluation data it has been fine-tuned on. It may struggle with questions that deviate significantly from the training data distribution.
The model's understanding of historical events and context is based on the data it has been trained on. It may not perform well on questions involving more recent or less documented historical events.
The model may not fully comprehend nuanced or highly specific historical inquiries that require deep contextual understanding beyond the scope of its training data.
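For example, the model can be queried with the standard `transformers` question-answering pipeline (a minimal sketch reusing the widget example from this card; the context is truncated here for brevity):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="IAMNawaf/SA-Hisroty-9")
result = qa(
    question="من تولى الإمارة بعـد وفـاة سـعود بـن محمـد بـن مقـرن",
    # Context truncated; see the widget example above for the full passage.
    context="بعـد وفـاة سـعود بـن محمـد بـن مقـرن تولـى الإمـارة زيـد بـن مرخـان بـن وطبـان، وكان الأكبـر سـناً مـن آل سـعود...",
)
print(result["answer"], result["score"])
```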
## Training and evaluation data
The Naseej-SA-History-QA model was trained using a custom dataset comprising historical questions and corresponding context passages related to the history of Saudi Arabia. The dataset covers various historical periods and events, providing the model with a wide range of historical context to learn from.
The evaluation set used during training was designed to assess the model's performance on question answering tasks. The evaluation set includes a variety of questions and context passages that challenge the model's ability to accurately answer questions about Saudi Arabian history.
## Training procedure
The Naseej-SA-History-QA model was fine-tuned using the aubmindlab/bert-base-arabertv02 pre-trained BERT model. The training process involved several key steps:
Dataset Preparation: A custom dataset was curated for training the model. The dataset consisted of pairs of historical questions and corresponding context passages, both in Arabic language. The context passages provided the necessary historical context for answering the questions.
Tokenization: The dataset was tokenized using the Tokenizers library, which converts text into a format that the model can understand. Tokenization converts words and subwords into numerical tokens that the model can process.
Model Fine-Tuning: The tokenized dataset was used to fine-tune the aubmindlab/bert-base-arabertv02 base model using the Transformers library. During fine-tuning, the model was adjusted to perform well on the specific task of question answering related to Saudi Arabian history.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 11 | 4.4161 |
| No log | 2.0 | 22 | 4.1722 |
| No log | 3.0 | 33 | 3.7147 |
| No log | 4.0 | 44 | 3.4012 |
| No log | 5.0 | 55 | 3.2906 |
| No log | 6.0 | 66 | 3.2351 |
| No log | 7.0 | 77 | 3.0865 |
| No log | 8.0 | 88 | 3.1011 |
| No log | 9.0 | 99 | 3.0791 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Yeatee/Taxi-v3
|
Yeatee
| 2023-08-20T07:07:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T07:07:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Yeatee/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
namphan1999/results3
|
namphan1999
| 2023-08-20T07:04:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-20T05:32:46Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1390
- Rouge1: 0.4949
- Rouge2: 0.4142
- Rougel: 0.4567
- Rougelsum: 0.4681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 221 | 0.1354 | 0.4748 | 0.3694 | 0.4220 | 0.4369 |
| No log | 2.0 | 442 | 0.1342 | 0.4809 | 0.3809 | 0.4309 | 0.4458 |
| 0.1539 | 3.0 | 663 | 0.1327 | 0.4837 | 0.3900 | 0.4410 | 0.4530 |
| 0.1539 | 4.0 | 884 | 0.1347 | 0.4813 | 0.3876 | 0.4374 | 0.4502 |
| 0.1013 | 5.0 | 1105 | 0.1344 | 0.4897 | 0.4001 | 0.4466 | 0.4579 |
| 0.1013 | 6.0 | 1326 | 0.1376 | 0.4901 | 0.4054 | 0.4520 | 0.4632 |
| 0.0691 | 7.0 | 1547 | 0.1355 | 0.4914 | 0.4068 | 0.4497 | 0.4622 |
| 0.0691 | 8.0 | 1768 | 0.1383 | 0.4959 | 0.4153 | 0.4562 | 0.4679 |
| 0.0691 | 9.0 | 1989 | 0.1389 | 0.4952 | 0.4147 | 0.4580 | 0.4690 |
| 0.0533 | 10.0 | 2210 | 0.1390 | 0.4949 | 0.4142 | 0.4567 | 0.4681 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sarwarbeing/Reinforce-cartpole
|
sarwarbeing
| 2023-08-20T06:52:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T06:52:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ai4bharat/indicwav2vec-odia
|
ai4bharat
| 2023-08-20T06:29:42Z | 167 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"asr",
"or",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-20T06:10:25Z |
---
language: or
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- wav2vec2
- asr
license: apache-2.0
---
# IndicWav2Vec-Odia
This is a [Wav2Vec2](https://arxiv.org/abs/2006.11477) style ASR model trained in [fairseq](https://github.com/facebookresearch/fairseq) and ported to Hugging Face.
More details on datasets, training-setup and conversion to HuggingFace format can be found in the [IndicWav2Vec](https://github.com/AI4Bharat/IndicWav2Vec) repo.
## Script to Run Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
DEVICE_ID = "cuda" if torch.cuda.is_available() else "cpu"
MODEL_ID = "ai4bharat/indicwav2vec-odia"
sample = next(iter(load_dataset("common_voice", "or", split="test", streaming=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48000, 16000).numpy()
model = AutoModelForCTC.from_pretrained(MODEL_ID).to(DEVICE_ID)
processor = AutoProcessor.from_pretrained(MODEL_ID)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values.to(DEVICE_ID)).logits.cpu()
prediction_ids = torch.argmax(logits, dim=-1)
output_str = processor.batch_decode(prediction_ids)[0]
print(f"Greedy Decoding: {output_str}")
```
# **About AI4Bharat**
- Website: https://ai4bharat.org/
- Code: https://github.com/AI4Bharat
- HuggingFace: https://huggingface.co/ai4bharat
|
tum-nlp/IDMGSP-RoBERTa-TRAIN_GPT3-CONCLUSION
|
tum-nlp
| 2023-08-20T06:23:55Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:tum-nlp/IDMGSP",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T13:35:21Z |
---
datasets:
- tum-nlp/IDMGSP
license: openrail++
---
|
Cchychen/distilbert-base-uncased-finetuned-imdb
|
Cchychen
| 2023-08-20T06:18:24Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-19T12:57:25Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7026 | 1.0 | 157 | 2.4957 |
| 2.581 | 2.0 | 314 | 2.4286 |
| 2.5363 | 3.0 | 471 | 2.4515 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
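A minimal fill-mask sketch for trying the model (standard `transformers` usage; the input sentence is a placeholder):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Cchychen/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This is a great [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```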
|
timm/ghostnetv2_130.in1k
|
timm
| 2023-08-20T06:13:25Z | 285 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2211.12905",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-08-20T06:13:14Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for ghostnetv2_130.in1k
A GhostNetV2 image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 9.0
- GMACs: 0.3
- Activations (M): 5.9
- Image size: 224 x 224
- **Papers:**
- GhostNetV2: Enhance Cheap Operation with Long-Range Attention: https://arxiv.org/abs/2211.12905
- **Original:** https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('ghostnetv2_130.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ghostnetv2_130.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 20, 112, 112])
# torch.Size([1, 32, 56, 56])
# torch.Size([1, 52, 28, 28])
# torch.Size([1, 104, 14, 14])
# torch.Size([1, 208, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ghostnetv2_130.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1248, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{tang2022ghostnetv2,
title={GhostNetv2: enhance cheap operation with long-range attention},
author={Tang, Yehui and Han, Kai and Guo, Jianyuan and Xu, Chang and Xu, Chao and Wang, Yunhe},
journal={Advances in Neural Information Processing Systems},
volume={35},
pages={9969--9982},
year={2022}
}
```
|
HimashaJ96/falcon-7b-chat-oasst1
|
HimashaJ96
| 2023-08-20T05:48:50Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T05:48:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
tabbit/ppo-LunarLander-v2
|
tabbit
| 2023-08-20T05:45:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-20T05:45:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.65 +/- 19.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CyberHarem/athena_asamiya_thekingoffighters
|
CyberHarem
| 2023-08-20T05:45:19Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/athena_asamiya_thekingoffighters",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-20T05:39:27Z |
---
license: mit
datasets:
- CyberHarem/athena_asamiya_thekingoffighters
pipeline_tag: text-to-image
tags:
- art
---
# Lora of athena_asamiya_thekingoffighters
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/athena_asamiya_thekingoffighters.pt` as the embedding and `1500/athena_asamiya_thekingoffighters.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `athena_asamiya_thekingoffighters`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------------|
| 1500 |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/athena_asamiya_thekingoffighters.zip) |
| 1400 |  |  |  |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/athena_asamiya_thekingoffighters.zip) |
| 1300 |  |  |  |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/athena_asamiya_thekingoffighters.zip) |
| 1200 |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/athena_asamiya_thekingoffighters.zip) |
| 1100 |  |  |  |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/athena_asamiya_thekingoffighters.zip) |
| 1000 |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/athena_asamiya_thekingoffighters.zip) |
| 900 |  |  |  |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/athena_asamiya_thekingoffighters.zip) |
| 800 |  |  |  |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/athena_asamiya_thekingoffighters.zip) |
| 700 |  |  |  |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/athena_asamiya_thekingoffighters.zip) |
| 600 |  |  |  |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/athena_asamiya_thekingoffighters.zip) |
| 500 |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/athena_asamiya_thekingoffighters.zip) |
| 400 |  |  |  |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/athena_asamiya_thekingoffighters.zip) |
| 300 |  |  |  |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/athena_asamiya_thekingoffighters.zip) |
| 200 |  |  |  |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/athena_asamiya_thekingoffighters.zip) |
| 100 |  |  |  |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/athena_asamiya_thekingoffighters.zip) |
|
hhs8746/book_test
|
hhs8746
| 2023-08-20T05:23:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T05:22:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
YuqiChen/model
|
YuqiChen
| 2023-08-20T05:16:20Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-16T22:09:33Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: quanguomeizhan oil painting art
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - YuqiChen/model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on quanguomeizhan oil painting art using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
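A minimal generation sketch with `diffusers` (an illustration only: the example prompt, dtype and device are assumptions; the trained instance prompt is the one listed in the metadata above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("YuqiChen/model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a landscape, quanguomeizhan oil painting art").images[0]
image.save("sample.png")
```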
|
swl-models/Nordrin_little-v2.0
|
swl-models
| 2023-08-20T05:15:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-20T04:42:12Z |
---
license: creativeml-openrail-m
---
|
ericalt/a2c-PandaReachDense-v2
|
ericalt
| 2023-08-20T05:08:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-28T04:38:13Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.46 +/- 0.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
DeepBird/my_custom_ddpm_train
|
DeepBird
| 2023-08-20T05:01:57Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-model-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-20T04:46:16Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-model-class
---
# This model is used to generate butterfly images
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('DeepBird/my_custom_ddpm_train')
image = pipeline().images[0]
image
```
|
alokedeep/xlm-roberta-base-finetuned-panx-de
|
alokedeep
| 2023-08-20T04:57:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-18T12:34:49Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8638609643891634
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2656 | 1.0 | 525 | 0.1535 | 0.8263 |
| 0.1257 | 2.0 | 1050 | 0.1436 | 0.8457 |
| 0.0818 | 3.0 | 1575 | 0.1352 | 0.8639 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
swl-models/Nordrin_little-v1.0
|
swl-models
| 2023-08-20T04:55:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-20T04:42:01Z |
---
license: creativeml-openrail-m
---
|
hhs8746/ttest2
|
hhs8746
| 2023-08-20T04:51:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T04:51:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
KhaZix0827/test_trainer2
|
KhaZix0827
| 2023-08-20T04:32:38Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-20T04:01:49Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1032
- Accuracy: 0.9767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 306 | 0.1599 | 0.9558 |
| 0.1259 | 2.0 | 612 | 0.1144 | 0.9779 |
| 0.1259 | 3.0 | 918 | 0.1032 | 0.9767 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.0+cu116
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Thesnowic/Models
|
Thesnowic
| 2023-08-20T04:28:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-12T04:21:31Z |
Welcome!
This is where I upload my AI models!
|
devscion/pakhistoricalplaces
|
devscion
| 2023-08-20T04:18:08Z | 47 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-20T04:06:04Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### PakHistoricalPlaces Dreambooth model trained by devscion with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
aiplanet/effi-13b
|
aiplanet
| 2023-08-20T04:16:56Z | 1,338 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:kaist-ai/CoT-Collection",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T13:59:35Z |
---
license: apache-2.0
datasets:
- kaist-ai/CoT-Collection
metrics:
- accuracy
pipeline_tag: text-generation
---
# Model card for aiplanet/effi-13b
effi-13b is a 13B-parameter causal decoder-only model built by AI Planet, based on Llama-2-13b-chat-hf and fine-tuned on the 1.8 million conversations from the CoT Collection dataset available in Hugging Face datasets. The model is made available under the Apache 2.0 license.
## Why use effi-13B-Instruct?
- This is a ready-to-use chat/instruct model based on Llama-2-13b-chat-hf, which provides a rationale for the context provided.
- Llama-2 is the best open-source model available. This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from **Llama-2-13b-chat-hf**
You will need at least **85-100GB of memory to swiftly run inference with effi-13b**.
## Model Details
### Model Description
This model has been fine-tuned on Chain of Thought datasets, which contain context from mixed sources with corresponding rationales. The final fine-tuned Large Language Model (LLM) shows enhanced capabilities for solving novel tasks by providing reasoning.
- **Developed by:** AI Planet
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Llama-2-13b-chat-hf
### Direct Use
effi-13b has been finetuned on a Chain of Thought dataset.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
This model has been trained mostly on English data and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of effi-13b to develop guardrails and take appropriate precautions for any production use.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```
from transformers import (AutoModelForCausalLM, AutoTokenizer, pipeline)
model_card = "aiplanet/effi-13b"
#
model = AutoModelForCausalLM.from_pretrained(model_card)
tokenizer = AutoTokenizer.from_pretrained(model_card)
#
generate_text = pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True, # langchain expects the full text
task='text-generation',
# we pass model parameters here too
temperature=0.4, # 'randomness' of outputs, 0.0 is the min and 1.0 the max
max_new_tokens=512, # max number of tokens to generate in the output
repetition_penalty=1.1 # without this output begins repeating
)
#
prompt = """
Can you explain this code in detail?
def generate_stream(tokenizer, model, params, device,
context_len=2048, stream_interval=2):
prompt = params["prompt"]
l_prompt = len(prompt)
temperature = float(params.get("temperature", 1.0))
max_new_tokens = int(params.get("max_new_tokens", 256))
stop_str = params.get("stop", None)
input_ids = tokenizer(prompt).input_ids
output_ids = list(input_ids)
max_src_len = context_len - max_new_tokens - 8
input_ids = input_ids[-max_src_len:]
for i in range(max_new_tokens):
if i == 0:
out = model(
torch.as_tensor([input_ids], device=device), use_cache=True)
logits = out.logits
past_key_values = out.past_key_values
else:
attention_mask = torch.ones(
1, past_key_values[0][0].shape[-2] + 1, device=device)
out = model(input_ids=torch.as_tensor([[token]], device=device),
use_cache=True,
attention_mask=attention_mask,
past_key_values=past_key_values)
logits = out.logits
past_key_values = out.past_key_values
last_token_logits = logits[0][-1]
if device == "mps":
# Switch to CPU by avoiding some bugs in mps backend.
last_token_logits = last_token_logits.float().to("cpu")
if temperature < 1e-4:
token = int(torch.argmax(last_token_logits))
else:
probs = torch.softmax(last_token_logits / temperature, dim=-1)
token = int(torch.multinomial(probs, num_samples=1))
output_ids.append(token)
if token == tokenizer.eos_token_id:
stopped = True
else:
stopped = False
if i % stream_interval == 0 or i == max_new_tokens - 1 or stopped:
output = tokenizer.decode(output_ids, skip_special_tokens=True)
pos = output.rfind(stop_str, l_prompt)
if pos != -1:
output = output[:pos]
stopped = True
yield output
if stopped:
break
del past_key_values
"""
#
system_message = "Given your chain of thought reasoning, provide a rationale for the context in the source."
prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{prompt}. [/INST]" # replace the command here with something relevant to your task
#
result = generate_text(prompt)
print(result[0]['generated_text'].strip().split("[/INST]")[-1])
```
## Training Details
### Training Data
effi-13b has been finetuned on https://huggingface.co/datasets/kaist-ai/CoT-Collection
The data was tokenized with the **meta-llama/Llama-2-13b-chat-hf** tokenizer.
### Training Procedure
Fine-tuning approach using PEFT and QLoRA (https://huggingface.co/blog/4bit-transformers-bitsandbytes); a configuration sketch follows the hyperparameter list below.
#### Training Hyperparameters
- **Training regime:**
- lora_alpha=32,
- lora_dropout=0.05,
- r=8,
- bias="none",
- task_type="CAUSAL_LM"
#
- load_in_4bit=True,
- bnb_4bit_quant_type = "nf4",
- bnb_4bit_use_double_quant=True,
- bnb_4bit_compute_dtype=torch.bfloat16
#
- num_train_epochs = 1
- fp16 = False
- bf16 = False
- per_device_train_batch_size = 1
- per_device_eval_batch_size = 1
- gradient_accumulation_steps = 4
- gradient_checkpointing = True
- max_grad_norm = 0.3
- learning_rate = 2e-4
- weight_decay = 0.001
- optim = "paged_adamw_32bit"
- lr_scheduler_type = "constant"
- max_steps = 500
- warmup_ratio = 0.03
- group_by_length = True
- save_steps = 25
- logging_steps = 5
- max_seq_length = 2048
- packing = False
- device_map = {"": 0}
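For reference, below is a minimal sketch of how the settings above map onto `peft` and `bitsandbytes` configuration objects. The exact training script is not published here, so treat this as an illustration under those assumptions rather than the actual implementation.
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit QLoRA quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings listed above
peft_config = LoraConfig(
    lora_alpha=32,
    lora_dropout=0.05,
    r=8,
    bias="none",
    task_type="CAUSAL_LM",
)

# The quantized base model would then be loaded along the lines of:
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Llama-2-13b-chat-hf", quantization_config=bnb_config, device_map={"": 0}
# )
```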
## Evaluation
Paper coming soon.
See the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Citation
```bibtex
@article{effi-13b,
  title={{effi-13b}: an open large language model with state-of-the-art performance},
  author={aiplanet},
  year={2023}
}
```
## Model Card Contact
community@aiplanet.com
|
Chattiori/BismuthMix
|
Chattiori
| 2023-08-20T03:15:01Z | 0 | 6 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-03-24T02:15:58Z |
---
license: bigscience-openrail-m
---
# **Chattiori ElementMixes-83:BismuthMix**
BismuthMix is a checkpoint merge of ChilloutMix, DDosMix, El Zipang, Deliberate and RetMix.
V2: Changed some merge ratios, updated RetMix to V2, and added real-max-v3.4 and majicMIX realistic to the merges.
V3: Changed some merge ratios, updated majicMIX realistic, swapped ChilloutMix for ChillyMix, and added Shampoo Mix, AIbijoModel, GeminiX Mix, LEAU, CosplayMix, epiCRealism, Lyriel, fantasticmix and XXMix 9realistic to the merges.
V4: Changed every merge ratio and added many models. All models and merge ratios are listed [HERE](https://civitai.com/articles/654)
For V3 and V4, I used [my own model merger](https://github.com/Faildes/merge-models).
[**CivitAI**](https://civitai.com/models/23629/bismuthmix)
## Merge Source:
v1:
((Chilloutmix-Ni-pruned-fp32-fix (0.4) + DDosMix_v2 (0.6) Weighted Sum) (0.5) +
(El Zipang:v1.0 (0.7) + Deliberate v2 (0.3) Weighted Sum) (0.5) Weighted Sum) (0.7) +
RetMix (0.3) Weighted Sum
v2:
real-max-v3.4 + majicMIX realistic v2 0.6 Weighted Sum >> (1)
(1) + ChilloutMix-Ni-pruned-fp32-fix 0.65 Weighted Sum >> (2)
(2) + DDosMix V2 0.45 Weighted Sum >> (3a)
El Zipang + Deliberate V2 0.3 Weighted Sum >> (3b)
(3a) + (3b) 0.5 Weighted Sum >> (4)
(4) + RetMix V2 0.25 Weighted Sum >> BismuthMix V2
v3:
real-max v3.4 + GeminiX Mix v1.0 0.45 Weighted Sum >> (00a)
Shampoo Mix v3.0 + majicMIX realistic v4 0.5 Weighted Sum >> (00b)
(00a) + (00b) 0.4 Weighted Sum >> (0a)
CosplayMix v2.0 + LEAU v1.0 0.35 Weighted Sum >> (0b)
(0b) + (0a) 0.65 Weighted Sum >> (1a)
epiCRealism new Era + XXMix_9realistic v2.6 0.45 Weighted Sum >> (1b)
ChillyMix v1.0 + AIbijoModel no47p22 0.55 Weighted Sum >> (1c)
(1a) + (1c) 0.65 Weighted Sum >> (2)
(2) + (1b) 0.35 Weighted Sum >> (3a)
DDosMix V2 + fantasticmix v5.5 0.25 Weighted Sum >> (3b)
(3a) + (3b) 0.45 Weighted Sum >> (4a)
El Zipang + Deliberate V2 0.35 Weighted Sum >> (4b)
(4a) + (4b) 0.25 Weighted Sum >> (5)
(5) + RetMix V2 0.2 Weighted Sum >> BismuthMix V3
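For readers unfamiliar with the notation above, here is a minimal sketch of a single "Weighted Sum" step. It assumes the ratio is the weight given to the second model (as in common checkpoint mergers) and that the checkpoints are plain `torch` state dicts; dtype handling and safetensors support are omitted.
```python
import torch

def weighted_sum(path_a, path_b, ratio, out_path):
    """Blend two checkpoints per tensor: merged = (1 - ratio) * A + ratio * B."""
    a = torch.load(path_a, map_location="cpu")
    b = torch.load(path_b, map_location="cpu")
    # Some checkpoints nest their weights under a "state_dict" key.
    a = a.get("state_dict", a)
    b = b.get("state_dict", b)
    merged = {}
    for key, tensor_a in a.items():
        if key in b and torch.is_tensor(tensor_a):
            merged[key] = (1.0 - ratio) * tensor_a + ratio * b[key]
        else:
            merged[key] = tensor_a  # keep keys that only exist in A unchanged
    torch.save({"state_dict": merged}, out_path)

# e.g. the first V2 step above (hypothetical file names):
# weighted_sum("real-max-v3.4.ckpt", "majicmix-realistic-v2.ckpt", 0.6, "step1.ckpt")
```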
|
churumusco/whyblock
|
churumusco
| 2023-08-20T02:55:37Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-20T02:53:55Z |
---
license: openrail
---
Made by deltascheme on Discord. All credits go to him; I am just re-uploading it.
|
LarryAIDraw/Nightingale
|
LarryAIDraw
| 2023-08-20T02:24:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-19T21:00:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5651/arknights-nightingale
|
LarryAIDraw/Arknights-Specter_the_Unchained-SOFT
|
LarryAIDraw
| 2023-08-20T02:24:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-19T21:00:06Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/14458/pramanix-arknights-freefit-lora
|
ademola277/bert-base-uncased-finetuned-squad
|
ademola277
| 2023-08-20T02:14:30Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-18T03:11:17Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ademola277/bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ademola277/bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the FEVER dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train End Logits Accuracy: 1.0
- Train Start Logits Accuracy: 1.0
- Validation Loss: 0.0011
- Validation End Logits Accuracy: 0.9995
- Validation Start Logits Accuracy: 1.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 13587, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0302 | 0.9944 | 0.9963 | 0.0024 | 0.9988 | 1.0 | 0 |
| 0.0002 | 0.9999 | 1.0000 | 0.0009 | 0.9998 | 1.0 | 1 |
| 0.0000 | 1.0 | 1.0 | 0.0011 | 0.9995 | 1.0 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
thinkermode/sdxl-db-appu
|
thinkermode
| 2023-08-20T02:00:05Z | 1 | 3 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-20T02:00:02Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: appu
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
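This card does not include a usage snippet; below is a minimal inference sketch, assuming the repository holds DreamBooth LoRA weights on top of the SDXL base model (the usual output of an AutoTrain DreamBooth run) and that `appu` is the instance token to use in prompts.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumption: the trained LoRA weights live in this repository
pipe.load_lora_weights("thinkermode/sdxl-db-appu")

image = pipe(prompt="a photo of appu", num_inference_steps=30).images[0]
image.save("appu.png")
```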
|
edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1
|
edwsiew
| 2023-08-20T01:48:21Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-20T01:48:01Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
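For context, here is a minimal training sketch of that two-step procedure, using the SetFit API available at the time this model was published; the base Sentence Transformer, toy dataset and hyperparameters below are illustrative assumptions, not the settings used for this checkpoint.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset with "text" and "label" columns
train_ds = Dataset.from_dict({
    "text": ["great build quality", "the driver keeps crashing"],
    "label": [1, 0],
})

# 1. Start from a pretrained Sentence Transformer (assumed base model)
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# 2. Contrastive fine-tuning of the body, then training of the classification head
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,
    num_iterations=20,  # contrastive pairs generated per example
    batch_size=16,
)
trainer.train()
```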
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
pabloyesteb/ppo-PyramidsRND
|
pabloyesteb
| 2023-08-20T01:14:38Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-19T20:23:04Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pabloyesteb/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
akar49/Segformer-MRIseg_model
|
akar49
| 2023-08-20T01:04:55Z | 32 | 0 |
transformers
|
[
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/segformer-b0-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b0-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-08-20T01:04:37Z |
---
license: other
base_model: nvidia/segformer-b0-finetuned-ade-512-512
tags:
- generated_from_keras_callback
model-index:
- name: Segformer-MRIseg_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Segformer-MRIseg_model
This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0049
- Validation Loss: 0.0133
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2537 | 0.0685 | 0 |
| 0.0849 | 0.0639 | 1 |
| 0.0664 | 0.0532 | 2 |
| 0.0580 | 0.0503 | 3 |
| 0.0536 | 0.0497 | 4 |
| 0.0476 | 0.0396 | 5 |
| 0.0437 | 0.0477 | 6 |
| 0.0359 | 0.0397 | 7 |
| 0.0312 | 0.0289 | 8 |
| 0.0256 | 0.0322 | 9 |
| 0.0241 | 0.0279 | 10 |
| 0.0220 | 0.0229 | 11 |
| 0.0180 | 0.0226 | 12 |
| 0.0160 | 0.0192 | 13 |
| 0.0165 | 0.0227 | 14 |
| 0.0151 | 0.0194 | 15 |
| 0.0146 | 0.0184 | 16 |
| 0.0132 | 0.0177 | 17 |
| 0.0121 | 0.0211 | 18 |
| 0.0111 | 0.0197 | 19 |
| 0.0107 | 0.0175 | 20 |
| 0.0116 | 0.0131 | 21 |
| 0.0115 | 0.0181 | 22 |
| 0.0094 | 0.0153 | 23 |
| 0.0099 | 0.0140 | 24 |
| 0.0098 | 0.0151 | 25 |
| 0.0084 | 0.0126 | 26 |
| 0.0080 | 0.0140 | 27 |
| 0.0071 | 0.0128 | 28 |
| 0.0067 | 0.0169 | 29 |
| 0.0061 | 0.0131 | 30 |
| 0.0063 | 0.0207 | 31 |
| 0.0067 | 0.0129 | 32 |
| 0.0062 | 0.0152 | 33 |
| 0.0056 | 0.0148 | 34 |
| 0.0056 | 0.0171 | 35 |
| 0.0051 | 0.0154 | 36 |
| 0.0049 | 0.0172 | 37 |
| 0.0049 | 0.0180 | 38 |
| 0.0056 | 0.0168 | 39 |
| 0.0050 | 0.0142 | 40 |
| 0.0048 | 0.0165 | 41 |
| 0.0051 | 0.0195 | 42 |
| 0.0048 | 0.0232 | 43 |
| 0.0042 | 0.0208 | 44 |
| 0.0041 | 0.0249 | 45 |
| 0.0044 | 0.0220 | 46 |
| 0.0041 | 0.0234 | 47 |
| 0.0042 | 0.0198 | 48 |
| 0.0040 | 0.0282 | 49 |
| 0.0039 | 0.0251 | 50 |
| 0.0039 | 0.0302 | 51 |
| 0.0041 | 0.0219 | 52 |
| 0.0040 | 0.0187 | 53 |
| 0.0039 | 0.0203 | 54 |
| 0.0043 | 0.0180 | 55 |
| 0.0051 | 0.0150 | 56 |
| 0.0079 | 0.0205 | 57 |
| 0.0052 | 0.0152 | 58 |
| 0.0049 | 0.0133 | 59 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sehiro/LINE-ct2-jp
|
sehiro
| 2023-08-20T01:03:27Z | 0 | 2 | null |
[
"ja",
"license:apache-2.0",
"region:us"
] | null | 2023-08-20T00:46:02Z |
---
license: apache-2.0
language:
- ja
---
This repository contains https://huggingface.co/line-corporation/japanese-large-lm-3.6b-instruction-sft converted for use with CTranslate2.
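A minimal generation sketch with CTranslate2 is shown below. The local directory name, prompt format and generation settings are assumptions; the tokenizer is taken from the original LINE repository.
```python
import ctranslate2
import transformers

# Assumption: "line-ct2-jp" is a local directory containing the converted model files
generator = ctranslate2.Generator("line-ct2-jp", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False
)

prompt = "ユーザー: 日本の首都はどこですか?\nシステム: "  # "User: What is the capital of Japan? / System:"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=64, sampling_topk=1)
output_ids = results[0].sequences_ids[0]  # includes the prompt tokens by default
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```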
|
pssubitha/llama2_sf_v01
|
pssubitha
| 2023-08-20T00:11:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-20T00:11:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
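A minimal loading sketch is shown below. The base model identifier is an assumption (it is not stated in this card; check `adapter_config.json` for the actual `base_model_name_or_path`), and the quantization settings mirror the training configuration above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: replace with the base model from adapter_config.json
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "pssubitha/llama2_sf_v01")
```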
|
Daniel2tio/ppo-LunarLander-v2
|
Daniel2tio
| 2023-08-20T00:03:16Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-19T12:58:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.37 +/- 18.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption based on the usual course convention; check the repository files):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("Daniel2tio/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
mean_reward, std_reward = evaluate_policy(model, gym.make("LunarLander-v2"), n_eval_episodes=10)
```
|
TuringRM/stable-diffusion-v1-5
|
TuringRM
| 2023-08-20T00:01:32Z | 35 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-20T00:00:45Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
duplicated_from: runwayml/stable-diffusion-v1-5
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
linyangnyc/finetuning-sentiment-model-3000-samples
|
linyangnyc
| 2023-08-19T23:59:55Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-19T23:29:42Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8637873754152824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3334
- Accuracy: 0.8633
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
smangrul/peft-lora-DeciCoder1b-personal-copilot-A100-40GB-colab
|
smangrul
| 2023-08-19T23:36:06Z | 5 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:Deci/DeciCoder-1b",
"base_model:adapter:Deci/DeciCoder-1b",
"license:apache-2.0",
"region:us"
] | null | 2023-08-19T19:59:05Z |
---
license: apache-2.0
base_model: Deci/DeciCoder-1b
tags:
- generated_from_trainer
model-index:
- name: peft-lora-DeciCoder1b-personal-copilot-A100-40GB-colab
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-DeciCoder1b-personal-copilot-A100-40GB-colab
This model is a fine-tuned version of [Deci/DeciCoder-1b](https://huggingface.co/Deci/DeciCoder-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.831 | 0.05 | 100 | 0.7074 |
| 0.7092 | 0.1 | 200 | 0.5904 |
| 0.6971 | 0.15 | 300 | 0.5637 |
| 0.8405 | 0.2 | 400 | 0.5363 |
| 0.6548 | 0.25 | 500 | 0.5126 |
| 0.6022 | 0.3 | 600 | 0.4634 |
| 0.6568 | 0.35 | 700 | 0.4529 |
| 0.87 | 0.4 | 800 | 0.4491 |
| 0.4818 | 0.45 | 900 | 0.4438 |
| 0.5067 | 0.5 | 1000 | 0.4117 |
| 0.4578 | 0.55 | 1100 | 0.4044 |
| 0.5909 | 0.6 | 1200 | 0.4041 |
| 0.3646 | 0.65 | 1300 | 0.4027 |
| 0.4597 | 0.7 | 1400 | 0.3963 |
| 0.3385 | 0.75 | 1500 | 0.3935 |
| 0.2696 | 0.8 | 1600 | 0.3955 |
| 0.3011 | 0.85 | 1700 | 0.3966 |
| 0.2931 | 0.9 | 1800 | 0.3980 |
| 0.2904 | 0.95 | 1900 | 0.3978 |
| 0.2669 | 1.0 | 2000 | 0.3977 |
### Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
thinkermode/sdxl-db-powerstar
|
thinkermode
| 2023-08-19T23:36:05Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-19T23:35:59Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: powerstar
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
AlexWortega/blip2-opt-2.7b-db_sasha
|
AlexWortega
| 2023-08-19T23:28:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-19T23:28:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
yangwang825/mert-base
|
yangwang825
| 2023-08-19T22:55:03Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mert_model",
"feature-extraction",
"audio",
"music",
"audio-classification",
"custom_code",
"region:us"
] |
audio-classification
| 2023-08-06T12:21:42Z |
---
pipeline_tag: audio-classification
tags:
- audio
- music
---
# MERT
MERT (Acoustic Music Understanding Model with Large-Scale Self-supervised Training) incorporates teacher models to provide pseudo labels in the masked language modelling (MLM) style acoustic pre-training.
The pre-trained weights of MERT came from [m-a-p/MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M). In this repository, we registered MERT for [AutoModelForAudioClassification](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForAudioClassification) auto class.
## Usage
```python
import numpy as np
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification
# Some configurations
model_id = 'yangwang825/mert-base'
batch_size = 4
num_classes = 10
max_duration = 1.0
# Initialise the extractor and model
feature_extractor = AutoFeatureExtractor.from_pretrained(
model_id,
trust_remote_code=True
)
mert = AutoModelForAudioClassification.from_pretrained(
model_id,
num_labels=num_classes,
ignore_mismatched_sizes=True,
trust_remote_code=True
)
# Simulate a list of waveforms (e.g. four audio clips)
audio_arrays = [
np.random.rand(16000, ),
np.random.rand(24000, ),
np.random.rand(22050, ),
np.random.rand(44100, )
]
inputs = feature_extractor(
audio_arrays, # List of waveforms in numpy array format
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
padding='max_length',
truncation=True,
return_tensors='pt'
)
# The shape of `input_values` is (batch_size, sample_rate * max_duration)
input_values = inputs['input_values']
outputs = mert(**inputs)
# The shape of `logits` is (batch_size, num_classes)
logits = outputs['logits']
```
|
Felipe474/distilhubert-finetuned-gtzan
|
Felipe474
| 2023-08-19T22:54:30Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-23T19:05:47Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1278 | 1.0 | 113 | 1.9945 | 0.46 |
| 1.422 | 2.0 | 226 | 1.3210 | 0.63 |
| 1.0769 | 3.0 | 339 | 0.9838 | 0.77 |
| 0.8781 | 4.0 | 452 | 0.8076 | 0.75 |
| 0.6584 | 5.0 | 565 | 0.6962 | 0.79 |
| 0.4766 | 6.0 | 678 | 0.5555 | 0.84 |
| 0.3916 | 7.0 | 791 | 0.5909 | 0.84 |
| 0.1187 | 8.0 | 904 | 0.6129 | 0.81 |
| 0.1442 | 9.0 | 1017 | 0.7126 | 0.79 |
| 0.1238 | 10.0 | 1130 | 0.8089 | 0.8 |
| 0.0291 | 11.0 | 1243 | 0.8908 | 0.79 |
| 0.0821 | 12.0 | 1356 | 0.8962 | 0.81 |
| 0.0104 | 13.0 | 1469 | 0.8957 | 0.81 |
| 0.0311 | 14.0 | 1582 | 0.9264 | 0.81 |
| 0.0107 | 15.0 | 1695 | 0.9492 | 0.81 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
edwsiew/setfit-finetuned-tech-sentiment-setfit-16-20-2
|
edwsiew
| 2023-08-19T22:19:34Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-19T22:19:14Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# edwsiew/setfit-finetuned-tech-sentiment-setfit-16-20-2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("edwsiew/setfit-finetuned-tech-sentiment-setfit-16-20-2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
KingKazma/xsum_6789_5000000_2500000_v1_train
|
KingKazma
| 2023-08-19T22:14:00Z | 5 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-08-19T22:13:56Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# xsum_6789_5000000_2500000_v1_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/xsum_6789_5000000_2500000_v1_train")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 1005
* Number of training documents: 204045
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - league - one - police - also | 5 | -1_said_league_one_police |
| 0 | labour - eu - referendum - brexit - vote | 120934 | 0_labour_eu_referendum_brexit |
| 1 | cricket - wicket - batsman - test - bowler | 4809 | 1_cricket_wicket_batsman_test |
| 2 | murder - det - man - police - insp | 3540 | 2_murder_det_man_police |
| 3 | rugby - scarlets - lions - ospreys - coach | 2550 | 3_rugby_scarlets_lions_ospreys |
| 4 | school - education - pupil - teacher - schools | 1713 | 4_school_education_pupil_teacher |
| 5 | rail - train - transport - rmt - bridge | 1694 | 5_rail_train_transport_rmt |
| 6 | celtic - dundee - rangers - thistle - aberdeen | 1642 | 6_celtic_dundee_rangers_thistle |
| 7 | syrian - syria - turkey - kurdish - iraqi | 1310 | 7_syrian_syria_turkey_kurdish |
| 8 | nhs - patient - hospital - care - health | 1296 | 8_nhs_patient_hospital_care |
| 9 | fire - blaze - firefighter - smoke - rescue | 1086 | 9_fire_blaze_firefighter_smoke |
| 10 | foul - footed - kick - box - town | 1015 | 10_foul_footed_kick_box |
| 11 | fight - boxing - mayweather - fury - wba | 995 | 11_fight_boxing_mayweather_fury |
| 12 | mercedes - f1 - hamilton - rosberg - race | 995 | 12_mercedes_f1_hamilton_rosberg |
| 13 | space - earth - planet - mission - spacecraft | 989 | 13_space_earth_planet_mission |
| 14 | murray - tennis - wimbledon - slam - djokovic | 966 | 14_murray_tennis_wimbledon_slam |
| 15 | cancer - disease - cell - brain - patient | 961 | 15_cancer_disease_cell_brain |
| 16 | collision - crash - road - driver - junction | 954 | 16_collision_crash_road_driver |
| 17 | coastguard - lifeboat - rnli - rescue - search | 948 | 17_coastguard_lifeboat_rnli_rescue |
| 18 | dog - animal - rspca - cat - dogs | 824 | 18_dog_animal_rspca_cat |
| 19 | indecent - sexual - rape - assault - sex | 796 | 19_indecent_sexual_rape_assault |
| 20 | trump - clinton - republican - trumps - hillary | 795 | 20_trump_clinton_republican_trumps |
| 21 | film - actor - star - movie - drama | 771 | 21_film_actor_star_movie |
| 22 | golf - mcilroy - birdie - pga - par | 752 | 22_golf_mcilroy_birdie_pga |
| 23 | ukraine - russian - russia - ukrainian - putin | 707 | 23_ukraine_russian_russia_ukrainian |
| 24 | dedicated - transfer - appearance - page - latest | 702 | 24_dedicated_transfer_appearance_page |
| 25 | boko - haram - sudan - nigeria - rwanda | 685 | 25_boko_haram_sudan_nigeria |
| 26 | data - security - password - hacker - malware | 651 | 26_data_security_password_hacker |
| 27 | maduro - venezuela - venezuelan - gang - mexico | 647 | 27_maduro_venezuela_venezuelan_gang |
| 28 | album - song - band - music - chart | 647 | 28_album_song_band_music |
| 29 | medal - gold - olympic - rio - championships | 616 | 29_medal_gold_olympic_rio |
| 30 | yn - ar - wedi - ei - bod | 612 | 30_yn_ar_wedi_ei |
| 31 | taliban - pakistan - afghan - afghanistan - pakistani | 560 | 31_taliban_pakistan_afghan_afghanistan |
| 32 | gun - shooting - officer - police - black | 507 | 32_gun_shooting_officer_police |
| 33 | snooker - frame - osullivan - selby - ding | 481 | 33_snooker_frame_osullivan_selby |
| 34 | ghana - african - cameroon - burkina - nations | 468 | 34_ghana_african_cameroon_burkina |
| 35 | flood - flooding - rain - warning - water | 467 | 35_flood_flooding_rain_warning |
| 36 | korea - korean - north - kim - missile | 458 | 36_korea_korean_north_kim |
| 37 | zoo - elephant - animal - rhino - lion | 448 | 37_zoo_elephant_animal_rhino |
| 38 | dup - sinn - fin - unionist - stormont | 444 | 38_dup_sinn_fin_unionist |
| 39 | chelsea - tottenham - manchester - arsenal - hotspur | 437 | 39_chelsea_tottenham_manchester_arsenal |
| 40 | zuma - anc - mandela - kenyatta - mugabe | 429 | 40_zuma_anc_mandela_kenyatta |
| 41 | tax - wage - pension - chancellor - osborne | 408 | 41_tax_wage_pension_chancellor |
| 42 | quake - earthquake - nepal - rain - kathmandu | 398 | 42_quake_earthquake_nepal_rain |
| 43 | airport - heathrow - runway - airports - gatwick | 383 | 43_airport_heathrow_runway_airports |
| 44 | jockey - horse - stakes - trainer - racing | 382 | 44_jockey_horse_stakes_trainer |
| 45 | prison - prisoner - prisons - inmate - hmp | 380 | 45_prison_prisoner_prisons_inmate |
| 46 | planning - development - council - site - housing | 365 | 46_planning_development_council_site |
| 47 | delhi - modi - india - bjp - indias | 363 | 47_delhi_modi_india_bjp |
| 48 | derry - donegal - dundalk - tyrone - monaghan | 337 | 48_derry_donegal_dundalk_tyrone |
| 49 | migrant - asylum - refugee - migrants - greece | 323 | 49_migrant_asylum_refugee_migrants |
| 50 | wigan - replacements - super - castleford - warrington | 320 | 50_wigan_replacements_super_castleford |
| 51 | tesco - store - sale - retailer - morrisons | 299 | 51_tesco_store_sale_retailer |
| 52 | madrid - barcelona - bayern - mnchen - fc | 298 | 52_madrid_barcelona_bayern_mnchen |
| 53 | art - painting - gallery - artist - exhibition | 297 | 53_art_painting_gallery_artist |
| 54 | doping - iaaf - athlete - antidoping - wada | 290 | 54_doping_iaaf_athlete_antidoping |
| 55 | israel - palestinian - israeli - palestinians - gaza | 288 | 55_israel_palestinian_israeli_palestinians |
| 56 | paris - french - attack - brussels - abdeslam | 278 | 56_paris_french_attack_brussels |
| 57 | samsung - apple - iphone - phone - smartphone | 277 | 57_samsung_apple_iphone_phone |
| 58 | unsupported - updated - playback - device - media | 271 | 58_unsupported_updated_playback_device |
| 59 | roman - archaeologist - coin - hoard - museum | 270 | 59_roman_archaeologist_coin_hoard |
| 60 | macron - pen - fillon - sarkozy - fn | 269 | 60_macron_pen_fillon_sarkozy |
| 61 | wind - energy - turbine - lagoon - tidal | 269 | 61_wind_energy_turbine_lagoon |
| 62 | fraud - crown - money - court - false | 267 | 62_fraud_crown_money_court |
| 63 | council - budget - tax - councils - local | 261 | 63_council_budget_tax_councils |
| 64 | bird - rspb - wildlife - birds - eagle | 258 | 64_bird_rspb_wildlife_birds |
| 65 | prince - duchess - duke - princess - queen | 250 | 65_prince_duchess_duke_princess |
| 66 | ftse - pound - shares - share - index | 240 | 66_ftse_pound_shares_share |
| 67 | syria - terrorism - terrorist - emwazi - islamic | 239 | 67_syria_terrorism_terrorist_emwazi |
| 68 | somme - memorial - war - battle - soldier | 239 | 68_somme_memorial_war_battle |
| 69 | suu - kyi - rohingya - thailand - thai | 237 | 69_suu_kyi_rohingya_thailand |
| 70 | bank - rbs - banking - lloyds - barclays | 237 | 70_bank_rbs_banking_lloyds |
| 71 | hong - kong - china - chinese - bo | 236 | 71_hong_kong_china_chinese |
| 72 | fish - fishing - salmon - marine - fishery | 230 | 72_fish_fishing_salmon_marine |
| 73 | google - ad - facebook - user - advertising | 221 | 73_google_ad_facebook_user |
| 74 | hillsborough - disaster - 1989 - 96 - inquest | 221 | 74_hillsborough_disaster_1989_96 |
| 75 | inflation - rate - growth - economist - manufacturing | 220 | 75_inflation_rate_growth_economist |
| 76 | updated - gmt - bst - 2017 - 2016 | 219 | 76_updated_gmt_bst_2017 |
| 77 | bromley - replaces - substitution - barrow - ferriby | 210 | 77_bromley_replaces_substitution_barrow |
| 78 | dunlop - tt - superbike - race - supersport | 206 | 78_dunlop_tt_superbike_race |
| 79 | smoking - tobacco - ecigarettes - smoker - cigarette | 196 | 79_smoking_tobacco_ecigarettes_smoker |
| 80 | bishop - church - archbishop - diocese - marriage | 194 | 80_bishop_church_archbishop_diocese |
| 81 | froome - rider - stage - 1min - sky | 194 | 81_froome_rider_stage_1min |
| 82 | fifa - blatter - platini - fifas - sepp | 187 | 82_fifa_blatter_platini_fifas |
| 83 | broadband - bt - ofcom - openreach - superfast | 183 | 83_broadband_bt_ofcom_openreach |
| 84 | console - vr - nintendo - oculus - xbox | 183 | 84_console_vr_nintendo_oculus |
| 85 | ebola - sierra - leone - liberia - outbreak | 182 | 85_ebola_sierra_leone_liberia |
| 86 | book - novel - author - prize - writer | 182 | 86_book_novel_author_prize |
| 87 | drug - cocaine - cannabis - drugs - supply | 181 | 87_drug_cocaine_cannabis_drugs |
| 88 | drug - cannabis - heroin - psychoactive - substance | 178 | 88_drug_cannabis_heroin_psychoactive |
| 89 | train - tram - rail - railway - raib | 175 | 89_train_tram_rail_railway |
| 90 | steel - tata - talbot - plant - industry | 174 | 90_steel_tata_talbot_plant |
| 91 | nasdaq - sp - dow - 500 - index | 173 | 91_nasdaq_sp_dow_500 |
| 92 | alshabab - somalia - somali - mogadishu - kenya | 171 | 92_alshabab_somalia_somali_mogadishu |
| 93 | greece - greek - eurozone - bailout - greeces | 164 | 93_greece_greek_eurozone_bailout |
| 94 | policing - crime - constable - police - force | 161 | 94_policing_crime_constable_police |
| 95 | bombardier - cseries - manufacturing - job - jti | 160 | 95_bombardier_cseries_manufacturing_job |
| 96 | stadium - club - cardoza - northampton - sixfields | 156 | 96_stadium_club_cardoza_northampton |
| 97 | ticket - stadium - fan - ham - cheapest | 154 | 97_ticket_stadium_fan_ham |
| 98 | bomb - disposal - evacuated - explosive - object | 152 | 98_bomb_disposal_evacuated_explosive |
| 99 | price - mortgage - property - buyer - rics | 151 | 99_price_mortgage_property_buyer |
| 100 | index - benchmark - nikkei - composite - seng | 150 | 100_index_benchmark_nikkei_composite |
| 101 | iran - irans - nuclear - iranian - rouhani | 150 | 101_iran_irans_nuclear_iranian |
| 102 | housing - affordable - rent - circuit - tenant | 150 | 102_housing_affordable_rent_circuit |
| 103 | flight - passenger - airport - plane - aircraft | 150 | 103_flight_passenger_airport_plane |
| 104 | ira - psni - ombudsman - ruc - finucane | 149 | 104_ira_psni_ombudsman_ruc |
| 105 | abedi - ariana - grande - concert - manchester | 142 | 105_abedi_ariana_grande_concert |
| 106 | festival - event - festivals - strathallan - edinburghs | 141 | 106_festival_event_festivals_strathallan |
| 107 | rousseff - petrobras - lula - temer - impeachment | 136 | 107_rousseff_petrobras_lula_temer |
| 108 | eurozone - ecb - inflation - draghi - qe | 136 | 108_eurozone_ecb_inflation_draghi |
| 109 | drone - drones - aircraft - unmanned - dji | 133 | 109_drone_drones_aircraft_unmanned |
| 110 | calais - eurotunnel - migrant - tunnel - camp | 131 | 110_calais_eurotunnel_migrant_tunnel |
| 111 | hodgson - rooney - england - southgate - wayne | 130 | 111_hodgson_rooney_england_southgate |
| 112 | pilot - aaib - aircraft - crash - accidents | 130 | 112_pilot_aaib_aircraft_crash |
| 113 | waste - recycling - bin - rubbish - flytipping | 128 | 113_waste_recycling_bin_rubbish |
| 114 | cuba - cuban - castro - cubans - fidel | 128 | 114_cuba_cuban_castro_cubans |
| 115 | oil - gas - decommissioning - industry - barrel | 127 | 115_oil_gas_decommissioning_industry |
| 116 | whisky - scotch - distillery - beer - wine | 127 | 116_whisky_scotch_distillery_beer |
| 117 | energy - supplier - ofgem - meter - customer | 126 | 117_energy_supplier_ofgem_meter |
| 118 | pope - vatican - francis - cardinal - church | 124 | 118_pope_vatican_francis_cardinal |
| 119 | pp - catalan - catalonia - rajoy - podemos | 123 | 119_pp_catalan_catalonia_rajoy |
| 120 | coleman - u21 - wales - bale - tournament | 123 | 120_coleman_u21_wales_bale |
| 121 | climate - warming - temperature - carbon - co2 | 122 | 121_climate_warming_temperature_carbon |
| 122 | ireland - northern - strachan - lafferty - oneill | 122 | 122_ireland_northern_strachan_lafferty |
| 123 | transfer - premier - appearance - club - koeman | 121 | 123_transfer_premier_appearance_club |
| 124 | kosovo - bosnian - serbia - serb - serbs | 121 | 124_kosovo_bosnian_serbia_serb |
| 125 | libya - gaddafi - libyan - tripoli - libyas | 121 | 125_libya_gaddafi_libyan_tripoli |
| 126 | abortion - termination - foetal - abnormality - pregnancy | 120 | 126_abortion_termination_foetal_abnormality |
| 127 | whale - dolphin - strandings - marine - whales | 120 | 127_whale_dolphin_strandings_marine |
| 128 | pollution - air - diesel - no2 - nitrogen | 119 | 128_pollution_air_diesel_no2 |
| 129 | auschwitz - nazi - holocaust - jews - camp | 118 | 129_auschwitz_nazi_holocaust_jews |
| 130 | hms - ship - navy - shipbuilding - carrier | 115 | 130_hms_ship_navy_shipbuilding |
| 131 | farc - peace - santos - eln - colombian | 115 | 131_farc_peace_santos_eln |
| 132 | morsi - cairo - mubarak - brotherhood - egypt | 113 | 132_morsi_cairo_mubarak_brotherhood |
| 133 | parade - parades - orange - flag - belfast | 112 | 133_parade_parades_orange_flag |
| 134 | everest - climber - mountain - climbing - avalanche | 111 | 134_everest_climber_mountain_climbing |
| 135 | abuse - bennell - football - sfa - fa | 106 | 135_abuse_bennell_football_sfa |
| 136 | fracking - shale - gas - cuadrilla - drilling | 106 | 136_fracking_shale_gas_cuadrilla |
| 137 | vw - emission - volkswagen - diesel - scandal | 106 | 137_vw_emission_volkswagen_diesel |
| 138 | airline - ryanair - aer - lingus - iag | 105 | 138_airline_ryanair_aer_lingus |
| 139 | robot - ai - computer - machine - robots | 104 | 139_robot_ai_computer_machine |
| 140 | raf - bomber - squadron - aircraft - lancaster | 104 | 140_raf_bomber_squadron_aircraft |
| 141 | fossil - dinosaur - homo - specie - bone | 103 | 141_fossil_dinosaur_homo_specie |
| 142 | border - ireland - northern - irish - brexit | 103 | 142_border_ireland_northern_irish |
| 143 | ferry - calmac - ferries - mv - vessel | 102 | 143_ferry_calmac_ferries_mv |
| 144 | inquiry - abuse - inquirys - kincora - survivor | 102 | 144_inquiry_abuse_inquirys_kincora |
| 145 | airbus - boeing - airline - qantas - aircraft | 102 | 145_airbus_boeing_airline_qantas |
| 146 | climate - paris - emission - carbon - agreement | 102 | 146_climate_paris_emission_carbon |
| 147 | nama - cushnahan - cerberus - portfolio - pimco | 101 | 147_nama_cushnahan_cerberus_portfolio |
| 148 | labor - turnbull - abbott - shorten - rudd | 100 | 148_labor_turnbull_abbott_shorten |
| 149 | yemen - houthis - hadi - houthi - sanaa | 99 | 149_yemen_houthis_hadi_houthi |
| 150 | milk - dairy - farmer - farmers - farm | 98 | 150_milk_dairy_farmer_farmers |
| 151 | fed - rate - yellen - feds - growth | 98 | 151_fed_rate_yellen_feds |
| 152 | selfdriving - driverless - autonomous - car - vehicle | 97 | 152_selfdriving_driverless_autonomous_car |
| 153 | nauru - asylum - seeker - australia - manus | 96 | 153_nauru_asylum_seeker_australia |
| 154 | terrorism - arrested - counter - suspicion - instigation | 95 | 154_terrorism_arrested_counter_suspicion |
| 155 | cox - jo - mair - birstall - coxs | 93 | 155_cox_jo_mair_birstall |
| 156 | levien - club - kaplan - takeover - bolton | 93 | 156_levien_club_kaplan_takeover |
| 157 | vaccine - meningitis - vaccination - measles - flu | 92 | 157_vaccine_meningitis_vaccination_measles |
| 158 | refugee - asylum - refugees - syrian - resettlement | 92 | 158_refugee_asylum_refugees_syrian |
| 159 | mali - malian - aqim - tuareg - bamako | 91 | 159_mali_malian_aqim_tuareg |
| 160 | oil - barrel - opec - price - crude | 89 | 160_oil_barrel_opec_price |
| 161 | button - live - bbc - sport - highlights | 88 | 161_button_live_bbc_sport |
| 162 | driving - driver - speed - speeding - road | 88 | 162_driving_driver_speed_speeding |
| 163 | zika - virus - microcephaly - mosquito - pregnant | 87 | 163_zika_virus_microcephaly_mosquito |
| 164 | marathon - runner - running - race - mile | 86 | 164_marathon_runner_running_race |
| 165 | childrens - ofsted - child - inadequate - improvement | 86 | 165_childrens_ofsted_child_inadequate |
| 166 | circulation - scotsman - print - newspaper - herald | 86 | 166_circulation_scotsman_print_newspaper |
| 167 | china - sea - philippines - chinas - island | 84 | 167_china_sea_philippines_chinas |
| 168 | lottery - jackpot - ticket - camelot - prize | 82 | 168_lottery_jackpot_ticket_camelot |
| 169 | tax - hmrc - avoidance - apple - google | 82 | 169_tax_hmrc_avoidance_apple |
| 170 | pollution - delhi - smog - air - beijing | 81 | 170_pollution_delhi_smog_air |
| 171 | charity - kids - batmanghelidjh - fundraising - yentob | 81 | 171_charity_kids_batmanghelidjh_fundraising |
| 172 | ride - smiler - alton - towers - merlin | 80 | 172_ride_smiler_alton_towers |
| 173 | mh370 - plane - debris - search - malaysian | 79 | 173_mh370_plane_debris_search |
| 174 | organ - transplant - donor - donation - kidney | 79 | 174_organ_transplant_donor_donation |
| 175 | smmt - car - psa - nissan - vauxhall | 79 | 175_smmt_car_psa_nissan |
| 176 | pistorius - steenkamp - dewani - reeva - masipa | 77 | 176_pistorius_steenkamp_dewani_reeva |
| 177 | nuclear - reactor - radiation - fukushima - plant | 77 | 177_nuclear_reactor_radiation_fukushima |
| 178 | music - spotify - streaming - album - apple | 76 | 178_music_spotify_streaming_album |
| 179 | iraq - blair - chilcot - inquiry - saddam | 76 | 179_iraq_blair_chilcot_inquiry |
| 180 | gap - gender - pay - maternity - woman | 76 | 180_gap_gender_pay_maternity |
| 181 | antisemitism - jewish - jews - israel - antisemitic | 76 | 181_antisemitism_jewish_jews_israel |
| 182 | strictly - dance - dancing - show - dancer | 76 | 182_strictly_dance_dancing_show |
| 183 | childcare - nursery - fouryearolds - parent - hour | 75 | 183_childcare_nursery_fouryearolds_parent |
| 184 | nba - lakers - cavaliers - warriors - curry | 75 | 184_nba_lakers_cavaliers_warriors |
| 185 | plane - sharm - elsheikh - egyptian - sinai | 74 | 185_plane_sharm_elsheikh_egyptian |
| 186 | norovirus - vomiting - ward - diarrhoea - bug | 74 | 186_norovirus_vomiting_ward_diarrhoea |
| 187 | mayor - devolution - combined - council - elected | 74 | 187_mayor_devolution_combined_council |
| 188 | fire - blaze - wildfire - fires - firefighter | 73 | 188_fire_blaze_wildfire_fires |
| 189 | berlusconi - renzi - berlusconis - italys - italian | 73 | 189_berlusconi_renzi_berlusconis_italys |
| 190 | yamaha - ducati - marquez - rossi - lorenzo | 73 | 190_yamaha_ducati_marquez_rossi |
| 191 | ice - antarctic - glacier - shelf - arctic | 71 | 191_ice_antarctic_glacier_shelf |
| 192 | giants - steelers - devils - panthers - desmarais | 70 | 192_giants_steelers_devils_panthers |
| 193 | disabled - disability - claimant - pip - benefit | 69 | 193_disabled_disability_claimant_pip |
| 194 | fan - marseille - france - stade - french | 68 | 194_fan_marseille_france_stade |
| 195 | pension - annuity - pot - pensions - retirement | 67 | 195_pension_annuity_pot_pensions |
| 196 | bangladesh - dhaka - jamaateislami - secular - bangladeshi | 67 | 196_bangladesh_dhaka_jamaateislami_secular |
| 197 | snow - avalanche - sais - ski - cairngorms | 66 | 197_snow_avalanche_sais_ski |
| 198 | afghanistan - helmand - lcpl - regiment - battalion | 66 | 198_afghanistan_helmand_lcpl_regiment |
| 199 | christmas - santa - halloween - toy - festive | 66 | 199_christmas_santa_halloween_toy |
| 200 | australian - australians - sharrouf - australia - sydney | 66 | 200_australian_australians_sharrouf_australia |
| 201 | assange - wikileaks - embassy - ecuador - assanges | 66 | 201_assange_wikileaks_embassy_ecuador |
| 202 | tv - fm - internetlivestatscom - medium - radio | 65 | 202_tv_fm_internetlivestatscom_medium |
| 203 | blast - explosion - tianjin - firework - fire | 65 | 203_blast_explosion_tianjin_firework |
| 204 | bee - bees - insect - honey - bumblebee | 64 | 204_bee_bees_insect_honey |
| 205 | pte - deepcut - inquest - jamess - 1995 | 64 | 205_pte_deepcut_inquest_jamess |
| 206 | driving - crash - causing - lorry - car | 64 | 206_driving_crash_causing_lorry |
| 207 | afd - cdu - merkels - merkel - kohl | 63 | 207_afd_cdu_merkels_merkel |
| 208 | cladding - tower - grenfell - fire - sprinkler | 63 | 208_cladding_tower_grenfell_fire |
| 209 | antibiotic - bacteria - antibiotics - resistance - infection | 62 | 209_antibiotic_bacteria_antibiotics_resistance |
| 210 | hibs - rangers - pitch - fan - sfa | 62 | 210_hibs_rangers_pitch_fan |
| 211 | abuse - exploitation - sexual - child - domestic | 61 | 211_abuse_exploitation_sexual_child |
| 212 | choi - park - ms - soonsil - parks | 61 | 212_choi_park_ms_soonsil |
| 213 | uber - driver - ubers - kalanick - drivers | 61 | 213_uber_driver_ubers_kalanick |
| 214 | alcohol - drinking - drink - liver - wine | 61 | 214_alcohol_drinking_drink_liver |
| 215 | transgender - marriage - samesex - gay - law | 60 | 215_transgender_marriage_samesex_gay |
| 216 | armstrong - wiggins - uci - cycling - tues | 60 | 216_armstrong_wiggins_uci_cycling |
| 217 | argentina - brazil - netherlands - rica - messi | 59 | 217_argentina_brazil_netherlands_rica |
| 218 | unite - oca - offshore - union - industrial | 59 | 218_unite_oca_offshore_union |
| 219 | trudeau - harper - canadian - canada - ndp | 59 | 219_trudeau_harper_canadian_canada |
| 220 | bullying - turing - sexual - harassment - antibullying | 59 | 220_bullying_turing_sexual_harassment |
| 221 | mine - miner - fyfield - mining - underground | 58 | 221_mine_miner_fyfield_mining |
| 222 | hiv - prep - antiretroviral - virus - aids | 58 | 222_hiv_prep_antiretroviral_virus |
| 223 | care - carers - social - wage - nhs | 57 | 223_care_carers_social_wage |
| 224 | graphene - prize - nobel - prof - material | 57 | 224_graphene_prize_nobel_prof |
| 225 | education - aid - unesco - school - primary | 57 | 225_education_aid_unesco_school |
| 226 | execution - lethal - injection - inmate - drug | 56 | 226_execution_lethal_injection_inmate |
| 227 | raf - concorde - aircraft - f35 - mildenhall | 56 | 227_raf_concorde_aircraft_f35 |
| 228 | fashion - vogue - designer - dress - playboy | 56 | 228_fashion_vogue_designer_dress |
| 229 | wrexham - keates - substitution - ormerod - morrell | 56 | 229_wrexham_keates_substitution_ormerod |
| 230 | limit - alcohol - drinkdrive - driving - drinkdriving | 55 | 230_limit_alcohol_drinkdrive_driving |
| 231 | orchestra - music - concert - musician - proms | 55 | 231_orchestra_music_concert_musician |
| 232 | nfl - patriots - quarterback - brady - touchdown | 54 | 232_nfl_patriots_quarterback_brady |
| 233 | uefa - cardiff - faw - ticket - stadium | 54 | 233_uefa_cardiff_faw_ticket |
| 234 | pier - piers - structure - conwy - colwyn | 53 | 234_pier_piers_structure_conwy |
| 235 | productivity - ons - unemployment - wage - growth | 52 | 235_productivity_ons_unemployment_wage |
| 236 | swim - swimming - severn - swimmer - mile | 52 | 236_swim_swimming_severn_swimmer |
| 237 | pupil - panel - teacher - teaching - conduct | 52 | 237_pupil_panel_teacher_teaching |
| 238 | energy - solar - renewables - renewable - climate | 52 | 238_energy_solar_renewables_renewable |
| 239 | prediction - lawros - lawro - correct - score | 52 | 239_prediction_lawros_lawro_correct |
| 240 | turnberry - golf - trump - menie - beyts | 52 | 240_turnberry_golf_trump_menie |
| 241 | trafficking - slavery - trafficked - victim - exploitation | 51 | 241_trafficking_slavery_trafficked_victim |
| 242 | contactless - payment - card - customer - cash | 51 | 242_contactless_payment_card_customer |
| 243 | school - pupil - pupils - teacher - police | 51 | 243_school_pupil_pupils_teacher |
| 244 | trident - nuclear - submarine - renewal - deterrent | 50 | 244_trident_nuclear_submarine_renewal |
| 245 | licence - bbc - charter - fee - bbcs | 50 | 245_licence_bbc_charter_fee |
| 246 | malaria - parasite - mosquito - vaccine - artemisinin | 50 | 246_malaria_parasite_mosquito_vaccine |
| 247 | burkini - veil - ban - burka - muslim | 50 | 247_burkini_veil_ban_burka |
| 248 | chinas - growth - yuan - china - currency | 50 | 248_chinas_growth_yuan_china |
| 249 | poverty - income - child - household - living | 49 | 249_poverty_income_child_household |
| 250 | science - research - ukri - funding - scientific | 49 | 250_science_research_ukri_funding |
| 251 | eurovision - song - contest - entry - ballad | 49 | 251_eurovision_song_contest_entry |
| 252 | library - libraries - council - book - bookless | 48 | 252_library_libraries_council_book |
| 253 | copyright - dotcom - piracy - content - pirated | 48 | 253_copyright_dotcom_piracy_content |
| 254 | bbcscotlandpics - scotlandpicturesbbccouk - selection - photo - instagram | 48 | 254_bbcscotlandpics_scotlandpicturesbbccouk_selection_photo |
| 255 | ira - disappeared - megraw - buried - iclvr | 47 | 255_ira_disappeared_megraw_buried |
| 256 | juventus - napoli - roma - lazio - genoa | 47 | 256_juventus_napoli_roma_lazio |
| 257 | cosby - constand - cosbys - deposition - comedian | 47 | 257_cosby_constand_cosbys_deposition |
| 258 | women - woman - 100women - feminist - 100 | 47 | 258_women_woman_100women_feminist |
| 259 | parking - badge - council - car - 20mph | 47 | 259_parking_badge_council_car |
| 260 | concussion - rugby - head - injury - protocol | 47 | 260_concussion_rugby_head_injury |
| 261 | bhs - philip - chappell - pension - sir | 47 | 261_bhs_philip_chappell_pension |
| 262 | aluko - uefa - terry - fa - ferdinand | 47 | 262_aluko_uefa_terry_fa |
| 263 | hutch - feud - dublin - garda - regency | 46 | 263_hutch_feud_dublin_garda |
| 264 | bradley - neuroblastoma - lowery - bradleys - blackhall | 46 | 264_bradley_neuroblastoma_lowery_bradleys |
| 265 | legal - aid - justice - magistrates - court | 46 | 265_legal_aid_justice_magistrates |
| 266 | unison - pay - cordia - strike - janitor | 46 | 266_unison_pay_cordia_strike |
| 267 | indonesia - bali - sukumaran - execution - indonesian | 46 | 267_indonesia_bali_sukumaran_execution |
| 268 | suicide - dying - nicklinson - terminally - law | 46 | 268_suicide_dying_nicklinson_terminally |
| 269 | balloon - helium - konyukhov - nightglow - balloons | 46 | 269_balloon_helium_konyukhov_nightglow |
| 270 | volcano - eruption - ash - volcanic - lava | 46 | 270_volcano_eruption_ash_volcanic |
| 271 | wanda - chinese - disney - hollywood - movie | 46 | 271_wanda_chinese_disney_hollywood |
| 272 | duterte - philippines - davao - marcos - dutertes | 46 | 272_duterte_philippines_davao_marcos |
| 273 | yorkshire - tour - depart - cycling - de | 45 | 273_yorkshire_tour_depart_cycling |
| 274 | note - polymer - banknote - bank - notes | 45 | 274_note_polymer_banknote_bank |
| 275 | xinjiang - uighur - uighurs - chinese - kashgar | 45 | 275_xinjiang_uighur_uighurs_chinese |
| 276 | foster - carers - mental - child - care | 45 | 276_foster_carers_mental_child |
| 277 | bitcoin - bitcoins - mtgox - currency - virtual | 45 | 277_bitcoin_bitcoins_mtgox_currency |
| 278 | mortgage - lending - cml - lender - buytolet | 45 | 278_mortgage_lending_cml_lender |
| 279 | campus - college - university - student - building | 45 | 279_campus_college_university_student |
| 280 | depression - breastfeeding - baby - birth - infant | 44 | 280_depression_breastfeeding_baby_birth |
| 281 | unaccompanied - dubs - child - refugee - calais | 44 | 281_unaccompanied_dubs_child_refugee |
| 282 | charlies - charlie - gard - ormond - yates | 44 | 282_charlies_charlie_gard_ormond |
| 283 | inquest - coroner - hospital - embolism - mrs | 44 | 283_inquest_coroner_hospital_embolism |
| 284 | swans - swansea - clement - guidolin - swanseas | 44 | 284_swans_swansea_clement_guidolin |
| 285 | defence - army - reservist - nato - spending | 44 | 285_defence_army_reservist_nato |
| 286 | language - welsh - huws - meri - bilingual | 44 | 286_language_welsh_huws_meri |
| 287 | content - facebook - reddit - video - user | 44 | 287_content_facebook_reddit_video |
| 288 | race - yacht - thomson - cleach - sailing | 44 | 288_race_yacht_thomson_cleach |
| 289 | growth - sector - scottish - quarter - output | 44 | 289_growth_sector_scottish_quarter |
| 290 | queensland - cyclone - snow - weather - storm | 44 | 290_queensland_cyclone_snow_weather |
| 291 | witheridge - thai - koh - tao - zaw | 44 | 291_witheridge_thai_koh_tao |
| 292 | firearm - shooting - incident - jeffers - man | 43 | 292_firearm_shooting_incident_jeffers |
| 293 | breivik - utoeya - breiviks - oslo - norway | 43 | 293_breivik_utoeya_breiviks_oslo |
| 294 | castle - heritage - building - riba - ruffer | 43 | 294_castle_heritage_building_riba |
| 295 | chapecoense - medellin - sudamericana - plane - brazilian | 43 | 295_chapecoense_medellin_sudamericana_plane |
| 296 | whyte - rangers - ticketus - whytes - withey | 43 | 296_whyte_rangers_ticketus_whytes |
| 297 | bird - poultry - avian - flu - h5n8 | 43 | 297_bird_poultry_avian_flu |
| 298 | crematorium - ash - cremation - mortonhall - cremated | 43 | 298_crematorium_ash_cremation_mortonhall |
| 299 | bike - cycling - cycle - cyclist - hire | 43 | 299_bike_cycling_cycle_cyclist |
| 300 | syria - iraq - air - strike - military | 43 | 300_syria_iraq_air_strike |
| 301 | sykes - funeral - sister - fr - nell | 43 | 301_sykes_funeral_sister_fr |
| 302 | glazer - revenue - deloitte - premier - glazers | 43 | 302_glazer_revenue_deloitte_premier |
| 303 | dam - samarco - bhp - mud - mining | 43 | 303_dam_samarco_bhp_mud |
| 304 | fgm - genital - girl - mutilation - female | 43 | 304_fgm_genital_girl_mutilation |
| 305 | bonfire - bonfires - injunction - lit - effigy | 42 | 305_bonfire_bonfires_injunction_lit |
| 306 | madeleine - mccann - madeleines - portuguese - praia | 42 | 306_madeleine_mccann_madeleines_portuguese |
| 307 | ant - robot - ants - insect - robots | 42 | 307_ant_robot_ants_insect |
| 308 | badger - tb - cull - cattle - culling | 42 | 308_badger_tb_cull_cattle |
| 309 | coal - colliery - kellingley - mine - pit | 42 | 309_coal_colliery_kellingley_mine |
| 310 | ferry - ship - sewol - boat - sank | 42 | 310_ferry_ship_sewol_boat |
| 311 | cheese - coli - errington - outbreak - o157 | 42 | 311_cheese_coli_errington_outbreak |
| 312 | borrowing - obr - forecast - deficit - budget | 42 | 312_borrowing_obr_forecast_deficit |
| 313 | twitter - account - propaganda - isis - content | 41 | 313_twitter_account_propaganda_isis |
| 314 | mental - health - camhs - disorder - nhs | 41 | 314_mental_health_camhs_disorder |
| 315 | casement - gaa - dcal - stadium - ni | 41 | 315_casement_gaa_dcal_stadium |
| 316 | fraud - scam - fraudsters - cifas - bank | 41 | 316_fraud_scam_fraudsters_cifas |
| 317 | ew - ewa - cricket - guptill - surrey | 41 | 317_ew_ewa_cricket_guptill |
| 318 | reid - hewett - peifer - houdet - whiley | 41 | 318_reid_hewett_peifer_houdet |
| 319 | 1916 - irish - dublin - rising - easter | 41 | 319_1916_irish_dublin_rising |
| 320 | mossack - fonseca - panama - offshore - papers | 41 | 320_mossack_fonseca_panama_offshore |
| 321 | ico - nuisance - call - calls - tps | 40 | 321_ico_nuisance_call_calls |
| 322 | rupee - cash - note - india - indians | 40 | 322_rupee_cash_note_india |
| 323 | tamil - sri - rajapaksa - sirisena - lankan | 40 | 323_tamil_sri_rajapaksa_sirisena |
| 324 | inquest - hospital - seans - care - ward | 39 | 324_inquest_hospital_seans_care |
| 325 | nafta - trade - lumber - mexico - canada | 39 | 325_nafta_trade_lumber_mexico |
| 326 | mafia - rancadore - ndrangheta - italian - riina | 39 | 326_mafia_rancadore_ndrangheta_italian |
| 327 | food - trussell - bank - welfare - gurr | 38 | 327_food_trussell_bank_welfare |
| 328 | tree - oak - trees - woodland - aspen | 38 | 328_tree_oak_trees_woodland |
| 329 | falklands - falkland - argentine - argentina - islands | 38 | 329_falklands_falkland_argentine_argentina |
| 330 | linfield - crusaders - cliftonville - glenavon - ballymena | 38 | 330_linfield_crusaders_cliftonville_glenavon |
| 331 | storm - texas - houston - tornado - hurricane | 38 | 331_storm_texas_houston_tornado |
| 332 | mh17 - buk - missile - ukraine - dutch | 38 | 332_mh17_buk_missile_ukraine |
| 333 | wigan - wolverhampton - league - wanderers - club | 38 | 333_wigan_wolverhampton_league_wanderers |
| 334 | injunction - hacking - mirror - privacy - phonehacking | 38 | 334_injunction_hacking_mirror_privacy |
| 335 | ecstasy - drug - mdma - heroin - drugs | 37 | 335_ecstasy_drug_mdma_heroin |
| 336 | pool - lido - swimming - leisure - afan | 37 | 336_pool_lido_swimming_leisure |
| 337 | shahid - shahids - shakeel - chaudhry - samia | 37 | 337_shahid_shahids_shakeel_chaudhry |
| 338 | gambling - betting - casino - fobts - machine | 37 | 338_gambling_betting_casino_fobts |
| 339 | pay - shareholder - remuneration - executive - wpp | 37 | 339_pay_shareholder_remuneration_executive |
| 340 | bell - minster - imber - bellringers - ringer | 37 | 340_bell_minster_imber_bellringers |
| 341 | 1500 - gmt - city - albion - middlesbrough | 37 | 341_1500_gmt_city_albion |
| 342 | iii - richard - king - bosworth - 1485 | 37 | 342_iii_richard_king_bosworth |
| 343 | sayyaf - philippines - mindanao - abu - marawi | 36 | 343_sayyaf_philippines_mindanao_abu |
| 344 | clarkson - gear - hammond - tymon - leblanc | 36 | 344_clarkson_gear_hammond_tymon |
| 345 | horsemeat - beef - meat - findus - product | 36 | 345_horsemeat_beef_meat_findus |
| 346 | gb - richardsonwalsh - hinch - hockey - gbs | 36 | 346_gb_richardsonwalsh_hinch_hockey |
| 347 | obamacare - republicans - insurance - senate - healthcare | 35 | 347_obamacare_republicans_insurance_senate |
| 348 | meldonium - sharapova - sharapovas - itf - tennis | 35 | 348_meldonium_sharapova_sharapovas_itf |
| 349 | hickey - oci - thg - mallon - olympic | 35 | 349_hickey_oci_thg_mallon |
| 350 | javeed - murder - mistry - zarif - aamir | 35 | 350_javeed_murder_mistry_zarif |
| 351 | takata - airbags - inflator - airbag - recall | 35 | 351_takata_airbags_inflator_airbag |
| 352 | lubitz - cockpit - germanwings - lufthansa - copilot | 35 | 352_lubitz_cockpit_germanwings_lufthansa |
| 353 | percy - trust - sparrowhawk - connor - southern | 35 | 353_percy_trust_sparrowhawk_connor |
| 354 | martelly - haiti - moise - haitis - celestin | 35 | 354_martelly_haiti_moise_haitis |
| 355 | fox - murdoch - rupert - murdochs - sky | 35 | 355_fox_murdoch_rupert_murdochs |
| 356 | pirate - piracy - somali - pirates - vessel | 35 | 356_pirate_piracy_somali_pirates |
| 357 | rea - sykes - kawasaki - rider - chaz | 35 | 357_rea_sykes_kawasaki_rider |
| 358 | mbe - honorary - service - honour - obe | 35 | 358_mbe_honorary_service_honour |
| 359 | dementia - alzheimers - diagnosis - disease - care | 34 | 359_dementia_alzheimers_diagnosis_disease |
| 360 | hate - crime - disability - racist - victim | 34 | 360_hate_crime_disability_racist |
| 361 | lcpl - cpl - dunsby - maher - mod | 34 | 361_lcpl_cpl_dunsby_maher |
| 362 | cologne - asylum - german - germany - merkel | 34 | 362_cologne_asylum_german_germany |
| 363 | toshiba - toshibas - foxconn - westinghouse - yen | 34 | 363_toshiba_toshibas_foxconn_westinghouse |
| 364 | poland - pis - polands - polish - duda | 34 | 364_poland_pis_polands_polish |
| 365 | ban - visa - trumps - order - refugee | 34 | 365_ban_visa_trumps_order |
| 366 | gmb - unite - ucatt - union - refuse | 34 | 366_gmb_unite_ucatt_union |
| 367 | uber - taxi - hire - tfl - cab | 34 | 367_uber_taxi_hire_tfl |
| 368 | ladies - chelsea - birmingham - manchester - women | 34 | 368_ladies_chelsea_birmingham_manchester |
| 369 | rig - transocean - dalmore - salvage - towed | 34 | 369_rig_transocean_dalmore_salvage |
| 370 | s4c - s4cs - language - welsh - channel | 34 | 370_s4c_s4cs_language_welsh |
| 371 | clown - clowns - craze - creepy - dressed | 34 | 371_clown_clowns_craze_creepy |
| 372 | garda - mccabe - sochna - callinan - osullivan | 34 | 372_garda_mccabe_sochna_callinan |
| 373 | pogba - juventus - mendy - matic - club | 33 | 373_pogba_juventus_mendy_matic |
| 374 | funeral - cremation - burial - cost - crematorium | 33 | 374_funeral_cremation_burial_cost |
| 375 | qatar - uae - saudi - qatars - qatari | 33 | 375_qatar_uae_saudi_qatars |
| 376 | rhi - foster - scheme - renewable - arlene | 33 | 376_rhi_foster_scheme_renewable |
| 377 | ford - sale - motors - toyota - gm | 33 | 377_ford_sale_motors_toyota |
| 378 | fire - fbu - firefighter - brigades - brigade | 33 | 378_fire_fbu_firefighter_brigades |
| 379 | dvla - taxi - driver - licence - licensing | 33 | 379_dvla_taxi_driver_licence |
| 380 | bale - bales - madrid - belgium - zidane | 33 | 380_bale_bales_madrid_belgium |
| 381 | condor - ferry - poole - guernsey - liberation | 32 | 381_condor_ferry_poole_guernsey |
| 382 | massaro - willstrop - 115 - 117 - matthew | 32 | 382_massaro_willstrop_115_117 |
| 383 | ilott - judge - mother - haringey - boy | 32 | 383_ilott_judge_mother_haringey |
| 384 | mckeague - corrie - edmunds - suffolk - urquhart | 32 | 384_mckeague_corrie_edmunds_suffolk |
| 385 | tpp - trade - wto - agreement - deal | 32 | 385_tpp_trade_wto_agreement |
| 386 | fake - facebook - trending - news - facebooks | 32 | 386_fake_facebook_trending_news |
| 387 | lighting - light - leap - clock - bulb | 32 | 387_lighting_light_leap_clock |
| 388 | wilders - rutte - vvd - dutch - pvv | 32 | 388_wilders_rutte_vvd_dutch |
| 389 | temperature - weather - rainfall - rain - recorded | 32 | 389_temperature_weather_rainfall_rain |
| 390 | palmyra - ancient - antiquity - syrian - ruin | 32 | 390_palmyra_ancient_antiquity_syrian |
| 391 | tesla - musk - electric - teslas - model | 32 | 391_tesla_musk_electric_teslas |
| 392 | evans - johnson - sexual - sunderland - mcdonald | 31 | 392_evans_johnson_sexual_sunderland |
| 393 | halawa - halawas - egyptian - ibrahim - alfath | 31 | 393_halawa_halawas_egyptian_ibrahim |
| 394 | rahman - hamlets - lutfur - tower - mawrey | 31 | 394_rahman_hamlets_lutfur_tower |
| 395 | ashley - sports - shirebrook - direct - hellawell | 31 | 395_ashley_sports_shirebrook_direct |
| 396 | submarine - trident - nuclear - submarines - faslane | 31 | 396_submarine_trident_nuclear_submarines |
| 397 | africa - african - africas - china - chinese | 31 | 397_africa_african_africas_china |
| 398 | hepatitis - blood - infected - hiv - transfusion | 31 | 398_hepatitis_blood_infected_hiv |
| 399 | charlottesville - supremacist - white - statue - confederate | 31 | 399_charlottesville_supremacist_white_statue |
| 400 | scam - fraud - scams - scammer - victim | 31 | 400_scam_fraud_scams_scammer |
| 401 | water - dee - trent - severn - customer | 30 | 401_water_dee_trent_severn |
| 402 | meal - school - lunch - meals - child | 30 | 402_meal_school_lunch_meals |
| 403 | mexico - mexican - trump - immigration - immigrant | 30 | 403_mexico_mexican_trump_immigration |
| 404 | piccard - impulse - leg - borschberg - solar | 30 | 404_piccard_impulse_leg_borschberg |
| 405 | alcohol - liquor - bihar - drinking - methanol | 30 | 405_alcohol_liquor_bihar_drinking |
| 406 | thailand - myanmar - rohingya - migrant - malaysia | 30 | 406_thailand_myanmar_rohingya_migrant |
| 407 | ash - tree - dieback - fungus - juniper | 30 | 407_ash_tree_dieback_fungus |
| 408 | apprenticeship - apprenticeships - employer - apprentice - skills | 30 | 408_apprenticeship_apprenticeships_employer_apprentice |
| 409 | mers - virus - camel - coronavirus - respiratory | 30 | 409_mers_virus_camel_coronavirus |
| 410 | howard - arlene - arkinson - castlederg - arlenes | 30 | 410_howard_arlene_arkinson_castlederg |
| 411 | spying - nsa - intelligence - spy - merkel | 30 | 411_spying_nsa_intelligence_spy |
| 412 | gay - homosexuality - homosexual - samesex - lgbt | 29 | 412_gay_homosexuality_homosexual_samesex |
| 413 | rio - olympic - games - olympics - brazil | 29 | 413_rio_olympic_games_olympics |
| 414 | evans - sheffield - ched - oldham - club | 29 | 414_evans_sheffield_ched_oldham |
| 415 | poppy - fifa - armband - wear - fifas | 29 | 415_poppy_fifa_armband_wear |
| 416 | cow - beef - slaughter - hindu - meat | 29 | 416_cow_beef_slaughter_hindu |
| 417 | mcevoy - adjudication - plaid - councillor - tribunal | 28 | 417_mcevoy_adjudication_plaid_councillor |
| 418 | bag - 5p - waste - pig - plastic | 28 | 418_bag_5p_waste_pig |
| 419 | insurance - premium - whiplash - insurer - abi | 28 | 419_insurance_premium_whiplash_insurer |
| 420 | hut - camping - loch - park - mooring | 28 | 420_hut_camping_loch_park |
| 421 | boaty - ocean - sub - mcboatface - polar | 28 | 421_boaty_ocean_sub_mcboatface |
| 422 | teff - farmer - crop - agriculture - meat | 28 | 422_teff_farmer_crop_agriculture |
| 423 | homeless - homelessness - rough - housing - shelter | 28 | 423_homeless_homelessness_rough_housing |
| 424 | crofting - crofter - grazing - crofters - commission | 28 | 424_crofting_crofter_grazing_crofters |
| 425 | muamba - fabrice - cardiac - defibrillator - muambas | 27 | 425_muamba_fabrice_cardiac_defibrillator |
| 426 | ariana - concert - grande - manchester - arena | 27 | 426_ariana_concert_grande_manchester |
| 427 | tick - rabies - lyme - disease - dog | 27 | 427_tick_rabies_lyme_disease |
| 428 | begley - taser - adunbi - hegarty - curnow | 27 | 428_begley_taser_adunbi_hegarty |
| 429 | museum - gallery - visitor - tate - exhibition | 27 | 429_museum_gallery_visitor_tate |
| 430 | woman - gender - science - stem - female | 27 | 430_woman_gender_science_stem |
| 431 | pokemon - niantic - augmented - gos - pokestops | 27 | 431_pokemon_niantic_augmented_gos |
| 432 | tech - specialisms - foreignowned - headquartered - logo | 27 | 432_tech_specialisms_foreignowned_headquartered |
| 433 | bp - spill - deepwater - oil - rig | 27 | 433_bp_spill_deepwater_oil |
| 434 | poppi - worthington - poppis - cumbria - inquest | 27 | 434_poppi_worthington_poppis_cumbria |
| 435 | cqc - care - inspection - resident - inspectorate | 26 | 435_cqc_care_inspection_resident |
| 436 | zika - golf - mcilroy - olympics - virus | 26 | 436_zika_golf_mcilroy_olympics |
| 437 | mine - platinum - marikana - halo - mines | 26 | 437_mine_platinum_marikana_halo |
| 438 | shark - beach - sharks - fanning - surfer | 26 | 438_shark_beach_sharks_fanning |
| 439 | fitbit - watch - wearable - smartwatch - apple | 26 | 439_fitbit_watch_wearable_smartwatch |
| 440 | lego - legos - toy - wars - brick | 26 | 440_lego_legos_toy_wars |
| 441 | japans - abenomics - yen - japan - stimulus | 26 | 441_japans_abenomics_yen_japan |
| 442 | butterfly - specie - frog - fungus - species | 26 | 442_butterfly_specie_frog_fungus |
| 443 | gay - fashanu - hitzlsperger - footballer - homophobia | 26 | 443_gay_fashanu_hitzlsperger_footballer |
| 444 | paterson - spire - mastectomy - breast - lump | 25 | 444_paterson_spire_mastectomy_breast |
| 445 | visit - trump - petition - trumps - ban | 25 | 445_visit_trump_petition_trumps |
| 446 | famine - drought - sudan - somalia - aid | 25 | 446_famine_drought_sudan_somalia |
| 447 | expectancy - mortality - ageing - age - ons | 25 | 447_expectancy_mortality_ageing_age |
| 448 | anglo - irish - fitzpatrick - bank - bailout | 25 | 448_anglo_irish_fitzpatrick_bank |
| 449 | coulter - chhokar - ronnie - ebrahimi - chhokars | 25 | 449_coulter_chhokar_ronnie_ebrahimi |
| 450 | brittan - abuse - allegation - lord - dickens | 25 | 450_brittan_abuse_allegation_lord |
| 451 | pharmacy - patient - nhs - dental - dentistry | 25 | 451_pharmacy_patient_nhs_dental |
| 452 | pipeline - keystone - xl - oil - alberta | 25 | 452_pipeline_keystone_xl_oil |
| 453 | employment - unemployment - rate - permanent - temporary | 25 | 453_employment_unemployment_rate_permanent |
| 454 | seal - pup - horsey - seals - grey | 25 | 454_seal_pup_horsey_seals |
| 455 | rally - snowman - spectator - provan - clark | 25 | 455_rally_snowman_spectator_provan |
| 456 | nasheed - maldives - yameen - adeeb - nasheeds | 25 | 456_nasheed_maldives_yameen_adeeb |
| 457 | picture - please - submit - pictures - publish | 25 | 457_picture_please_submit_pictures |
| 458 | refugee - syrians - jordan - jordanian - camp | 25 | 458_refugee_syrians_jordan_jordanian |
| 459 | abortion - clinic - texas - abortions - parenthood | 25 | 459_abortion_clinic_texas_abortions |
| 460 | orban - ceu - soros - hungarian - hungary | 25 | 460_orban_ceu_soros_hungarian |
| 461 | qatar - worker - qatars - workers - amnesty | 25 | 461_qatar_worker_qatars_workers |
| 462 | coulson - mulcaire - hacking - wallis - goodman | 24 | 462_coulson_mulcaire_hacking_wallis |
| 463 | pitch - fixture - postponed - rain - unplayable | 24 | 463_pitch_fixture_postponed_rain |
| 464 | sinkhole - hole - fontmell - floridas - crater | 24 | 464_sinkhole_hole_fontmell_floridas |
| 465 | didcot - demolition - huxtable - rwe - npower | 24 | 465_didcot_demolition_huxtable_rwe |
| 466 | argentina - hedge - defaulted - argentinas - bondholder | 24 | 466_argentina_hedge_defaulted_argentinas |
| 467 | caf - hayatou - ahmad - nff - pinnick | 24 | 467_caf_hayatou_ahmad_nff |
| 468 | pipeline - dakota - sioux - tribe - native | 24 | 468_pipeline_dakota_sioux_tribe |
| 469 | ceta - wallonia - trade - ttip - walloon | 24 | 469_ceta_wallonia_trade_ttip |
| 470 | screen - internet - online - parent - tablet | 24 | 470_screen_internet_online_parent |
| 471 | extremism - extremist - muslim - prevent - radicalisation | 24 | 471_extremism_extremist_muslim_prevent |
| 472 | blackman - marine - blackmans - marines - martial | 24 | 472_blackman_marine_blackmans_marines |
| 473 | cubs - baseball - curse - pitcher - series | 24 | 473_cubs_baseball_curse_pitcher |
| 474 | ira - sinn - mcguigan - fin - provisional | 24 | 474_ira_sinn_mcguigan_fin |
| 475 | cocaine - makayabella - hamal - nca - drug | 24 | 475_cocaine_makayabella_hamal_nca |
| 476 | mask - edl - protester - protest - shenstone | 24 | 476_mask_edl_protester_protest |
| 477 | crime - recorded - rape - offence - shoplifting | 24 | 477_crime_recorded_rape_offence |
| 478 | sex - prostitution - prostitute - trafficking - morrow | 24 | 478_sex_prostitution_prostitute_trafficking |
| 479 | shepherd - christi - cook - thomas - fankhauser | 24 | 479_shepherd_christi_cook_thomas |
| 480 | ipsa - mps - expense - mp - salary | 23 | 480_ipsa_mps_expense_mp |
| 481 | bake - channel - baking - cake - toksvig | 23 | 481_bake_channel_baking_cake |
| 482 | church - abuse - bishop - archbishop - safeguarding | 23 | 482_church_abuse_bishop_archbishop |
| 483 | expedition - antarctic - polar - arctic - ice | 23 | 483_expedition_antarctic_polar_arctic |
| 484 | warnock - bluebirds - cardiff - manga - trollope | 23 | 484_warnock_bluebirds_cardiff_manga |
| 485 | islands - island - bougainville - palau - tuvalu | 23 | 485_islands_island_bougainville_palau |
| 486 | airline - flight - ban - laptop - airlines | 23 | 486_airline_flight_ban_laptop |
| 487 | ponta - decree - romania - bucharest - romanian | 23 | 487_ponta_decree_romania_bucharest |
| 488 | manning - pte - mannings - wikileaks - leavenworth | 23 | 488_manning_pte_mannings_wikileaks |
| 489 | hmrc - rangers - tax - ebts - ebt | 23 | 489_hmrc_rangers_tax_ebts |
| 490 | car - vehicle - remotely - cars - valasek | 23 | 490_car_vehicle_remotely_cars |
| 491 | thistle - ayr - hearts - dundee - rangers | 23 | 491_thistle_ayr_hearts_dundee |
| 492 | mail - parcel - mails - royal - whistl | 23 | 492_mail_parcel_mails_royal |
| 493 | villa - aston - newcastle - kodjia - bromwich | 22 | 493_villa_aston_newcastle_kodjia |
| 494 | tree - felling - trees - diseased - sheffield | 22 | 494_tree_felling_trees_diseased |
| 495 | book - amazon - ebook - nook - kindle | 22 | 495_book_amazon_ebook_nook |
| 496 | utilities - cryptosporidium - water - parasite - ribble | 22 | 496_utilities_cryptosporidium_water_parasite |
| 497 | foi - information - request - cabinet - commissioners | 22 | 497_foi_information_request_cabinet |
| 498 | cyclist - hgvs - cycling - road - lorry | 22 | 498_cyclist_hgvs_cycling_road |
| 499 | happiness - wellbeing - happier - happiest - satisfaction | 22 | 499_happiness_wellbeing_happier_happiest |
| 500 | baby - born - twin - birth - twins | 22 | 500_baby_born_twin_birth |
| 501 | camper - indycamp - camp - parliament - spcb | 22 | 501_camper_indycamp_camp_parliament |
| 502 | malala - malalas - yousafzai - pakistan - swat | 22 | 502_malala_malalas_yousafzai_pakistan |
| 503 | parking - beavis - parkingeye - car - motorist | 22 | 503_parking_beavis_parkingeye_car |
| 504 | polio - vaccine - vaccination - virus - leprosy | 22 | 504_polio_vaccine_vaccination_virus |
| 505 | yuill - lamara - m9 - bell - pirc | 22 | 505_yuill_lamara_m9_bell |
| 506 | libor - hayes - trader - rate - ubs | 22 | 506_libor_hayes_trader_rate |
| 507 | bismarck - jutland - hms - battle - fleet | 22 | 507_bismarck_jutland_hms_battle |
| 508 | baby - blane - towel - mother - babys | 22 | 508_baby_blane_towel_mother |
| 509 | abortion - pregnancy - salvador - foetus - el | 22 | 509_abortion_pregnancy_salvador_foetus |
| 510 | cardinal - pell - ridsdale - ballarat - priest | 21 | 510_cardinal_pell_ridsdale_ballarat |
| 511 | campus - amherst - university - student - mascot | 21 | 511_campus_amherst_university_student |
| 512 | ashers - cake - mcarthur - equality - bakery | 21 | 512_ashers_cake_mcarthur_equality |
| 513 | bull - bullfighting - jallikattu - gored - tamil | 21 | 513_bull_bullfighting_jallikattu_gored |
| 514 | quantum - qubits - photon - computing - computer | 21 | 514_quantum_qubits_photon_computing |
| 515 | hinkley - edf - nuclear - energy - cgn | 21 | 515_hinkley_edf_nuclear_energy |
| 516 | sectarianism - legislation - repeal - act - behaviour | 21 | 516_sectarianism_legislation_repeal_act |
| 517 | tsarnaev - tamerlan - dzhokhar - boston - tsarnaevs | 21 | 517_tsarnaev_tamerlan_dzhokhar_boston |
| 518 | gay - samesex - marriage - uruguay - legalised | 21 | 518_gay_samesex_marriage_uruguay |
| 519 | gaal - slegers - wenger - van - psv | 21 | 519_gaal_slegers_wenger_van |
| 520 | barclays - bank - sfo - libor - ubs | 21 | 520_barclays_bank_sfo_libor |
| 521 | gender - transgender - trans - intersex - hormone | 21 | 521_gender_transgender_trans_intersex |
| 522 | crompton - billings - hillsborough - cromptons - inquest | 21 | 522_crompton_billings_hillsborough_cromptons |
| 523 | jewellery - kardashian - robbery - jewel - container | 21 | 523_jewellery_kardashian_robbery_jewel |
| 524 | dryer - whirlpool - indesit - tumble - hotpoint | 21 | 524_dryer_whirlpool_indesit_tumble |
| 525 | unite - ba - cabin - airline - fleet | 21 | 525_unite_ba_cabin_airline |
| 526 | samsung - lotte - lee - choi - kunhee | 21 | 526_samsung_lotte_lee_choi |
| 527 | alsweady - ihat - iraqi - detainee - inquiry | 21 | 527_alsweady_ihat_iraqi_detainee |
| 528 | cholera - outbreak - gorongosa - haiti - sanitation | 20 | 528_cholera_outbreak_gorongosa_haiti |
| 529 | skirt - uniform - trouser - school - wear | 20 | 529_skirt_uniform_trouser_school |
| 530 | emojis - emoji - unicode - burge - emojipedia | 20 | 530_emojis_emoji_unicode_burge |
| 531 | alcohol - pricing - minimum - alcoholrelated - drink | 20 | 531_alcohol_pricing_minimum_alcoholrelated |
| 532 | 2022 - durban - games - commonwealth - cgf | 20 | 532_2022_durban_games_commonwealth |
| 533 | rhodes - statue - cape - student - uct | 20 | 533_rhodes_statue_cape_student |
| 534 | sleep - clock - mattress - sleeping - light | 20 | 534_sleep_clock_mattress_sleeping |
| 535 | bayoh - pirc - sheku - bayohs - anwar | 20 | 535_bayoh_pirc_sheku_bayohs |
| 536 | reef - coral - bleaching - reefs - unesco | 20 | 536_reef_coral_bleaching_reefs |
| 537 | simpsonkent - blake - amon - zachary - eastenders | 20 | 537_simpsonkent_blake_amon_zachary |
| 538 | sky - dark - lighting - light - pollution | 20 | 538_sky_dark_lighting_light |
| 539 | cav - safety - gutaj - hse - sofa | 20 | 539_cav_safety_gutaj_hse |
| 540 | kaepernick - anthem - 49ers - yall - kaepernicks | 20 | 540_kaepernick_anthem_49ers_yall |
| 541 | asylum - migrant - germany - seeker - german | 20 | 541_asylum_migrant_germany_seeker |
| 542 | cypriots - cyprus - cypriot - turkish - greek | 20 | 542_cypriots_cyprus_cypriot_turkish |
| 543 | 1mdb - najib - malaysia - malaysias - malaysian | 20 | 543_1mdb_najib_malaysia_malaysias |
| 544 | cardiff - street - arriva - queuing - stadium | 20 | 544_cardiff_street_arriva_queuing |
| 545 | dominica - grenada - kitts - jamaica - trinidad | 20 | 545_dominica_grenada_kitts_jamaica |
| 546 | divorce - chai - sharland - khoo - marriage | 20 | 546_divorce_chai_sharland_khoo |
| 547 | gladon - transfer - hornets - dedicated - page | 19 | 547_gladon_transfer_hornets_dedicated |
| 548 | megrahi - lockerbie - megrahis - libyan - bombing | 19 | 548_megrahi_lockerbie_megrahis_libyan |
| 549 | transgender - gay - military - scouts - erectile | 19 | 549_transgender_gay_military_scouts |
| 550 | janner - savile - cliff - allegation - jaconelli | 19 | 550_janner_savile_cliff_allegation |
| 551 | mueller - swift - swifts - muellers - skirt | 19 | 551_mueller_swift_swifts_muellers |
| 552 | payment - farmer - crofter - rural - nfu | 19 | 552_payment_farmer_crofter_rural |
| 553 | messi - messis - tax - defrauding - barcelona | 19 | 553_messi_messis_tax_defrauding |
| 554 | water - sewage - pollution - river - flushable | 19 | 554_water_sewage_pollution_river |
| 555 | bollywood - film - rajinikanth - kabali - indian | 19 | 555_bollywood_film_rajinikanth_kabali |
| 556 | gambling - betting - bet - barton - fa | 19 | 556_gambling_betting_bet_barton |
| 557 | alibaba - alibabas - taobao - ecommerce - online | 19 | 557_alibaba_alibabas_taobao_ecommerce |
| 558 | indian - antipiracy - machugh - advanfort - ship | 19 | 558_indian_antipiracy_machugh_advanfort |
| 559 | puma - helicopter - super - gearbox - grounded | 19 | 559_puma_helicopter_super_gearbox |
| 560 | astle - brain - nfl - concussion - cte | 19 | 560_astle_brain_nfl_concussion |
| 561 | bite - homeless - littlejohn - social - clooney | 19 | 561_bite_homeless_littlejohn_social |
| 562 | kyles - oban - newtonmore - camanachd - kingussie | 19 | 562_kyles_oban_newtonmore_camanachd |
| 563 | pspo - antisocial - pspos - highs - legal | 18 | 563_pspo_antisocial_pspos_highs |
| 564 | marriage - samesex - gay - civil - partnership | 18 | 564_marriage_samesex_gay_civil |
| 565 | kingsway - sgt - foss - lucas - bus | 18 | 565_kingsway_sgt_foss_lucas |
| 566 | mcareavey - michaela - masood - mauritius - cochran | 18 | 566_mcareavey_michaela_masood_mauritius |
| 567 | zerohours - contracts - contract - ons - flexibility | 18 | 567_zerohours_contracts_contract_ons |
| 568 | hanjin - shipping - cargo - container - ship | 18 | 568_hanjin_shipping_cargo_container |
| 569 | eurovision - jamala - ukraine - samoilova - crimea | 18 | 569_eurovision_jamala_ukraine_samoilova |
| 570 | clarke - shapps - cchq - bullying - feldman | 18 | 570_clarke_shapps_cchq_bullying |
| 571 | hie - enterprise - highlands - hies - islands | 18 | 571_hie_enterprise_highlands_hies |
| 572 | coin - mint - coins - design - circulation | 18 | 572_coin_mint_coins_design |
| 573 | port - kovari - whitworth - walgate - ports | 18 | 573_port_kovari_whitworth_walgate |
| 574 | depp - joyce - boo - pistol - quarantine | 18 | 574_depp_joyce_boo_pistol |
| 575 | indigenous - aboriginal - australians - australia - australian | 18 | 575_indigenous_aboriginal_australians_australia |
| 576 | universal - credit - benefit - claimant - duncan | 18 | 576_universal_credit_benefit_claimant |
| 577 | wemba - music - papa - congolese - musician | 18 | 577_wemba_music_papa_congolese |
| 578 | stephanie - inglis - vietnam - judo - daviot | 18 | 578_stephanie_inglis_vietnam_judo |
| 579 | bataclan - concert - band - paris - eagles | 18 | 579_bataclan_concert_band_paris |
| 580 | asa - advert - ad - advertising - adverts | 18 | 580_asa_advert_ad_advertising |
| 581 | daily - scotsman - courier - scottish - mail | 18 | 581_daily_scotsman_courier_scottish |
| 582 | leveson - press - charter - regulator - ipso | 18 | 582_leveson_press_charter_regulator |
| 583 | post - cwu - mail - branch - delungra | 17 | 583_post_cwu_mail_branch |
| 584 | diamond - carat - sapphire - sothebys - jewellery | 17 | 584_diamond_carat_sapphire_sothebys |
| 585 | robot - updated - robocup - robots - bst | 17 | 585_robot_updated_robocup_robots |
| 586 | laser - pilot - pilots - aircraft - cockpit | 17 | 586_laser_pilot_pilots_aircraft |
| 587 | indonesia - jakarta - indonesian - militant - naim | 17 | 587_indonesia_jakarta_indonesian_militant |
| 588 | chocolate - cadbury - nestle - toblerone - bar | 17 | 588_chocolate_cadbury_nestle_toblerone |
| 589 | burgess - rabbitohs - sydney - bath - nrl | 17 | 589_burgess_rabbitohs_sydney_bath |
| 590 | africans - selection - photo - elsewhere - africa | 17 | 590_africans_selection_photo_elsewhere |
| 591 | tax - avoidance - cameron - fink - blairmore | 17 | 591_tax_avoidance_cameron_fink |
| 592 | paper - belfast - telegraph - irish - primark | 17 | 592_paper_belfast_telegraph_irish |
| 593 | visa - 457 - h1b - budget - australia | 17 | 593_visa_457_h1b_budget |
| 594 | leaguebyleague - managerial - below - list - appear | 17 | 594_leaguebyleague_managerial_below_list |
| 595 | flag - fern - zealanders - design - zealand | 17 | 595_flag_fern_zealanders_design |
| 596 | driving - speeding - winn - speed - gibb | 17 | 596_driving_speeding_winn_speed |
| 597 | seeger - trump - song - trumps - springsteen | 17 | 597_seeger_trump_song_trumps |
| 598 | warrior - unmanned - joint - exercise - navy | 17 | 598_warrior_unmanned_joint_exercise |
| 599 | milk - fonterra - formula - infant - daigou | 17 | 599_milk_fonterra_formula_infant |
| 600 | flag - confederate - charleston - carolina - mississippi | 17 | 600_flag_confederate_charleston_carolina |
| 601 | regeni - egyptian - regenis - cairo - giulio | 17 | 601_regeni_egyptian_regenis_cairo |
| 602 | misconduct - pc - ipcc - munns - gross | 17 | 602_misconduct_pc_ipcc_munns |
| 603 | garment - factory - rana - plaza - bangladesh | 17 | 603_garment_factory_rana_plaza |
| 604 | nsi - bond - bonds - saver - rate | 17 | 604_nsi_bond_bonds_saver |
| 605 | sweat - escape - cuomo - prison - dannemora | 17 | 605_sweat_escape_cuomo_prison |
| 606 | maggi - noodle - nestle - noodles - instant | 17 | 606_maggi_noodle_nestle_noodles |
| 607 | orange - hall - strawletterdallon - attack - graffiti | 17 | 607_orange_hall_strawletterdallon_attack |
| 608 | named - person - swinney - supreme - no2np | 17 | 608_named_person_swinney_supreme |
| 609 | slade - bluebirds - cardiff - trollope - sheffield | 16 | 609_slade_bluebirds_cardiff_trollope |
| 610 | fog - airport - flight - heathrow - cancelled | 16 | 610_fog_airport_flight_heathrow |
| 611 | newport - rfc - dragons - rodney - wru | 16 | 611_newport_rfc_dragons_rodney |
| 612 | russian - jet - baltic - airspace - kuznetsov | 16 | 612_russian_jet_baltic_airspace |
| 613 | plainmoor - club - gulls - torquay - speedway | 16 | 613_plainmoor_club_gulls_torquay |
| 614 | odegaard - liga - atletico - rey - copa | 16 | 614_odegaard_liga_atletico_rey |
| 615 | cheating - bihar - exam - europol - rai | 16 | 615_cheating_bihar_exam_europol |
| 616 | americas - race - ainslie - 18002000 - oracle | 16 | 616_americas_race_ainslie_18002000 |
| 617 | gorsuch - garland - scalia - senate - republicans | 16 | 617_gorsuch_garland_scalia_senate |
| 618 | ford - toronto - fords - mayor - doug | 16 | 618_ford_toronto_fords_mayor |
| 619 | polanski - extradition - geimer - polish - polanskis | 16 | 619_polanski_extradition_geimer_polish |
| 620 | hpv - vaccination - vaccine - cervical - jcvi | 16 | 620_hpv_vaccination_vaccine_cervical |
| 621 | snowden - kong - hong - snowdens - asylum | 16 | 621_snowden_kong_hong_snowdens |
| 622 | japan - japanese - japans - abe - korea | 16 | 622_japan_japanese_japans_abe |
| 623 | seren - serens - pollock - inquest - coroner | 16 | 623_seren_serens_pollock_inquest |
| 624 | parryjones - severance - halsall - lieu - lcc | 16 | 624_parryjones_severance_halsall_lieu |
| 625 | plague - leprosy - disease - bubonic - rat | 16 | 625_plague_leprosy_disease_bubonic |
| 626 | hfea - embryo - jefferies - egg - loeb | 16 | 626_hfea_embryo_jefferies_egg |
| 627 | selfemployed - deliveroo - gig - courier - uber | 16 | 627_selfemployed_deliveroo_gig_courier |
| 628 | jewish - antisemitism - antisemitic - cst - hate | 16 | 628_jewish_antisemitism_antisemitic_cst |
| 629 | cake - gingerbread - baker - gebhart - icing | 16 | 629_cake_gingerbread_baker_gebhart |
| 630 | mcquire - sousse - rezgui - silence - tunisian | 16 | 630_mcquire_sousse_rezgui_silence |
| 631 | shkreli - daraprim - turing - pharmaceuticals - retrophin | 16 | 631_shkreli_daraprim_turing_pharmaceuticals |
| 632 | coventry - ricoh - acl - otium - sisu | 16 | 632_coventry_ricoh_acl_otium |
| 633 | radio - presenter - purves - breakfast - show | 16 | 633_radio_presenter_purves_breakfast |
| 634 | pricing - minimum - alcohol - swa - whisky | 16 | 634_pricing_minimum_alcohol_swa |
| 635 | quiz - quizzes - brainteaser - beatles - caldwell | 16 | 635_quiz_quizzes_brainteaser_beatles |
| 636 | muntari - racist - cagliari - serie - sarri | 15 | 636_muntari_racist_cagliari_serie |
| 637 | bomb - unexploded - evacuation - blitz - koblenz | 15 | 637_bomb_unexploded_evacuation_blitz |
| 638 | citigroup - revenue - jp - bank - fargo | 15 | 638_citigroup_revenue_jp_bank |
| 639 | infantino - fifa - eca - cup - tournament | 15 | 639_infantino_fifa_eca_cup |
| 640 | ansley - republic - danson - argentina - giselle | 15 | 640_ansley_republic_danson_argentina |
| 641 | bishop - ball - lewes - carey - church | 15 | 641_bishop_ball_lewes_carey |
| 642 | cpr - defibrillator - cardiac - compression - resuscitation | 15 | 642_cpr_defibrillator_cardiac_compression |
| 643 | guardiola - kompany - valenti - barcelona - pellegrini | 15 | 643_guardiola_kompany_valenti_barcelona |
| 644 | wreck - woolsgrove - cannon - ship - heritage | 15 | 644_wreck_woolsgrove_cannon_ship |
| 645 | heel - dress - wear - thorp - code | 15 | 645_heel_dress_wear_thorp |
| 646 | tomb - mummy - pyramid - tutankhamuns - scan | 15 | 646_tomb_mummy_pyramid_tutankhamuns |
| 647 | doddfrank - financial - volcker - banking - bank | 15 | 647_doddfrank_financial_volcker_banking |
| 648 | rateable - revaluation - rate - business - value | 15 | 648_rateable_revaluation_rate_business |
| 649 | - - - - | 15 | 649____ |
| 650 | poppy - 888246 - weeping - cummins - armistice | 15 | 650_poppy_888246_weeping_cummins |
| 651 | indigenous - aboriginal - trudeau - canadian - nations | 15 | 651_indigenous_aboriginal_trudeau_canadian |
| 652 | trump - farage - relationship - trumps - presidentelect | 15 | 652_trump_farage_relationship_trumps |
| 653 | grainger - culcheth - teague - gmp - graingers | 15 | 653_grainger_culcheth_teague_gmp |
| 654 | cathedral - seminary - hinterland - chapel - peters | 15 | 654_cathedral_seminary_hinterland_chapel |
| 655 | arran - ayrshire - crosshouse - maternity - review | 15 | 655_arran_ayrshire_crosshouse_maternity |
| 656 | bailey - cults - knife - gwynne - duguid | 15 | 656_bailey_cults_knife_gwynne |
| 657 | cloud - microsoft - yahoo - skype - nadella | 15 | 657_cloud_microsoft_yahoo_skype |
| 658 | deaf - bsl - language - sign - skelding | 15 | 658_deaf_bsl_language_sign |
| 659 | name - girls - boys - boy - popular | 15 | 659_name_girls_boys_boy |
| 660 | winterbourne - panorama - care - deanery - oesophageal | 15 | 660_winterbourne_panorama_care_deanery |
| 661 | satoshi - bitcoin - nakamoto - wright - bitcoins | 14 | 661_satoshi_bitcoin_nakamoto_wright |
| 662 | image - body - cosmetic - selfesteem - beresford | 14 | 662_image_body_cosmetic_selfesteem |
| 663 | pte - beasting - punishment - sgt - williams | 14 | 663_pte_beasting_punishment_sgt |
| 664 | knox - sollecito - kercher - perugia - guede | 14 | 664_knox_sollecito_kercher_perugia |
| 665 | ladies - hockey - johannesburg - ranked - harte | 14 | 665_ladies_hockey_johannesburg_ranked |
| 666 | suicide - suicides - samaritans - mental - suicidal | 14 | 666_suicide_suicides_samaritans_mental |
| 667 | bridge - lorry - southbound - sudbrook - lecco | 14 | 667_bridge_lorry_southbound_sudbrook |
| 668 | youth - ncs - ea - unison - young | 14 | 668_youth_ncs_ea_unison |
| 669 | rezaian - iran - iranian - namazi - bahais | 14 | 669_rezaian_iran_iranian_namazi |
| 670 | orgreave - miner - miners - rudd - inquiry | 14 | 670_orgreave_miner_miners_rudd |
| 671 | earthquake - bgs - tremor - magnitude - seismologist | 14 | 671_earthquake_bgs_tremor_magnitude |
| 672 | sport - baumgardt - wales - board - chair | 14 | 672_sport_baumgardt_wales_board |
| 673 | follow - - - - | 14 | 673_follow___ |
| 674 | cyber - chinese - hacking - china - ip | 14 | 674_cyber_chinese_hacking_china |
| 675 | onion - dosa - indian - schezwan - masala | 14 | 675_onion_dosa_indian_schezwan |
| 676 | g4s - medway - panorama - rainsbrook - staff | 14 | 676_g4s_medway_panorama_rainsbrook |
| 677 | afc - wimbledon - lyle - bury - dean | 14 | 677_afc_wimbledon_lyle_bury |
| 678 | dog - meat - yulin - animal - festival | 14 | 678_dog_meat_yulin_animal |
| 679 | haigh - gfh - dubai - uae - haighs | 14 | 679_haigh_gfh_dubai_uae |
| 680 | parking - hospital - charge - car - nhs | 14 | 680_parking_hospital_charge_car |
| 681 | airbnb - chesky - rent - airbnbs - botsman | 14 | 681_airbnb_chesky_rent_airbnbs |
| 682 | wall - keane - liberton - wallisbennett - keanes | 14 | 682_wall_keane_liberton_wallisbennett |
| 683 | tree - christmas - wassail - switchon - festive | 14 | 683_tree_christmas_wassail_switchon |
| 684 | call - 999 - caller - handler - calls | 14 | 684_call_999_caller_handler |
| 685 | ship - faro - tote - cruises - crew | 14 | 685_ship_faro_tote_cruises |
| 686 | prince - princes - letter - veto - charles | 14 | 686_prince_princes_letter_veto |
| 687 | cliff - coast - dorset - jurassic - rock | 14 | 687_cliff_coast_dorset_jurassic |
| 688 | purnama - jakarta - widodo - indonesia - rizieq | 14 | 688_purnama_jakarta_widodo_indonesia |
| 689 | pregnancy - abortion - rakh - termination - girl | 13 | 689_pregnancy_abortion_rakh_termination |
| 690 | dnar - maddisons - trust - baby - hospital | 13 | 690_dnar_maddisons_trust_baby |
| 691 | mallya - kingfisher - mallyas - businessman - ram | 13 | 691_mallya_kingfisher_mallyas_businessman |
| 692 | sampson - twickenham - rugby - england - maafu | 13 | 692_sampson_twickenham_rugby_england |
| 693 | hiroshima - nagasaki - bomb - atomic - kyoto | 13 | 693_hiroshima_nagasaki_bomb_atomic |
| 694 | ecclestone - gribkowsky - constantin - ecclestones - f1 | 13 | 694_ecclestone_gribkowsky_constantin_ecclestones |
| 695 | growth - construction - rd - nicei - output | 13 | 695_growth_construction_rd_nicei |
| 696 | bt - sky - premier - 4k - tv | 13 | 696_bt_sky_premier_4k |
| 697 | cardiff - region - bay - metro - swansea | 13 | 697_cardiff_region_bay_metro |
| 698 | munoz - airlines - airline - aviation - continental | 13 | 698_munoz_airlines_airline_aviation |
| 699 | nrl - widnes - super - purtell - rhinos | 13 | 699_nrl_widnes_super_purtell |
| 700 | mckeown - trolley - guinness - speed - record | 13 | 700_mckeown_trolley_guinness_speed |
| 701 | ethiopia - oromo - oromia - ethiopian - pankhurst | 13 | 701_ethiopia_oromo_oromia_ethiopian |
| 702 | weapon - airgun - firearm - licensing - licence | 13 | 702_weapon_airgun_firearm_licensing |
| 703 | mcdonalds - jollibee - restaurant - panera - fastfood | 13 | 703_mcdonalds_jollibee_restaurant_panera |
| 704 | listener - rajar - listeners - weekly - radio | 13 | 704_listener_rajar_listeners_weekly |
| 705 | santos - angola - mpla - unita - angolas | 13 | 705_santos_angola_mpla_unita |
| 706 | train - glenfinnan - viaduct - fasting - railway | 13 | 706_train_glenfinnan_viaduct_fasting |
| 707 | tattoo - tattooists - tattooing - piercing - tattooist | 13 | 707_tattoo_tattooists_tattooing_piercing |
| 708 | brizzi - semple - semples - pc - meth | 13 | 708_brizzi_semple_semples_pc |
| 709 | india - indian - china - chinese - border | 13 | 709_india_indian_china_chinese |
| 710 | saudi - arabia - king - camel - prince | 13 | 710_saudi_arabia_king_camel |
| 711 | balakrishnan - balakrishnans - commune - aravindan - cult | 13 | 711_balakrishnan_balakrishnans_commune_aravindan |
| 712 | turkington - smiley - race - shedden - btcc | 13 | 712_turkington_smiley_race_shedden |
| 713 | grenfell - kensington - pagetbrown - tower - aghlani | 13 | 713_grenfell_kensington_pagetbrown_tower |
| 714 | apd - passenger - tax - airport - duty | 13 | 714_apd_passenger_tax_airport |
| 715 | chua - insulin - saline - poisoning - nurse | 13 | 715_chua_insulin_saline_poisoning |
| 716 | extremism - radicalisation - teachers - teacher - extremist | 13 | 716_extremism_radicalisation_teachers_teacher |
| 717 | loan - undisclosed - unattached - free - qpr | 13 | 717_loan_undisclosed_unattached_free |
| 718 | gst - tax - jaitley - indias - mexico | 12 | 718_gst_tax_jaitley_indias |
| 719 | 888 - william - hill - bwin - gvc | 12 | 719_888_william_hill_bwin |
| 720 | nyomi - liam - rachel - fee - liams | 12 | 720_nyomi_liam_rachel_fee |
| 721 | stopandsearch - search - consensual - stop - searched | 12 | 721_stopandsearch_search_consensual_stop |
| 722 | keane - oneill - republic - oshea - squad | 12 | 722_keane_oneill_republic_oshea |
| 723 | marriage - samesex - abbott - gay - labor | 12 | 723_marriage_samesex_abbott_gay |
| 724 | expedition - pole - curie - luke - trek | 12 | 724_expedition_pole_curie_luke |
| 725 | eritrea - eritrean - seun - alhacen - migrant | 12 | 725_eritrea_eritrean_seun_alhacen |
| 726 | bg - shell - shells - oil - beurden | 12 | 726_bg_shell_shells_oil |
| 727 | monthbymonth - peachey - tip - calendar - finance | 12 | 727_monthbymonth_peachey_tip_calendar |
| 728 | 150000 - salary - 199999 - 249999 - talent | 12 | 728_150000_salary_199999_249999 |
| 729 | deutsche - bank - mortgagebacked - banks - 14bn | 12 | 729_deutsche_bank_mortgagebacked_banks |
| 730 | handstand - handy - makeyourmove - triceps - toning | 12 | 730_handstand_handy_makeyourmove_triceps |
| 731 | merkel - trump - wulff - german - angela | 12 | 731_merkel_trump_wulff_german |
| 732 | farook - marquez - mateen - malik - fbi | 12 | 732_farook_marquez_mateen_malik |
| 733 | agm - nonvoting - dumfries - directors - palmerston | 12 | 733_agm_nonvoting_dumfries_directors |
| 734 | notebook - poem - dylan - makars - thomas | 12 | 734_notebook_poem_dylan_makars |
| 735 | rating - hygiene - food - display - ratings | 12 | 735_rating_hygiene_food_display |
| 736 | arches - licensing - venue - licence - cromer | 12 | 736_arches_licensing_venue_licence |
| 737 | putin - trump - russia - dugin - russian | 12 | 737_putin_trump_russia_dugin |
| 738 | transfer - lukaku - hasenhuttl - naby - pele | 12 | 738_transfer_lukaku_hasenhuttl_naby |
| 739 | acid - madasani - potes - saa - grillot | 12 | 739_acid_madasani_potes_saa |
| 740 | unemployment - wales - rate - welsh - employment | 12 | 740_unemployment_wales_rate_welsh |
| 741 | hawkeye - goalline - goalref - fifa - technology | 12 | 741_hawkeye_goalline_goalref_fifa |
| 742 | claudia - lawrence - malyn - lawrences - disappearance | 12 | 742_claudia_lawrence_malyn_lawrences |
| 743 | whiter - abbs - newmarket - whiters - adamec | 12 | 743_whiter_abbs_newmarket_whiters |
| 744 | opm - cyber - hacker - china - clapper | 12 | 744_opm_cyber_hacker_china |
| 745 | swansea - sigurdsson - swans - llorente - jenkins | 12 | 745_swansea_sigurdsson_swans_llorente |
| 746 | forbes - wealth - richest - billionaire - maggard | 12 | 746_forbes_wealth_richest_billionaire |
| 747 | battery - batteries - lithium - lithiumair - lithiumion | 12 | 747_battery_batteries_lithium_lithiumair |
| 748 | osprey - nest - chick - ej - garten | 12 | 748_osprey_nest_chick_ej |
| 749 | occupation - guernsey - fumero - memorial - rushen | 11 | 749_occupation_guernsey_fumero_memorial |
| 750 | rollsroyce - engine - aerospace - rollsroyces - rolls | 11 | 750_rollsroyce_engine_aerospace_rollsroyces |
| 751 | imf - forecast - imfs - growth - economy | 11 | 751_imf_forecast_imfs_growth |
| 752 | ship - yangtze - eastern - cao - capsized | 11 | 752_ship_yangtze_eastern_cao |
| 753 | emperor - akihito - throne - imperial - naruhito | 11 | 753_emperor_akihito_throne_imperial |
| 754 | wsl - tynan - ladies - notts - womens | 11 | 754_wsl_tynan_ladies_notts |
| 755 | dog - personality - wolf - animal - horse | 11 | 755_dog_personality_wolf_animal |
| 756 | singapore - singapores - lee - singaporeans - pap | 11 | 756_singapore_singapores_lee_singaporeans |
| 757 | bawagarba - sepsis - amaro - hadiza - jack | 11 | 757_bawagarba_sepsis_amaro_hadiza |
| 758 | brooks - pincher - hacking - murdoch - pharo | 11 | 758_brooks_pincher_hacking_murdoch |
| 759 | camera - coppergate - lendal - a9 - lane | 11 | 759_camera_coppergate_lendal_a9 |
| 760 | coffee - fruit - grains - banana - avocado | 11 | 760_coffee_fruit_grains_banana |
| 761 | india - modi - modis - nuclear - indias | 11 | 761_india_modi_modis_nuclear |
| 762 | massey - salford - masseys - feud - shooting | 11 | 762_massey_salford_masseys_feud |
| 763 | climate - lowcarbon - energy - emission - change | 11 | 763_climate_lowcarbon_energy_emission |
| 764 | wilks - harehills - mallik - chapeltown - leeds | 11 | 764_wilks_harehills_mallik_chapeltown |
| 765 | hav - airlander - cardington - hangar - aircraft | 11 | 765_hav_airlander_cardington_hangar |
| 766 | fentanyl - drug - heroin - prescription - painkiller | 11 | 766_fentanyl_drug_heroin_prescription |
| 767 | pfizer - astrazeneca - akzo - ppg - takeover | 11 | 767_pfizer_astrazeneca_akzo_ppg |
| 768 | bloodhound - ssc - thrust - rocket - 800mph | 11 | 768_bloodhound_ssc_thrust_rocket |
| 769 | belfast - lacuna - scheme - watkin - courthouse | 11 | 769_belfast_lacuna_scheme_watkin |
| 770 | badminton - gilmour - sindhu - kirsty - medal | 11 | 770_badminton_gilmour_sindhu_kirsty |
| 771 | ring - wedding - ariel - sherrington - paahlsson | 11 | 771_ring_wedding_ariel_sherrington |
| 772 | stack - m20 - lorry - kent - crosschannel | 11 | 772_stack_m20_lorry_kent |
| 773 | mesh - incontinence - implant - prolapse - essure | 11 | 773_mesh_incontinence_implant_prolapse |
| 774 | flint - water - snyder - bottled - leached | 11 | 774_flint_water_snyder_bottled |
| 775 | lough - dredging - sand - durkan - notice | 11 | 775_lough_dredging_sand_durkan |
| 776 | playback - supported - device - media - weirarcher | 11 | 776_playback_supported_device_media |
| 777 | weibo - qin - chinese - mizuhara - dalai | 11 | 777_weibo_qin_chinese_mizuhara |
| 778 | cycling - event - galloway - dumfries - tour | 11 | 778_cycling_event_galloway_dumfries |
| 779 | wheelchair - bus - paulley - pushchair - disabled | 11 | 779_wheelchair_bus_paulley_pushchair |
| 780 | welsh - dems - lib - plaid - budget | 11 | 780_welsh_dems_lib_plaid |
| 781 | mirza - abertillery - blackmail - farhan - woman | 11 | 781_mirza_abertillery_blackmail_farhan |
| 782 | dress - wore - burlesque - costume - underwear | 11 | 782_dress_wore_burlesque_costume |
| 783 | neymar - psg - neymars - barcelona - brazilian | 11 | 783_neymar_psg_neymars_barcelona |
| 784 | sarao - saraos - navinder - sell - flash | 10 | 784_sarao_saraos_navinder_sell |
| 785 | nato - juncker - defence - eu - mattis | 10 | 785_nato_juncker_defence_eu |
| 786 | mobile - aws - amazon - computing - cloud | 10 | 786_mobile_aws_amazon_computing |
| 787 | transgender - vikki - prison - gender - hmp | 10 | 787_transgender_vikki_prison_gender |
| 788 | puerto - urdangarin - princess - cristina - rico | 10 | 788_puerto_urdangarin_princess_cristina |
| 789 | turkey - turkish - russian - russia - putin | 10 | 789_turkey_turkish_russian_russia |
| 790 | michelin - chef - restaurant - violier - hawker | 10 | 790_michelin_chef_restaurant_violier |
| 791 | mcgarry - thomson - snp - whip - plumbly | 10 | 791_mcgarry_thomson_snp_whip |
| 792 | pegida - dresden - bachmann - german - rally | 10 | 792_pegida_dresden_bachmann_german |
| 793 | ai - alphago - chess - sedol - deepmind | 10 | 793_ai_alphago_chess_sedol |
| 794 | bath - busker - parkandride - somerset - quays | 10 | 794_bath_busker_parkandride_somerset |
| 795 | clock - tower - chime - bell - bong | 10 | 795_clock_tower_chime_bell |
| 796 | decay - dental - tooth - teeth - oral | 10 | 796_decay_dental_tooth_teeth |
| 797 | tillerson - putin - trump - russian - russia | 10 | 797_tillerson_putin_trump_russian |
| 798 | roma - genoa - totti - serie - landonio | 10 | 798_roma_genoa_totti_serie |
| 799 | gb - womens - olympics - team - football | 10 | 799_gb_womens_olympics_team |
| 800 | hyperloop - musk - pod - elon - transportation | 10 | 800_hyperloop_musk_pod_elon |
| 801 | xi - buckingham - chinese - visit - banquet | 10 | 801_xi_buckingham_chinese_visit |
| 802 | hajj - pilgrim - stampede - saudi - mina | 10 | 802_hajj_pilgrim_stampede_saudi |
| 803 | dj - serpellmorris - derek - glastonbury - thornbury | 10 | 803_dj_serpellmorris_derek_glastonbury |
| 804 | burton - derby - albion - ince - barnsley | 10 | 804_burton_derby_albion_ince |
| 805 | wheat - crop - harvest - food - yield | 10 | 805_wheat_crop_harvest_food |
| 806 | kaufman - latham - leyonhjelm - streeter - bew | 10 | 806_kaufman_latham_leyonhjelm_streeter |
| 807 | ofcom - belo - itv - broadcast - contestant | 10 | 807_ofcom_belo_itv_broadcast |
| 808 | cassano - bayern - passi - verona - bielsa | 9 | 808_cassano_bayern_passi_verona |
| 809 | egg - fipronil - eggs - fsa - dutch | 9 | 809_egg_fipronil_eggs_fsa |
| 810 | g4s - serco - tagging - sfo - overcharged | 9 | 810_g4s_serco_tagging_sfo |
| 811 | tapie - lagarde - lyonnais - lagardes - 404m | 9 | 811_tapie_lagarde_lyonnais_lagardes |
| 812 | iwf - image - hash - images - abuse | 9 | 812_iwf_image_hash_images |
| 813 | larne - maxwell - ciarn - psni - antipersonnel | 9 | 813_larne_maxwell_ciarn_psni |
| 814 | wettlaufer - cappuccini - azeez - anaesthetist - tunbridge | 9 | 814_wettlaufer_cappuccini_azeez_anaesthetist |
| 815 | ifa - portadown - elebert - carrick - warrenpoint | 9 | 815_ifa_portadown_elebert_carrick |
| 816 | turkey - erdogan - turkeys - visafree - eu | 9 | 816_turkey_erdogan_turkeys_visafree |
| 817 | daniels - krezolek - luczak - pathan - daniel | 9 | 817_daniels_krezolek_luczak_pathan |
| 818 | shoreham - airshow - caa - display - hawker | 9 | 818_shoreham_airshow_caa_display |
| 819 | tuam - secours - galway - bon - baby | 9 | 819_tuam_secours_galway_bon |
| 820 | forensic - fss - forensics - science - dna | 9 | 820_forensic_fss_forensics_science |
| 821 | violence - women - mexico - mabel - woman | 9 | 821_violence_women_mexico_mabel |
| 822 | brownhill - tomlin - raynor - clough - campbellryce | 9 | 822_brownhill_tomlin_raynor_clough |
| 823 | breastfeeding - claridges - breastfeed - baby - frangou | 9 | 823_breastfeeding_claridges_breastfeed_baby |
| 824 | theatre - arts - venue - henley - guild | 9 | 824_theatre_arts_venue_henley |
| 825 | saudi - arabia - iran - shia - bahrain | 9 | 825_saudi_arabia_iran_shia |
| 826 | corporation - tax - stormont - ireland - northern | 9 | 826_corporation_tax_stormont_ireland |
| 827 | adani - mine - coal - galilee - queensland | 9 | 827_adani_mine_coal_galilee |
| 828 | bridgend - ford - engine - plant - mini | 9 | 828_bridgend_ford_engine_plant |
| 829 | ets - exam - toeic - cscs - sia | 9 | 829_ets_exam_toeic_cscs |
| 830 | guiana - cgt - french - paris - strike | 9 | 830_guiana_cgt_french_paris |
| 831 | ntw - music - wno - mcgrath - cleo | 9 | 831_ntw_music_wno_mcgrath |
| 832 | laden - bin - zawahiri - ladens - alqaeda | 9 | 832_laden_bin_zawahiri_ladens |
| 833 | dizaei - uae - badawi - lash - alislah | 9 | 833_dizaei_uae_badawi_lash |
| 834 | amazon - aws - amazons - boutline - cloud | 9 | 834_amazon_aws_amazons_boutline |
| 835 | plane - turbulence - passenger - medan - crashed | 9 | 835_plane_turbulence_passenger_medan |
| 836 | firearm - defraine - shilling - gun - skorpion | 9 | 836_firearm_defraine_shilling_gun |
| 837 | dalai - tibet - tibetan - lama - tibetans | 9 | 837_dalai_tibet_tibetan_lama |
| 838 | robot - dynamics - robotics - raibert - robots | 9 | 838_robot_dynamics_robotics_raibert |
| 839 | itv - crozier - revenue - talpa - advertising | 9 | 839_itv_crozier_revenue_talpa |
| 840 | gay - samesex - marriage - italy - grech | 9 | 840_gay_samesex_marriage_italy |
| 841 | sweden - norway - switzerland - finland - denmark | 9 | 841_sweden_norway_switzerland_finland |
| 842 | pitt - jolie - paltrow - divorce - married | 9 | 842_pitt_jolie_paltrow_divorce |
| 843 | netflix - hbo - svod - subscriber - cable | 8 | 843_netflix_hbo_svod_subscriber |
| 844 | taiwan - china - taiwans - taiwanese - beijing | 8 | 844_taiwan_china_taiwans_taiwanese |
| 845 | bundy - rancher - refuge - federal - oregon | 8 | 845_bundy_rancher_refuge_federal |
| 846 | methamphetamine - drug - ice - australian - seizure | 8 | 846_methamphetamine_drug_ice_australian |
| 847 | thomson - hales - snp - whip - thomsons | 8 | 847_thomson_hales_snp_whip |
| 848 | mail - postman - postcode - dog - delivery | 8 | 848_mail_postman_postcode_dog |
| 849 | powell - gibbons - judo - judoka - rio | 8 | 849_powell_gibbons_judo_judoka |
| 850 | fa - rabbatts - reform - governance - fas | 8 | 850_fa_rabbatts_reform_governance |
| 851 | gezi - taksim - istanbul - erdogan - protester | 8 | 851_gezi_taksim_istanbul_erdogan |
| 852 | thompson - llanwrda - countersued - brewster - lenny | 8 | 852_thompson_llanwrda_countersued_brewster |
| 853 | exeter - stevenage - mansfield - doncaster - luton | 8 | 853_exeter_stevenage_mansfield_doncaster |
| 854 | burntwood - stephens - stephen - cancer - keech | 8 | 854_burntwood_stephens_stephen_cancer |
| 855 | coptic - tawadros - church - monastery - christians | 8 | 855_coptic_tawadros_church_monastery |
| 856 | ramblers - path - peak - trail - backpack | 8 | 856_ramblers_path_peak_trail |
| 857 | hmrc - purplebricks - dwp - jobcentre - agent | 8 | 857_hmrc_purplebricks_dwp_jobcentre |
| 858 | equality - discrimination - lgbti - bisson - tatchell | 8 | 858_equality_discrimination_lgbti_bisson |
| 859 | moira - gartshore - moiras - coatbridge - 1957 | 8 | 859_moira_gartshore_moiras_coatbridge |
| 860 | anbang - marriott - starwood - blackstone - waldorf | 8 | 860_anbang_marriott_starwood_blackstone |
| 861 | visa - poststudy - brains - brain - laggan | 8 | 861_visa_poststudy_brains_brain |
| 862 | minnock - ethan - wildblood - minnocks - ethans | 8 | 862_minnock_ethan_wildblood_minnocks |
| 863 | badminton - basketball - sport - achara - funding | 8 | 863_badminton_basketball_sport_achara |
| 864 | visitor - attraction - museum - visited - heritage | 8 | 864_visitor_attraction_museum_visited |
| 865 | hmic - child - exploitation - constabulary - protection | 8 | 865_hmic_child_exploitation_constabulary |
| 866 | pride - bisexual - event - festival - lesbian | 8 | 866_pride_bisexual_event_festival |
| 867 | baton - relay - torch - commonwealth - games | 8 | 867_baton_relay_torch_commonwealth |
| 868 | holbrook - leigh - keiron - helens - saints | 8 | 868_holbrook_leigh_keiron_helens |
| 869 | curling - rink - muirhead - gold - freestyle | 8 | 869_curling_rink_muirhead_gold |
| 870 | miffy - pooh - winniethepooh - rabbit - potter | 8 | 870_miffy_pooh_winniethepooh_rabbit |
| 871 | gaal - lampard - mourinho - van - gaals | 8 | 871_gaal_lampard_mourinho_van |
| 872 | pcs - nmw - museum - museums - weekend | 8 | 872_pcs_nmw_museum_museums |
| 873 | alexa - amazon - cerf - google - limp | 8 | 873_alexa_amazon_cerf_google |
| 874 | rio - landless - pitanguy - temer - brazils | 8 | 874_rio_landless_pitanguy_temer |
| 875 | coleman - colemans - gunter - euro - wales | 7 | 875_coleman_colemans_gunter_euro |
| 876 | tower - grenfell - fire - kensington - 24storey | 7 | 876_tower_grenfell_fire_kensington |
| 877 | asylum - quebec - canada - refugee - manitoba | 7 | 877_asylum_quebec_canada_refugee |
| 878 | s4c - carmarthen - egin - trinity - welsh | 7 | 878_s4c_carmarthen_egin_trinity |
| 879 | grimsby - crawley - town - shrewsbury - argyle | 7 | 879_grimsby_crawley_town_shrewsbury |
| 880 | jonesbishop - quins - trinity - pulver - sardis | 7 | 880_jonesbishop_quins_trinity_pulver |
| 881 | lerner - villa - garde - villas - manager | 7 | 881_lerner_villa_garde_villas |
| 882 | ivf - fertility - ccgs - cycle - treatment | 7 | 882_ivf_fertility_ccgs_cycle |
| 883 | allotment - lochee - uckfield - weymouth - pub | 7 | 883_allotment_lochee_uckfield_weymouth |
| 884 | toilet - harleston - twinning - urinating - apgid | 7 | 884_toilet_harleston_twinning_urinating |
| 885 | breast - cancer - mastectomy - knockers - knitted | 7 | 885_breast_cancer_mastectomy_knockers |
| 886 | rubbish - beirut - lebanon - salam - fenianos | 7 | 886_rubbish_beirut_lebanon_salam |
| 887 | rfl - bulls - super - bradford - club | 7 | 887_rfl_bulls_super_bradford |
| 888 | csl - evergrande - china - chadwick - chinese | 7 | 888_csl_evergrande_china_chadwick |
| 889 | snowden - nsa - intelligence - surveillance - spy | 7 | 889_snowden_nsa_intelligence_surveillance |
| 890 | pshe - sre - education - sex - bullying | 7 | 890_pshe_sre_education_sex |
| 891 | sexual - nspcc - sexting - abuse - child | 7 | 891_sexual_nspcc_sexting_abuse |
| 892 | titanic - hichens - nilsson - ship - aldridge | 7 | 892_titanic_hichens_nilsson_ship |
| 893 | hampstead - blackening - injuries - hgv - blairingone | 7 | 893_hampstead_blackening_injuries_hgv |
| 894 | bbcbreaking - fullest - breaking - refresh - alerts | 7 | 894_bbcbreaking_fullest_breaking_refresh |
| 895 | meeke - citroen - breen - ogier - rally | 7 | 895_meeke_citroen_breen_ogier |
| 896 | czech - havel - polish - vaclav - havels | 7 | 896_czech_havel_polish_vaclav |
| 897 | shia - mosque - adhamiya - kuwait - saudi | 7 | 897_shia_mosque_adhamiya_kuwait |
| 898 | strachan - chipolina - mcrae - brown - slovakia | 7 | 898_strachan_chipolina_mcrae_brown |
| 899 | tax - republicans - democrats - congress - senate | 7 | 899_tax_republicans_democrats_congress |
| 900 | cyprus - cypriot - levy - bank - bailout | 7 | 900_cyprus_cypriot_levy_bank |
| 901 | carnival - notting - hill - event - carnivals | 7 | 901_carnival_notting_hill_event |
| 902 | noakes - shep - jupp - daliso - quiz | 7 | 902_noakes_shep_jupp_daliso |
| 903 | watford - taylor - graham - villa - elton | 7 | 903_watford_taylor_graham_villa |
| 904 | rally - rallying - kextreme - nrw - denbighshire | 7 | 904_rally_rallying_kextreme_nrw |
| 905 | fca - saver - lending - rate - arrears | 7 | 905_fca_saver_lending_rate |
| 906 | bbl - paternostro - riders - cowan - trophy | 6 | 906_bbl_paternostro_riders_cowan |
| 907 | extinction - capitanian - carbon - spherule - anthropocene | 6 | 907_extinction_capitanian_carbon_spherule |
| 908 | greer - broadfoot - midfielder - signing - mckee | 6 | 908_greer_broadfoot_midfielder_signing |
| 909 | hull - tanks - coliseum - culture - ferens | 6 | 909_hull_tanks_coliseum_culture |
| 910 | rat - pest - coombs - pub - property | 6 | 910_rat_pest_coombs_pub |
| 911 | sale - ons - retail - dfs - volume | 6 | 911_sale_ons_retail_dfs |
| 912 | wallasey - pc - czyz - phillips - ojedarodriguez | 6 | 912_wallasey_pc_czyz_phillips |
| 913 | buy - mortgage - renting - deposit - property | 6 | 913_buy_mortgage_renting_deposit |
| 914 | vinicius - cavani - bayern - transfer - boateng | 6 | 914_vinicius_cavani_bayern_transfer |
| 915 | ireland - trade - esri - iem - northern | 6 | 915_ireland_trade_esri_iem |
| 916 | music - orchestra - instrument - endowment - teaching | 6 | 916_music_orchestra_instrument_endowment |
| 917 | mental - triage - health - custody - kent | 6 | 917_mental_triage_health_custody |
| 918 | citizenship - dual - senator - citizen - ludlam | 6 | 918_citizenship_dual_senator_citizen |
| 919 | flower - flowered - garden - arum - botanic | 6 | 919_flower_flowered_garden_arum |
| 920 | rainsy - hun - sen - cambodia - penh | 6 | 920_rainsy_hun_sen_cambodia |
| 921 | boxer - boxing - bout - taylor - olympic | 6 | 921_boxer_boxing_bout_taylor |
| 922 | zabel - sketch - courtroom - artist - elveden | 6 | 922_zabel_sketch_courtroom_artist |
| 923 | westley - bentley - baraclough - gregory - club | 6 | 923_westley_bentley_baraclough_gregory |
| 924 | lochte - feigen - olympic - bentz - swimmer | 6 | 924_lochte_feigen_olympic_bentz |
| 925 | ticket - ticketmaster - resale - ticketing - tout | 6 | 925_ticket_ticketmaster_resale_ticketing |
| 926 | merger - boerses - ao - rentokil - deutsche | 6 | 926_merger_boerses_ao_rentokil |
| 927 | restoration - peer - mps - westminster - palace | 6 | 927_restoration_peer_mps_westminster |
| 928 | payphones - bt - kiosk - phone - payphone | 6 | 928_payphones_bt_kiosk_phone |
| 929 | brady - hindley - rhattigan - ashworth - bradys | 6 | 929_brady_hindley_rhattigan_ashworth |
| 930 | turnerconn - midwife - pip - childbirth - maternity | 6 | 930_turnerconn_midwife_pip_childbirth |
| 931 | mayoral - liverpool - rotheram - mayor - region | 6 | 931_mayoral_liverpool_rotheram_mayor |
| 932 | balcony - berkeley - donohoe - lorcn - irish | 6 | 932_balcony_berkeley_donohoe_lorcn |
| 933 | sheriff - driving - goto - kozlowski - duncan | 6 | 933_sheriff_driving_goto_kozlowski |
| 934 | utv - stv - itv - channel - pitts | 6 | 934_utv_stv_itv_channel |
| 935 | bonar - prostasia - steroid - adderall - ukad | 6 | 935_bonar_prostasia_steroid_adderall |
| 936 | worcester - kuper - stourbridge - promotion - football | 6 | 936_worcester_kuper_stourbridge_promotion |
| 937 | cellino - fun88 - marinakis - rosler - leeds | 6 | 937_cellino_fun88_marinakis_rosler |
| 938 | alamgir - terrorism - arranging - istiak - ziaur | 6 | 938_alamgir_terrorism_arranging_istiak |
| 939 | aqap - mukalla - yemen - alqaeda - zinjibar | 6 | 939_aqap_mukalla_yemen_alqaeda |
| 940 | mysportingsoundtrack - stanford - live - inverdale - 060009005 | 6 | 940_mysportingsoundtrack_stanford_live_inverdale |
| 941 | ramsey - moldova - coleman - arsenals - arsenal | 6 | 941_ramsey_moldova_coleman_arsenals |
| 942 | water - antiwater - irish - billing - euro | 6 | 942_water_antiwater_irish_billing |
| 943 | mackintosh - gsa - projector - pagepark - restoration | 6 | 943_mackintosh_gsa_projector_pagepark |
| 944 | trump - waterboarding - republican - gorka - cpac | 6 | 944_trump_waterboarding_republican_gorka |
| 945 | pendle - witches - museum - dorchester - spooks | 6 | 945_pendle_witches_museum_dorchester |
| 946 | bloomfield - melanoma - sun - colin - uv | 6 | 946_bloomfield_melanoma_sun_colin |
| 947 | sinai - morsi - elarish - cairo - militant | 6 | 947_sinai_morsi_elarish_cairo |
| 948 | nfl - rams - raiders - chargers - jaguars | 6 | 948_nfl_rams_raiders_chargers |
| 949 | chile - peru - bolivia - chilean - peruvian | 6 | 949_chile_peru_bolivia_chilean |
| 950 | sony - sonys - pascal - rudin - lizard | 6 | 950_sony_sonys_pascal_rudin |
| 951 | xi - china - li - communist - chinas | 6 | 951_xi_china_li_communist |
| 952 | tunny - lobban - puzzle - whetter - turing | 6 | 952_tunny_lobban_puzzle_whetter |
| 953 | beeks - quigley - tozer - oli - twoyear | 5 | 953_beeks_quigley_tozer_oli |
| 954 | bach - olympics - athlete - olympic - floorball | 5 | 954_bach_olympics_athlete_olympic |
| 955 | marriage - samesex - gay - equality - ireland | 5 | 955_marriage_samesex_gay_equality |
| 956 | ennismore - hotel - apex - gleneagles - oswestry | 5 | 956_ennismore_hotel_apex_gleneagles |
| 957 | sabre - engine - skylon - rel - precooler | 5 | 957_sabre_engine_skylon_rel |
| 958 | lukaku - mourinho - jese - everton - trafford | 5 | 958_lukaku_mourinho_jese_everton |
| 959 | mccollum - peru - cocaine - reid - peruvian | 5 | 959_mccollum_peru_cocaine_reid |
| 960 | internet - cac - firewall - cyberspace - gateway | 5 | 960_internet_cac_firewall_cyberspace |
| 961 | jailed - hobbs - m5 - burmantofts - driver | 5 | 961_jailed_hobbs_m5_burmantofts |
| 962 | grindelwald - beasts - dumbledore - potter - wizard | 5 | 962_grindelwald_beasts_dumbledore_potter |
| 963 | carta - magna - 1215 - charter - copy | 5 | 963_carta_magna_1215_charter |
| 964 | pope - francis - easter - urbi - orbi | 5 | 964_pope_francis_easter_urbi |
| 965 | unionist - paramilitary - ira - sinn - loyalist | 5 | 965_unionist_paramilitary_ira_sinn |
| 966 | seawright - gloag - hoonjan - sloane - martin | 5 | 966_seawright_gloag_hoonjan_sloane |
| 967 | 2024 - ioc - budapest - bid - 2028 | 5 | 967_2024_ioc_budapest_bid |
| 968 | vat - calbee - soba - pasty - bgf | 5 | 968_vat_calbee_soba_pasty |
| 969 | child - crichton - referral - nspcc - parent | 5 | 969_child_crichton_referral_nspcc |
| 970 | eurozone - esm - bailouts - bailout - efsm | 5 | 970_eurozone_esm_bailouts_bailout |
| 971 | skeoch - avivas - swip - outflow - pitheavlis | 5 | 971_skeoch_avivas_swip_outflow |
| 972 | sky - skys - cable - settop - channel | 5 | 972_sky_skys_cable_settop |
| 973 | interrogation - cia - torture - waterboarding - zubaydah | 5 | 973_interrogation_cia_torture_waterboarding |
| 974 | hatherley - lda - brutalust - dandara - postwar | 5 | 974_hatherley_lda_brutalust_dandara |
| 975 | khmer - rouge - chea - nuon - cambodia | 5 | 975_khmer_rouge_chea_nuon |
| 976 | tenby - cheruiyot - marathon - hawkins - ironman | 5 | 976_tenby_cheruiyot_marathon_hawkins |
| 977 | zaidi - wolfpack - burnett - melville - grading | 5 | 977_zaidi_wolfpack_burnett_melville |
| 978 | cothill - nel - clennel - irene - mrs | 5 | 978_cothill_nel_clennel_irene |
| 979 | chelsea - demichelis - pellegrini - martn - bertrand | 5 | 979_chelsea_demichelis_pellegrini_martn |
| 980 | gamergate - sarkeesian - harassment - online - feminist | 5 | 980_gamergate_sarkeesian_harassment_online |
| 981 | model - catwalk - frankum - obese - roxane | 5 | 981_model_catwalk_frankum_obese |
| 982 | hoy - kennaugh - manx - cycling - mans | 5 | 982_hoy_kennaugh_manx_cycling |
| 983 | racism - frimpong - racist - zenit - russian | 5 | 983_racism_frimpong_racist_zenit |
| 984 | ballas - fidler - bankstown - wokingham - marion | 5 | 984_ballas_fidler_bankstown_wokingham |
| 985 | canal - waterway - oakham - towpath - lock | 5 | 985_canal_waterway_oakham_towpath |
| 986 | refugee - displaced - asylum - francken - migrant | 5 | 986_refugee_displaced_asylum_francken |
| 987 | merger - flintshire - andrews - wlga - wrexham | 5 | 987_merger_flintshire_andrews_wlga |
| 988 | vat - sanitary - rate - tampon - eu | 5 | 988_vat_sanitary_rate_tampon |
| 989 | tunisian - muhammad - hijab - tunisians - tunisias | 5 | 989_tunisian_muhammad_hijab_tunisians |
| 990 | sophies - microcephaly - laney - screening - abiageal | 5 | 990_sophies_microcephaly_laney_screening |
| 991 | zika - virus - rio - verrill - halep | 5 | 991_zika_virus_rio_verrill |
| 992 | adebolajo - lapshyn - conington - rigby - mosque | 5 | 992_adebolajo_lapshyn_conington_rigby |
| 993 | glenfield - surgery - heart - openheart - caithness | 5 | 993_glenfield_surgery_heart_openheart |
| 994 | wsl - pfa - chambers - stoney - passmoor | 5 | 994_wsl_pfa_chambers_stoney |
| 995 | efl - competition - invitation - under21 - sattelmaier | 5 | 995_efl_competition_invitation_under21 |
| 996 | hinds - helicopter - tandragee - tvaa - ambulance | 5 | 996_hinds_helicopter_tandragee_tvaa |
| 997 | mental - nsft - norfolk - health - trust | 5 | 997_mental_nsft_norfolk_health |
| 998 | un - syrians - syria - humanitarian - syrian | 5 | 998_un_syrians_syria_humanitarian |
| 999 | super - segeyaro - mcdermott - rhinos - leeds | 5 | 999_super_segeyaro_mcdermott_rhinos |
| 1000 | climate - emission - carbon - hulot - indcs | 5 | 1000_climate_emission_carbon_hulot |
| 1001 | homelessness - accommodation - homeless - temporary - household | 5 | 1001_homelessness_accommodation_homeless_temporary |
| 1002 | pyrgos - sharks - horne - coetzee - schmidt | 5 | 1002_pyrgos_sharks_horne_coetzee |
| 1003 | outlander - film - visitscotland - movie - filming | 5 | 1003_outlander_film_visitscotland_movie |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
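These settings map directly onto BERTopic's constructor arguments. As a rough sketch, a model with the same configuration could be instantiated as below before calling `fit_transform`; the embedding backend is an assumption, since the card does not state which sentence-transformers model was used.
```python
from bertopic import BERTopic

# Sketch only: mirrors the hyperparameters listed above.
# "all-MiniLM-L6-v2" is an assumed embedding model, not taken from the card.
topic_model = BERTopic(
    embedding_model="all-MiniLM-L6-v2",
    calculate_probabilities=True,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)
# topics, probs = topic_model.fit_transform(documents)
```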
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.10.12
|
AY00/dqn-SpaceInvadersNoFrameskip-v4
|
AY00
| 2023-08-19T22:07:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-19T22:06:38Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 545.50 +/- 107.62
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AY00 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AY00 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AY00
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.00011),
('learning_starts', 100000),
('n_timesteps', 1200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
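Outside the RL Zoo scripts, the downloaded checkpoint can also be loaded directly with Stable Baselines3. A minimal sketch, assuming the zip saved by `load_from_hub` lives under `logs/` (the exact sub-path is an assumption) and that the Atari ROMs are installed:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed path: point this at wherever rl_zoo3.load_from_hub placed the zip.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the training setup: AtariWrapper preprocessing plus 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```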
|
KingKazma/xsum_6789_50000_25000_v1_train
|
KingKazma
| 2023-08-19T21:44:55Z | 3 | 1 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-08-19T21:44:54Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# xsum_6789_50000_25000_v1_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/xsum_6789_50000_25000_v1_train")
topic_model.get_topic_info()
```
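Beyond reading the topic table, the loaded model can assign topics to unseen text. A brief sketch continuing from the snippet above (the example sentences are invented for illustration, and `transform` needs the embedding model to be available):
```python
new_docs = [
    "The home side won the league match with a late goal.",
    "MPs clashed over the referendum result in parliament.",
]
# Returns one topic id per document, plus topic probabilities
# (calculate_probabilities was enabled for this model).
topics, probs = topic_model.transform(new_docs)
print(topics)
print(topic_model.get_topic(topics[0]))  # keyword/score pairs for the first prediction
```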
## Topic overview
* Number of topics: 255
* Number of training documents: 50000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - mr - police - people - would | 5 | -1_said_mr_police_people |
| 0 | league - goal - win - game - foul | 24458 | 0_league_goal_win_game |
| 1 | labour - eu - party - vote - referendum | 7343 | 1_labour_eu_party_vote |
| 2 | olympic - athlete - race - sport - gold | 1358 | 2_olympic_athlete_race_sport |
| 3 | cricket - wicket - england - test - batsman | 1144 | 3_cricket_wicket_england_test |
| 4 | school - education - teacher - pupil - student | 781 | 4_school_education_teacher_pupil |
| 5 | rail - transport - rmt - train - bridge | 482 | 5_rail_transport_rmt_train |
| 6 | nhs - care - health - patient - hospital | 477 | 6_nhs_care_health_patient |
| 7 | boko - haram - president - african - africa | 471 | 7_boko_haram_president_african |
| 8 | syrian - syria - assad - rebel - iraq | 448 | 8_syrian_syria_assad_rebel |
| 9 | fire - blaze - smoke - firefighter - rescue | 345 | 9_fire_blaze_smoke_firefighter |
| 10 | murray - wimbledon - tennis - slam - seed | 290 | 10_murray_wimbledon_tennis_slam |
| 11 | film - actor - star - movie - award | 266 | 11_film_actor_star_movie |
| 12 | dup - sinn - fin - ireland - northern | 258 | 12_dup_sinn_fin_ireland |
| 13 | fight - boxing - champion - title - fury | 257 | 13_fight_boxing_champion_title |
| 14 | crash - road - collision - driver - car | 247 | 14_crash_road_collision_driver |
| 15 | mercedes - hamilton - f1 - race - rosberg | 238 | 15_mercedes_hamilton_f1_race |
| 16 | coastguard - lifeboat - rescue - boat - rnli | 235 | 16_coastguard_lifeboat_rescue_boat |
| 17 | china - chinese - hong - kong - chinas | 230 | 17_china_chinese_hong_kong |
| 18 | ukraine - russian - russia - ukrainian - putin | 221 | 18_ukraine_russian_russia_ukrainian |
| 19 | taliban - pakistan - afghan - pakistani - afghanistan | 218 | 19_taliban_pakistan_afghan_pakistani |
| 20 | mcilroy - golf - birdie - open - par | 218 | 20_mcilroy_golf_birdie_open |
| 21 | dog - animal - dogs - cat - rspca | 208 | 21_dog_animal_dogs_cat |
| 22 | data - security - nsa - computer - malware | 207 | 22_data_security_nsa_computer |
| 23 | cancer - patient - treatment - disease - cell | 198 | 23_cancer_patient_treatment_disease |
| 24 | maduro - venezuela - mexico - morales - president | 198 | 24_maduro_venezuela_mexico_morales |
| 25 | energy - climate - gas - wind - carbon | 197 | 25_energy_climate_gas_wind |
| 26 | sexual - indecent - court - assault - victim | 182 | 26_sexual_indecent_court_assault |
| 27 | sale - store - retail - tesco - retailer | 174 | 27_sale_store_retail_tesco |
| 28 | marriage - church - bishop - samesex - cardinal | 172 | 28_marriage_church_bishop_samesex |
| 29 | album - song - music - band - chart | 170 | 29_album_song_music_band |
| 30 | apple - samsung - phone - technology - mobile | 163 | 30_apple_samsung_phone_technology |
| 31 | trump - republican - clinton - republicans - mr | 156 | 31_trump_republican_clinton_republicans |
| 32 | yn - ar - wedi - ei - mae | 148 | 32_yn_ar_wedi_ei |
| 33 | ebola - virus - vaccine - outbreak - zika | 148 | 33_ebola_virus_vaccine_outbreak |
| 34 | planet - particle - earth - space - universe | 143 | 34_planet_particle_earth_space |
| 35 | flood - flooding - water - weather - rain | 139 | 35_flood_flooding_water_weather |
| 36 | migrant - refugee - asylum - hungary - migrants | 138 | 36_migrant_refugee_asylum_hungary |
| 37 | korea - north - korean - kim - missile | 130 | 37_korea_north_korean_kim |
| 38 | paris - french - attack - france - police | 122 | 38_paris_french_attack_france |
| 39 | memorial - war - battle - soldier - regiment | 120 | 39_memorial_war_battle_soldier |
| 40 | plane - flight - aircraft - pilot - airport | 112 | 40_plane_flight_aircraft_pilot |
| 41 | man - det - incident - wearing - police | 111 | 41_man_det_incident_wearing |
| 42 | art - painting - artist - exhibition - gallery | 109 | 42_art_painting_artist_exhibition |
| 43 | growth - rate - economy - inflation - bank | 106 | 43_growth_rate_economy_inflation |
| 44 | prison - prisoner - prisons - offender - prisoners | 104 | 44_prison_prisoner_prisons_offender |
| 45 | bank - banking - barclays - hsbc - rbs | 103 | 45_bank_banking_barclays_hsbc |
| 46 | earthquake - quake - nepal - magnitude - hurricane | 101 | 46_earthquake_quake_nepal_magnitude |
| 47 | shooting - police - officer - black - gun | 100 | 47_shooting_police_officer_black |
| 48 | greece - greek - eurozone - bailout - greeces | 99 | 48_greece_greek_eurozone_bailout |
| 49 | housing - price - property - house - home | 94 | 49_housing_price_property_house |
| 50 | morsi - egypt - brotherhood - egyptian - cairo | 93 | 50_morsi_egypt_brotherhood_egyptian |
| 51 | airport - heathrow - runway - flight - gatwick | 92 | 51_airport_heathrow_runway_flight |
| 52 | murder - arrested - suspicion - custody - postmortem | 92 | 52_murder_arrested_suspicion_custody |
| 53 | zoo - tiger - animal - elephant - rhino | 90 | 53_zoo_tiger_animal_elephant |
| 54 | festival - event - music - edinburgh - organiser | 90 | 54_festival_event_music_edinburgh |
| 55 | book - novel - author - prize - writer | 89 | 55_book_novel_author_prize |
| 56 | snooker - osullivan - frame - world - gerwen | 88 | 56_snooker_osullivan_frame_world |
| 57 | unsupported - updated - playback - device - media | 87 | 57_unsupported_updated_playback_device |
| 58 | india - indias - delhi - indian - woman | 86 | 58_india_indias_delhi_indian |
| 59 | trust - death - care - hospital - baby | 86 | 59_trust_death_care_hospital |
| 60 | bbc - licence - s4c - fee - wales | 84 | 60_bbc_licence_s4c_fee |
| 61 | prince - queen - royal - duchess - duke | 78 | 61_prince_queen_royal_duchess |
| 62 | index - benchmark - nikkei - chinas - growth | 78 | 62_index_benchmark_nikkei_chinas |
| 63 | abuse - police - child - sexual - exploitation | 77 | 63_abuse_police_child_sexual |
| 64 | belfast - ira - finucane - murder - family | 76 | 64_belfast_ira_finucane_murder |
| 65 | council - site - development - building - regeneration | 74 | 65_council_site_development_building |
| 66 | obesity - sugar - food - obese - drink | 73 | 66_obesity_sugar_food_obese |
| 67 | murder - court - heard - knife - trial | 72 | 67_murder_court_heard_knife |
| 68 | steel - tata - talbot - plant - port | 71 | 68_steel_tata_talbot_plant |
| 69 | bird - wildlife - birds - rspb - conservation | 67 | 69_bird_wildlife_birds_rspb |
| 70 | drug - cannabis - heroin - drugs - marijuana | 64 | 70_drug_cannabis_heroin_drugs |
| 71 | pen - macron - fillon - le - french | 61 | 71_pen_macron_fillon_le |
| 72 | sp - nasdaq - dow - rose - index | 61 | 72_sp_nasdaq_dow_rose |
| 73 | israel - israeli - palestinian - palestinians - hamas | 59 | 73_israel_israeli_palestinian_palestinians |
| 74 | ftse - shares - share - pound - index | 59 | 74_ftse_shares_share_pound |
| 75 | updated - gmt - 2017 - bst - last | 57 | 75_updated_gmt_2017_bst |
| 76 | broadband - bt - ofcom - openreach - customer | 57 | 76_broadband_bt_ofcom_openreach |
| 77 | vw - emission - car - volkswagen - diesel | 57 | 77_vw_emission_car_volkswagen |
| 78 | iran - nuclear - irans - iranian - rouhani | 56 | 78_iran_nuclear_irans_iranian |
| 79 | cushnahan - nama - ireland - northern - ni | 55 | 79_cushnahan_nama_ireland_northern |
| 80 | alcohol - drinking - drink - wine - minimum | 53 | 80_alcohol_drinking_drink_wine |
| 81 | fraud - money - court - judge - account | 53 | 81_fraud_money_court_judge |
| 82 | pope - vatican - francis - church - catholic | 53 | 82_pope_vatican_francis_church |
| 83 | pollution - air - emission - nitrogen - no2 | 52 | 83_pollution_air_emission_nitrogen |
| 84 | pokemon - game - console - nintendo - vr | 52 | 84_pokemon_game_console_nintendo |
| 85 | driver - road - camera - driving - speed | 52 | 85_driver_road_camera_driving |
| 86 | waste - recycling - bag - plastic - food | 52 | 86_waste_recycling_bag_plastic |
| 87 | farc - peace - eln - rebel - colombian | 50 | 87_farc_peace_eln_rebel |
| 88 | berlusconi - pp - rajoy - spains - catalan | 50 | 88_berlusconi_pp_rajoy_spains |
| 89 | thailand - thai - king - yingluck - thailands | 49 | 89_thailand_thai_king_yingluck |
| 90 | quantum - computer - machine - computing - ai | 46 | 90_quantum_computer_machine_computing |
| 91 | kosovo - bosnian - serbia - serb - srebrenica | 45 | 91_kosovo_bosnian_serbia_serb |
| 92 | drug - cannabis - cocaine - drugs - court | 45 | 92_drug_cannabis_cocaine_drugs |
| 93 | rousseff - petrobras - temer - brazils - corruption | 45 | 93_rousseff_petrobras_temer_brazils |
| 94 | yemen - houthis - hadi - houthi - saudi | 44 | 94_yemen_houthis_hadi_houthi |
| 95 | tax - budget - chancellor - cut - spending | 44 | 95_tax_budget_chancellor_cut |
| 96 | train - tram - driver - raib - rail | 44 | 96_train_tram_driver_raib |
| 97 | fbi - comey - trump - clinton - email | 44 | 97_fbi_comey_trump_clinton |
| 98 | drone - aircraft - drones - aviation - unmanned | 43 | 98_drone_aircraft_drones_aviation |
| 99 | smoking - tobacco - cigarette - ecigarettes - smoker | 42 | 99_smoking_tobacco_cigarette_ecigarettes |
| 100 | hillsborough - disaster - liverpool - 1989 - crush | 41 | 100_hillsborough_disaster_liverpool_1989 |
| 101 | council - local - cut - budget - tax | 40 | 101_council_local_cut_budget |
| 102 | google - facebook - user - video - search | 40 | 102_google_facebook_user_video |
| 103 | syria - islamic - family - son - iraq | 39 | 103_syria_islamic_family_son |
| 104 | missing - search - body - police - seen | 38 | 104_missing_search_body_police |
| 105 | airline - airbus - airlines - aer - boeing | 38 | 105_airline_airbus_airlines_aer |
| 106 | car - psa - vehicle - gm - battery | 36 | 106_car_psa_vehicle_gm |
| 107 | fish - salmon - fishing - water - fishery | 36 | 107_fish_salmon_fishing_water |
| 108 | oil - gas - decommissioning - field - sea | 36 | 108_oil_gas_decommissioning_field |
| 109 | policing - police - constable - officer - spa | 35 | 109_policing_police_constable_officer |
| 110 | fire - cladding - grenfell - tower - block | 34 | 110_fire_cladding_grenfell_tower |
| 111 | nuclear - reactor - fukushima - plant - radiation | 33 | 111_nuclear_reactor_fukushima_plant |
| 112 | tree - woodland - trees - oak - forest | 32 | 112_tree_woodland_trees_oak |
| 113 | milk - dairy - farmer - farmers - farming | 32 | 113_milk_dairy_farmer_farmers |
| 114 | abortion - woman - termination - ireland - northern | 32 | 114_abortion_woman_termination_ireland |
| 115 | whale - dolphin - whales - sperm - orca | 32 | 115_whale_dolphin_whales_sperm |
| 116 | nauru - australia - asylum - australian - seeker | 31 | 116_nauru_australia_asylum_australian |
| 117 | driving - clarke - car - causing - crash | 31 | 117_driving_clarke_car_causing |
| 118 | stolen - police - bike - haldane - robbery | 31 | 118_stolen_police_bike_haldane |
| 119 | meat - horsemeat - milk - food - product | 31 | 119_meat_horsemeat_milk_food |
| 120 | wage - living - pay - minimum - worker | 30 | 120_wage_living_pay_minimum |
| 121 | belfast - flag - parade - parades - loyalist | 30 | 121_belfast_flag_parade_parades |
| 122 | terrorism - arrested - arrest - suspicion - police | 29 | 122_terrorism_arrested_arrest_suspicion |
| 123 | manchester - protest - ford - police - london | 29 | 123_manchester_protest_ford_police |
| 124 | uber - driver - taxi - ubers - kalanick | 28 | 124_uber_driver_taxi_ubers |
| 125 | calais - camp - migrant - jungle - asylum | 28 | 125_calais_camp_migrant_jungle |
| 126 | music - streaming - spotify - album - artist | 28 | 126_music_streaming_spotify_album |
| 127 | childrens - ofsted - child - council - improvement | 28 | 127_childrens_ofsted_child_council |
| 128 | erdogan - turkish - turkey - coup - istanbul | 28 | 128_erdogan_turkish_turkey_coup |
| 129 | cuba - cuban - castro - cubans - havana | 28 | 129_cuba_cuban_castro_cubans |
| 130 | libya - gaddafi - libyan - tripoli - gaddafis | 27 | 130_libya_gaddafi_libyan_tripoli |
| 131 | oil - barrel - opec - price - saudi | 26 | 131_oil_barrel_opec_price |
| 132 | trident - nuclear - submarine - renewal - defence | 26 | 132_trident_nuclear_submarine_renewal |
| 133 | pistorius - steenkamp - reeva - toilet - intruder | 26 | 133_pistorius_steenkamp_reeva_toilet |
| 134 | transgender - gay - marriage - law - samesex | 26 | 134_transgender_gay_marriage_law |
| 135 | space - astronaut - peake - tim - iss | 25 | 135_space_astronaut_peake_tim |
| 136 | pte - inquest - lcpl - cpl - soldier | 25 | 136_pte_inquest_lcpl_cpl |
| 137 | cox - jo - mp - batley - mrs | 24 | 137_cox_jo_mp_batley |
| 138 | jackpot - lottery - ticket - camelot - prize | 24 | 138_jackpot_lottery_ticket_camelot |
| 139 | pottery - roman - excavation - stone - site | 24 | 139_pottery_roman_excavation_stone |
| 140 | wikipedia - woman - women - makeup - female | 24 | 140_wikipedia_woman_women_makeup |
| 141 | energy - price - supplier - customer - gas | 23 | 141_energy_price_supplier_customer |
| 142 | dinosaur - specimen - fossil - neanderthals - museum | 23 | 142_dinosaur_specimen_fossil_neanderthals |
| 143 | yamaha - rossi - marquez - lorenzo - ducati | 23 | 143_yamaha_rossi_marquez_lorenzo |
| 144 | execution - death - drug - lethal - executions | 22 | 144_execution_death_drug_lethal |
| 145 | tesla - car - selfdriving - vehicle - autonomous | 22 | 145_tesla_car_selfdriving_vehicle |
| 146 | famine - drought - somalia - food - aid | 22 | 146_famine_drought_somalia_food |
| 147 | inquiry - abuse - survivor - goddard - inquirys | 21 | 147_inquiry_abuse_survivor_goddard |
| 148 | mh370 - plane - search - flight - ocean | 21 | 148_mh370_plane_search_flight |
| 149 | coin - museum - hoard - treasure - ring | 21 | 149_coin_museum_hoard_treasure |
| 150 | assange - wikileaks - extradition - embassy - assanges | 21 | 150_assange_wikileaks_extradition_embassy |
| 151 | ride - alton - smiler - towers - merlin | 20 | 151_ride_alton_smiler_towers |
| 152 | fm - radio - tv - freedom - medium | 19 | 152_fm_radio_tv_freedom |
| 153 | pension - annuity - retirement - income - pensions | 19 | 153_pension_annuity_retirement_income |
| 154 | homelessness - homeless - housing - rough - council | 19 | 154_homelessness_homeless_housing_rough |
| 155 | facebook - news - fake - medium - social | 19 | 155_facebook_news_fake_medium |
| 156 | trade - tpp - nafta - us - mexico | 19 | 156_trade_tpp_nafta_us |
| 157 | whisky - distillery - beer - scotch - bottle | 19 | 157_whisky_distillery_beer_scotch |
| 158 | court - trigg - heard - ms - eli | 19 | 158_court_trigg_heard_ms |
| 159 | nba - curry - lebron - warriors - cleveland | 19 | 159_nba_curry_lebron_warriors |
| 160 | ferry - calmac - serco - ferries - contract | 18 | 160_ferry_calmac_serco_ferries |
| 161 | hms - ship - navy - shipbuilding - warship | 18 | 161_hms_ship_navy_shipbuilding |
| 162 | syria - strike - iraq - mps - military | 18 | 162_syria_strike_iraq_mps |
| 163 | childcare - child - parent - inheritance - meal | 18 | 163_childcare_child_parent_inheritance |
| 164 | junior - doctor - bma - contract - doctors | 18 | 164_junior_doctor_bma_contract |
| 165 | 1916 - rising - irish - easter - ireland | 18 | 165_1916_rising_irish_easter |
| 166 | condor - guernsey - ship - poole - port | 17 | 166_condor_guernsey_ship_poole |
| 167 | hussain - terrorism - terrorist - heard - court | 17 | 167_hussain_terrorism_terrorist_heard |
| 168 | unemployment - ons - rate - employment - growth | 17 | 168_unemployment_ons_rate_employment |
| 169 | suu - kyi - nld - aung - thein | 16 | 169_suu_kyi_nld_aung |
| 170 | eurotunnel - calais - french - eurostar - train | 16 | 170_eurotunnel_calais_french_eurostar |
| 171 | bike - cycling - cycle - cyclist - parking | 15 | 171_bike_cycling_cycle_cyclist |
| 172 | breath - driving - drinkdriving - limit - driver | 15 | 172_breath_driving_drinkdriving_limit |
| 173 | everest - avalanche - mountain - sherpa - icefall | 15 | 173_everest_avalanche_mountain_sherpa |
| 174 | reef - coral - vent - seabed - marine | 15 | 174_reef_coral_vent_seabed |
| 175 | army - defence - mod - reserve - recruitment | 15 | 175_army_defence_mod_reserve |
| 176 | explosion - tianjin - bomb - blast - bethnal | 15 | 176_explosion_tianjin_bomb_blast |
| 177 | mayor - devolution - combined - greater - region | 15 | 177_mayor_devolution_combined_greater |
| 178 | tax - company - uk - cayman - profit | 15 | 178_tax_company_uk_cayman |
| 179 | muslims - ban - muslim - us - order | 15 | 179_muslims_ban_muslim_us |
| 180 | growth - output - sector - scotlands - scottish | 15 | 180_growth_output_sector_scotlands |
| 181 | suicide - acne - judith - life - mental | 14 | 181_suicide_acne_judith_life |
| 182 | bp - spill - oil - rig - deepwater | 14 | 182_bp_spill_oil_rig |
| 183 | xinjiang - uighur - uighurs - urumqi - chinese | 14 | 183_xinjiang_uighur_uighurs_urumqi |
| 184 | refugee - syrians - syria - syrian - refugees | 14 | 184_refugee_syrians_syria_syrian |
| 185 | rea - sykes - davies - fish - race | 14 | 185_rea_sykes_davies_fish |
| 186 | mortgage - lending - debt - insolvency - lender | 13 | 186_mortgage_lending_debt_insolvency |
| 187 | barnes - pilot - helicopter - crash - fog | 13 | 187_barnes_pilot_helicopter_crash |
| 188 | rhodes - statue - igbo - college - oriel | 13 | 188_rhodes_statue_igbo_college |
| 189 | edf - hinkley - nuclear - plant - reactor | 13 | 189_edf_hinkley_nuclear_plant |
| 190 | sweeney - church - leonard - alder - megans | 13 | 190_sweeney_church_leonard_alder |
| 191 | duterte - philippines - mindanao - dutertes - martial | 13 | 191_duterte_philippines_mindanao_dutertes |
| 192 | ferry - ship - yoo - sank - sewol | 13 | 192_ferry_ship_yoo_sank |
| 193 | norovirus - diarrhoea - hospital - virus - patient | 13 | 193_norovirus_diarrhoea_hospital_virus |
| 194 | art - arts - culture - theatre - funding | 13 | 194_art_arts_culture_theatre |
| 195 | pipeline - dakota - oil - sioux - project | 13 | 195_pipeline_dakota_oil_sioux |
| 196 | climate - temperature - warming - global - ocean | 13 | 196_climate_temperature_warming_global |
| 197 | leg - solar - piccard - impulse - borschberg | 12 | 197_leg_solar_piccard_impulse |
| 198 | gun - zimmerman - roof - fbi - shooting | 12 | 198_gun_zimmerman_roof_fbi |
| 199 | copyright - infringement - megaupload - pirated - piracy | 12 | 199_copyright_infringement_megaupload_pirated |
| 200 | bee - hive - beekeeper - honey - tunibee | 12 | 200_bee_hive_beekeeper_honey |
| 201 | bombardier - cseries - belfast - bombardiers - learjet | 11 | 201_bombardier_cseries_belfast_bombardiers |
| 202 | trudeau - canada - canadian - harper - prentice | 11 | 202_trudeau_canada_canadian_harper |
| 203 | object - reopened - evacuated - bomb - street | 11 | 203_object_reopened_evacuated_bomb |
| 204 | autism - mental - child - health - autistic | 11 | 204_autism_mental_child_health |
| 205 | regiment - lcpl - helmand - afghanistan - soldier | 11 | 205_regiment_lcpl_helmand_afghanistan |
| 206 | tunisia - attack - sousse - hotel - essid | 11 | 206_tunisia_attack_sousse_hotel |
| 207 | press - leveson - foi - ipso - newspaper | 11 | 207_press_leveson_foi_ipso |
| 208 | raf - aircraft - base - mildenhall - squadron | 11 | 208_raf_aircraft_base_mildenhall |
| 209 | language - welsh - literature - huws - meri | 11 | 209_language_welsh_literature_huws |
| 210 | concert - manchester - grande - ariana - arena | 11 | 210_concert_manchester_grande_ariana |
| 211 | lubitz - cockpit - lufthansa - copilot - germanwings | 11 | 211_lubitz_cockpit_lufthansa_copilot |
| 212 | facebook - tweet - gamergate - content - user | 10 | 212_facebook_tweet_gamergate_content |
| 213 | mine - miner - underground - fyfield - mining | 10 | 213_mine_miner_underground_fyfield |
| 214 | ira - sinn - fin - cahill - ireland | 10 | 214_ira_sinn_fin_cahill |
| 215 | gear - clarkson - hammond - show - clarksons | 10 | 215_gear_clarkson_hammond_show |
| 216 | tree - trees - felling - sheffield - diseased | 10 | 216_tree_trees_felling_sheffield |
| 217 | forbes - richest - billionaire - list - billionaires | 9 | 217_forbes_richest_billionaire_list |
| 218 | pier - structure - bewl - birnbeck - restore | 9 | 218_pier_structure_bewl_birnbeck |
| 219 | bbcscotlandpics - scotlandpicturesbbccouk - picture - selection - instagram | 9 | 219_bbcscotlandpics_scotlandpicturesbbccouk_picture_selection |
| 220 | chemical - tianjin - blast - cyanide - sodium | 9 | 220_chemical_tianjin_blast_cyanide |
| 221 | lever - ranganathan - gray - spinal - mire | 9 | 221_lever_ranganathan_gray_spinal |
| 222 | internet - icann - cac - user - china | 9 | 222_internet_icann_cac_user |
| 223 | chandelier - museum - bute - museums - abmu | 8 | 223_chandelier_museum_bute_museums |
| 224 | poultry - bird - flu - outbreak - avian | 8 | 224_poultry_bird_flu_outbreak |
| 225 | school - parent - thot - dress - circus | 8 | 225_school_parent_thot_dress |
| 226 | gambling - casino - machine - betting - machines | 8 | 226_gambling_casino_machine_betting |
| 227 | ticket - venue - theatre - ticketing - tickets | 8 | 227_ticket_venue_theatre_ticketing |
| 228 | cardiff - solstice - arriva - train - station | 8 | 228_cardiff_solstice_arriva_train |
| 229 | hacking - brooks - editor - sun - news | 7 | 229_hacking_brooks_editor_sun |
| 230 | sats - gnome - 11 - santa - cam | 7 | 230_sats_gnome_11_santa |
| 231 | robot - biomimicry - benyus - robots - robotics | 7 | 231_robot_biomimicry_benyus_robots |
| 232 | parkrun - parking - park - laugharne - charge | 7 | 232_parkrun_parking_park_laugharne |
| 233 | organ - transplant - donor - donation - optout | 7 | 233_organ_transplant_donor_donation |
| 234 | cav - bowers - ramadhan - aerospace - grills | 7 | 234_cav_bowers_ramadhan_aerospace |
| 235 | call - scotland - bilston - police - hmics | 6 | 235_call_scotland_bilston_police |
| 236 | sao - water - munduruku - tapajos - paulo | 6 | 236_sao_water_munduruku_tapajos |
| 237 | eurovision - song - contest - redzepova - entry | 6 | 237_eurovision_song_contest_redzepova |
| 238 | livingstone - antisemitism - labour - mann - comment | 6 | 238_livingstone_antisemitism_labour_mann |
| 239 | book - publishing - ebook - asi - digital | 6 | 239_book_publishing_ebook_asi |
| 240 | befriending - elaine - frsb - older - fundraising | 6 | 240_befriending_elaine_frsb_older |
| 241 | strathaven - tipper - scene - police - humbie | 6 | 241_strathaven_tipper_scene_police |
| 242 | bay - cardiff - swansea - region - investment | 5 | 242_bay_cardiff_swansea_region |
| 243 | cheese - food - outbreak - coli - flicks | 5 | 243_cheese_food_outbreak_coli |
| 244 | witheridge - miller - thai - koh - tao | 5 | 244_witheridge_miller_thai_koh |
| 245 | yorkshire - tour - depart - cycling - verity | 5 | 245_yorkshire_tour_depart_cycling |
| 246 | airline - lufthansa - franceklm - air - flight | 5 | 246_airline_lufthansa_franceklm_air |
| 247 | caffel - honourbased - forensic - warning - gill | 5 | 247_caffel_honourbased_forensic_warning |
| 248 | torreele - quebec - bissonnette - polish - boissoneault | 5 | 248_torreele_quebec_bissonnette_polish |
| 249 | lash - advert - ad - skin - asa | 5 | 249_lash_advert_ad_skin |
| 250 | fgm - girl - practice - subjected - woman | 5 | 250_fgm_girl_practice_subjected |
| 251 | parkland - wepre - heritage - margam - arnold | 5 | 251_parkland_wepre_heritage_margam |
| 252 | coal - aberfan - colliery - gedling - thoresby | 5 | 252_coal_aberfan_colliery_gedling |
| 253 | exoffenders - pupil - gwynne - yemms - school | 5 | 253_exoffenders_pupil_gwynne_yemms |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.10.12
|
KingKazma/cnn_dailymail_6789_50000_25000_v1_train
|
KingKazma
| 2023-08-19T21:43:33Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-08-15T20:37:58Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# cnn_dailymail_6789_50000_25000_v1_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/cnn_dailymail_6789_50000_25000_v1_train")
topic_model.get_topic_info()
```
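The topics listed below can also be queried programmatically. A short sketch continuing from the snippet above; `find_topics` assumes the embedding model is available alongside the loaded topic model:
```python
# Keyword/score pairs for a single topic
# (topic 0 in the table below groups football/league stories).
print(topic_model.get_topic(0))

# Topics most similar to a free-text query, as (topic ids, similarity scores).
similar_topics, similarity = topic_model.find_topics("space exploration", top_n=3)
print(similar_topics, similarity)
```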
## Topic overview
* Number of topics: 295
* Number of training documents: 50000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - one - year - people - mr | 5 | -1_said_one_year_people |
| 0 | league - player - cup - goal - club | 25951 | 0_league_player_cup_goal |
| 1 | police - murder - shooting - shot - county | 4658 | 1_police_murder_shooting_shot |
| 2 | apple - iphone - google - user - facebook | 1101 | 2_apple_iphone_google_user |
| 3 | fashion - hair - look - dress - model | 739 | 3_fashion_hair_look_dress |
| 4 | syria - isis - syrian - iraq - islamic | 651 | 4_syria_isis_syrian_iraq |
| 5 | flight - plane - passenger - airport - aircraft | 555 | 5_flight_plane_passenger_airport |
| 6 | space - earth - mars - nasa - planet | 553 | 6_space_earth_mars_nasa |
| 7 | sex - sexual - school - victim - girl | 424 | 7_sex_sexual_school_victim |
| 8 | obama - republicans - republican - president - democrats | 420 | 8_obama_republicans_republican_president |
| 9 | hospital - cancer - baby - doctor - heart | 416 | 9_hospital_cancer_baby_doctor |
| 10 | murray - wimbledon - tennis - djokovic - federer | 362 | 10_murray_wimbledon_tennis_djokovic |
| 11 | film - movie - show - million - actor | 359 | 11_film_movie_show_million |
| 12 | china - chinese - hong - chinas - kong | 337 | 12_china_chinese_hong_chinas |
| 13 | prince - royal - duchess - queen - princess | 303 | 13_prince_royal_duchess_queen |
| 14 | property - house - price - home - estate | 299 | 14_property_house_price_home |
| 15 | ukraine - russian - russia - putin - ukrainian | 293 | 15_ukraine_russian_russia_putin |
| 16 | hamilton - rosberg - race - prix - mercedes | 279 | 16_hamilton_rosberg_race_prix |
| 17 | bear - animal - zoo - elephant - gorilla | 253 | 17_bear_animal_zoo_elephant |
| 18 | dog - animal - cat - pet - owner | 243 | 18_dog_animal_cat_pet |
| 19 | food - restaurant - drink - sugar - chef | 243 | 19_food_restaurant_drink_sugar |
| 20 | korea - north - korean - kim - koreas | 239 | 20_korea_north_korean_kim |
| 21 | mcilroy - golf - woods - pga - ryder | 229 | 21_mcilroy_golf_woods_pga |
| 22 | painting - art - artist - auction - work | 229 | 22_painting_art_artist_auction |
| 23 | weight - diet - fat - eating - size | 215 | 23_weight_diet_fat_eating |
| 24 | labour - miliband - ukip - mr - party | 198 | 24_labour_miliband_ukip_mr |
| 25 | olympic - gold - medal - games - olympics | 191 | 25_olympic_gold_medal_games |
| 26 | ship - cruise - boat - coast - crew | 188 | 26_ship_cruise_boat_coast |
| 27 | murder - stabbed - knife - police - mr | 178 | 27_murder_stabbed_knife_police |
| 28 | sudan - alshabaab - somalia - kenya - kenyan | 176 | 28_sudan_alshabaab_somalia_kenya |
| 29 | mexico - mexican - cartel - drug - border | 161 | 29_mexico_mexican_cartel_drug |
| 30 | fraud - money - court - cash - bank | 161 | 30_fraud_money_court_cash |
| 31 | iran - iranian - irans - nuclear - tehran | 158 | 31_iran_iranian_irans_nuclear |
| 32 | snow - storm - weather - tornado - inch | 153 | 32_snow_storm_weather_tornado |
| 33 | mayweather - fight - boxing - pacquiao - floyd | 153 | 33_mayweather_fight_boxing_pacquiao |
| 34 | school - education - pupil - exam - ofsted | 148 | 34_school_education_pupil_exam |
| 35 | ebola - virus - liberia - outbreak - leone | 141 | 35_ebola_virus_liberia_outbreak |
| 36 | woman - men - partner - relationship - women | 134 | 36_woman_men_partner_relationship |
| 37 | pakistan - pakistani - taliban - malala - pakistans | 132 | 37_pakistan_pakistani_taliban_malala |
| 38 | shark - whale - dolphin - fish - sea | 130 | 38_shark_whale_dolphin_fish |
| 39 | music - song - album - elvis - band | 129 | 39_music_song_album_elvis |
| 40 | israeli - israel - gaza - palestinian - hamas | 125 | 40_israeli_israel_gaza_palestinian |
| 41 | hacker - data - cyber - computer - sony | 125 | 41_hacker_data_cyber_computer |
| 42 | hotel - resort - guest - room - suite | 118 | 42_hotel_resort_guest_room |
| 43 | nhs - patient - hospital - care - patients | 112 | 43_nhs_patient_hospital_care |
| 44 | fire - blaze - smoke - firefighter - flame | 109 | 44_fire_blaze_smoke_firefighter |
| 45 | weather - rain - temperature - flood - flooding | 108 | 45_weather_rain_temperature_flood |
| 46 | mountain - climber - avalanche - climb - ski | 105 | 46_mountain_climber_avalanche_climb |
| 47 | car - vehicle - motor - engine - speed | 100 | 47_car_vehicle_motor_engine |
| 48 | nfl - rice - quarterback - goodell - patriots | 100 | 48_nfl_rice_quarterback_goodell |
| 49 | jackson - jacksons - bobbi - aeg - houston | 97 | 49_jackson_jacksons_bobbi_aeg |
| 50 | tesco - christmas - shopper - shopping - sale | 93 | 50_tesco_christmas_shopper_shopping |
| 51 | pope - vatican - francis - church - cardinal | 92 | 51_pope_vatican_francis_church |
| 52 | thailand - thai - myanmar - bangkok - cambodia | 91 | 52_thailand_thai_myanmar_bangkok |
| 53 | energy - price - gas - electricity - wind | 91 | 53_energy_price_gas_electricity |
| 54 | horse - stakes - race - racing - jockey | 90 | 54_horse_stakes_race_racing |
| 55 | chavez - venezuela - maduro - venezuelan - zelaya | 90 | 55_chavez_venezuela_maduro_venezuelan |
| 56 | snowden - nsa - intelligence - surveillance - edward | 89 | 56_snowden_nsa_intelligence_surveillance |
| 57 | lottery - jackpot - ticket - powerball - casino | 86 | 57_lottery_jackpot_ticket_powerball |
| 58 | mammoth - fossil - neanderthals - bone - human | 83 | 58_mammoth_fossil_neanderthals_bone |
| 59 | egyptian - egypt - cairo - egypts - brotherhood | 80 | 59_egyptian_egypt_cairo_egypts |
| 60 | flu - virus - vaccine - measles - strain | 80 | 60_flu_virus_vaccine_measles |
| 61 | lohan - probation - brown - lindsay - angeles | 76 | 61_lohan_probation_brown_lindsay |
| 62 | greece - greek - eurozone - euro - bailout | 76 | 62_greece_greek_eurozone_euro |
| 63 | gun - nra - newtown - background - mental | 75 | 63_gun_nra_newtown_background |
| 64 | car - driver - road - lorry - vehicle | 74 | 64_car_driver_road_lorry |
| 65 | ferguson - brown - wilson - louis - police | 73 | 65_ferguson_brown_wilson_louis |
| 66 | tsarnaev - boston - dzhokhar - marathon - bombing | 72 | 66_tsarnaev_boston_dzhokhar_marathon |
| 67 | hacking - brooks - news - coulson - murdoch | 71 | 67_hacking_brooks_news_coulson |
| 68 | saudi - arabia - dubai - arab - woman | 71 | 68_saudi_arabia_dubai_arab |
| 69 | bank - barclays - rbs - libor - bonus | 70 | 69_bank_barclays_rbs_libor |
| 70 | nazi - camp - jews - auschwitz - hitler | 70 | 70_nazi_camp_jews_auschwitz |
| 71 | afghan - afghanistan - taliban - kabul - province | 69 | 71_afghan_afghanistan_taliban_kabul |
| 72 | marriage - samesex - gay - state - couple | 67 | 72_marriage_samesex_gay_state |
| 73 | africa - african - continent - africas - kenya | 65 | 73_africa_african_continent_africas |
| 74 | libya - gadhafi - libyan - tripoli - gadhafis | 65 | 74_libya_gadhafi_libyan_tripoli |
| 75 | india - delhi - indian - rape - indias | 63 | 75_india_delhi_indian_rape |
| 76 | cuba - cuban - castro - havana - cubans | 63 | 76_cuba_cuban_castro_havana |
| 77 | roman - ancient - tomb - archaeologist - bc | 61 | 77_roman_ancient_tomb_archaeologist |
| 78 | bali - sukumaran - chan - indonesia - indonesian | 58 | 78_bali_sukumaran_chan_indonesia |
| 79 | christmas - toy - santa - tree - lego | 58 | 79_christmas_toy_santa_tree |
| 80 | train - amtrak - crash - passenger - track | 57 | 80_train_amtrak_crash_passenger |
| 81 | xbox - console - game - playstation - gaming | 55 | 81_xbox_console_game_playstation |
| 82 | tsa - airport - security - screening - passenger | 55 | 82_tsa_airport_security_screening |
| 83 | fire - wildfire - blaze - firefighter - forest | 55 | 83_fire_wildfire_blaze_firefighter |
| 84 | cancer - breast - drug - lung - prostate | 54 | 84_cancer_breast_drug_lung |
| 85 | boko - haram - nigeria - nigerian - nigerias | 52 | 85_boko_haram_nigeria_nigerian |
| 86 | turkish - turkey - erdogan - turkeys - pkk | 52 | 86_turkish_turkey_erdogan_turkeys |
| 87 | haiti - portauprince - haitian - earthquake - haitis | 51 | 87_haiti_portauprince_haitian_earthquake |
| 88 | scotland - scottish - independence - salmond - vote | 50 | 88_scotland_scottish_independence_salmond |
| 89 | rio - brazil - sao - paulo - janeiro | 50 | 89_rio_brazil_sao_paulo |
| 90 | meat - food - beef - horse - halal | 48 | 90_meat_food_beef_horse |
| 91 | zimmerman - zimmermans - trayvon - martin - george | 48 | 91_zimmerman_zimmermans_trayvon_martin |
| 92 | pirate - ship - somali - somalia - vessel | 47 | 92_pirate_ship_somali_somalia |
| 93 | eu - migrant - benefit - migration - uk | 47 | 93_eu_migrant_benefit_migration |
| 94 | soldier - corporal - helmand - afghanistan - army | 46 | 94_soldier_corporal_helmand_afghanistan |
| 95 | mandela - mandelas - south - nelson - african | 46 | 95_mandela_mandelas_south_nelson |
| 96 | pistorius - steenkamp - reeva - oscar - nel | 46 | 96_pistorius_steenkamp_reeva_oscar |
| 97 | immigration - immigrant - border - arizona - arpaio | 46 | 97_immigration_immigrant_border_arizona |
| 98 | book - novel - author - lee - mockingbird | 46 | 98_book_novel_author_lee |
| 99 | mugabe - zimbabwe - tsvangirai - zimbabwes - mugabes | 46 | 99_mugabe_zimbabwe_tsvangirai_zimbabwes |
| 100 | smoking - tobacco - cigarette - ecigarettes - smoker | 46 | 100_smoking_tobacco_cigarette_ecigarettes |
| 101 | plant - reactor - nuclear - fukushima - radiation | 46 | 101_plant_reactor_nuclear_fukushima |
| 102 | nba - lin - lebron - james - cavaliers | 44 | 102_nba_lin_lebron_james |
| 103 | guantanamo - cia - detainee - interrogation - torture | 44 | 103_guantanamo_cia_detainee_interrogation |
| 104 | curriculum - todays - transcript - feedback - student | 43 | 104_curriculum_todays_transcript_feedback |
| 105 | eu - cameron - european - referendum - brussels | 42 | 105_eu_cameron_european_referendum |
| 106 | insurance - obamacare - health - care - coverage | 42 | 106_insurance_obamacare_health_care |
| 107 | volcano - lava - eruption - ash - pahoa | 41 | 107_volcano_lava_eruption_ash |
| 108 | china - japan - chinese - japanese - japans | 41 | 108_china_japan_chinese_japanese |
| 109 | tower - trade - memorial - 911 - center | 41 | 109_tower_trade_memorial_911 |
| 110 | marijuana - cannabis - pot - drug - colorado | 41 | 110_marijuana_cannabis_pot_drug |
| 111 | war - dday - normandy - german - soldier | 40 | 111_war_dday_normandy_german |
| 112 | typhoon - manila - philippines - storm - landslide | 40 | 112_typhoon_manila_philippines_storm |
| 113 | yemen - sanaa - yemeni - drone - houthis | 39 | 113_yemen_sanaa_yemeni_drone |
| 114 | skin - sunscreen - tanning - cancer - sun | 39 | 114_skin_sunscreen_tanning_cancer |
| 115 | hasan - bales - fort - hood - soldier | 38 | 115_hasan_bales_fort_hood |
| 116 | transcript - student - news - todays - cnn | 38 | 116_transcript_student_news_todays |
| 117 | raf - pilot - aircraft - war - squadron | 37 | 117_raf_pilot_aircraft_war |
| 118 | baseball - yankees - rodriguez - mlb - pitcher | 37 | 118_baseball_yankees_rodriguez_mlb |
| 119 | earthquake - quake - magnitude - tsunami - tremor | 37 | 119_earthquake_quake_magnitude_tsunami |
| 120 | bird - squirrel - serama - duck - fox | 36 | 120_bird_squirrel_serama_duck |
| 121 | adebolajo - rigby - woolwich - lee - adebowale | 36 | 121_adebolajo_rigby_woolwich_lee |
| 122 | hernandez - hernandezs - lloyd - odin - patriots | 36 | 122_hernandez_hernandezs_lloyd_odin |
| 123 | cannabis - drug - cocaine - jailed - birmingham | 35 | 123_cannabis_drug_cocaine_jailed |
| 124 | benghazi - attack - committee - libya - ambassador | 35 | 124_benghazi_attack_committee_libya |
| 125 | abbott - gillard - minister - prime - tony | 34 | 125_abbott_gillard_minister_prime |
| 126 | weiner - leathers - black - abedin - colagiovanni | 34 | 126_weiner_leathers_black_abedin |
| 127 | oil - bp - spill - gulf - dispersants | 33 | 127_oil_bp_spill_gulf |
| 128 | crime - police - force - officer - policing | 33 | 128_crime_police_force_officer |
| 129 | miss - pageant - universe - beauty - contestant | 32 | 129_miss_pageant_universe_beauty |
| 130 | kennedy - oswald - assassination - kennedys - 1963 | 32 | 130_kennedy_oswald_assassination_kennedys |
| 131 | lanza - hook - sandy - school - newtown | 32 | 131_lanza_hook_sandy_school |
| 132 | crash - driver - driving - car - adenhart | 31 | 132_crash_driver_driving_car |
| 133 | spains - eta - spanish - madrid - spain | 31 | 133_spains_eta_spanish_madrid |
| 134 | burglary - jailed - burglar - court - crown | 30 | 134_burglary_jailed_burglar_court |
| 135 | bieber - justin - biebers - selena - singer | 30 | 135_bieber_justin_biebers_selena |
| 136 | mccann - madeleine - mccanns - madeleines - gerry | 30 | 136_mccann_madeleine_mccanns_madeleines |
| 137 | brain - anxiety - researcher - fmri - neuron | 30 | 137_brain_anxiety_researcher_fmri |
| 138 | bbc - presenter - radio - clarkson - programme | 29 | 138_bbc_presenter_radio_clarkson |
| 139 | knox - sollecito - kercher - meredith - knoxs | 29 | 139_knox_sollecito_kercher_meredith |
| 140 | cosby - drugged - cosbys - comedian - bill | 28 | 140_cosby_drugged_cosbys_comedian |
| 141 | fraternity - university - campus - student - smu | 28 | 141_fraternity_university_campus_student |
| 142 | mafia - roma - italian - italy - rancadore | 27 | 142_mafia_roma_italian_italy |
| 143 | hiv - aids - virus - infection - antiretroviral | 27 | 143_hiv_aids_virus_infection |
| 144 | berlusconi - silvio - italian - berlusconis - bunga | 27 | 144_berlusconi_silvio_italian_berlusconis |
| 145 | drone - unmanned - drones - aircraft - faa | 26 | 145_drone_unmanned_drones_aircraft |
| 146 | paris - french - hebdo - dekhar - charlie | 26 | 146_paris_french_hebdo_dekhar |
| 147 | antibiotic - infection - bacteria - antibiotics - necc | 26 | 147_antibiotic_infection_bacteria_antibiotics |
| 148 | assange - wikileaks - embassy - sweden - julian | 26 | 148_assange_wikileaks_embassy_sweden |
| 149 | twitter - abuse - online - criadoperez - bullying | 25 | 149_twitter_abuse_online_criadoperez |
| 150 | veil - blair - france - burqa - ban | 25 | 150_veil_blair_france_burqa |
| 151 | parking - yellow - council - motorist - line | 25 | 151_parking_yellow_council_motorist |
| 152 | katie - married - wedding - demi - marriage | 24 | 152_katie_married_wedding_demi |
| 153 | falklands - falkland - islands - argentina - argentine | 24 | 153_falklands_falkland_islands_argentina |
| 154 | evans - ched - sheffield - club - rape | 24 | 154_evans_ched_sheffield_club |
| 155 | branch - ambulance - died - skye - milligan | 24 | 155_branch_ambulance_died_skye |
| 156 | ford - toronto - mayor - crack - rob | 24 | 156_ford_toronto_mayor_crack |
| 157 | wedding - bride - bridesmaid - dress - couple | 24 | 157_wedding_bride_bridesmaid_dress |
| 158 | salmonella - outbreak - bacteria - contaminated - food | 24 | 158_salmonella_outbreak_bacteria_contaminated |
| 159 | climate - change - global - emission - warming | 23 | 159_climate_change_global_emission |
| 160 | anthony - caylee - anthonys - casey - baez | 23 | 160_anthony_caylee_anthonys_casey |
| 161 | philippines - philippine - ampatuan - mindanao - maguindanao | 23 | 161_philippines_philippine_ampatuan_mindanao |
| 162 | scientology - church - pastor - driscoll - miscavige | 23 | 162_scientology_church_pastor_driscoll |
| 163 | blasio - mayor - officer - batkid - nypd | 23 | 163_blasio_mayor_officer_batkid |
| 164 | froome - tour - contador - stage - cavendish | 22 | 164_froome_tour_contador_stage |
| 165 | irs - committee - issa - holder - lerner | 22 | 165_irs_committee_issa_holder |
| 166 | bergdahl - bergdahls - taliban - bowe - army | 22 | 166_bergdahl_bergdahls_taliban_bowe |
| 167 | monis - siege - cafe - lindt - haron | 22 | 167_monis_siege_cafe_lindt |
| 168 | bulger - bulgers - flemmi - martorano - whitey | 22 | 168_bulger_bulgers_flemmi_martorano |
| 169 | sri - tamil - lankan - lanka - tigers | 22 | 169_sri_tamil_lankan_lanka |
| 170 | holiday - cent - per - brits - traveller | 22 | 170_holiday_cent_per_brits |
| 171 | plant - gm - crop - food - space | 22 | 171_plant_gm_crop_food |
| 172 | paedophile - cyril - nccl - abuse - inquiry | 22 | 172_paedophile_cyril_nccl_abuse |
| 173 | sloot - der - peru - lima - peruvian | 21 | 173_sloot_der_peru_lima |
| 174 | sterling - stiviano - nba - clippers - sterlings | 21 | 174_sterling_stiviano_nba_clippers |
| 175 | breivik - utoya - oslo - breiviks - norway | 21 | 175_breivik_utoya_oslo_breiviks |
| 176 | alcohol - drinking - liver - drink - gastroenterologist | 21 | 176_alcohol_drinking_liver_drink |
| 177 | asylum - seeker - nauru - refugee - manus | 20 | 177_asylum_seeker_nauru_refugee |
| 178 | kennedy - kennedys - mary - robert - jr | 20 | 178_kennedy_kennedys_mary_robert |
| 179 | gascoigne - aiden - ghost - school - poole | 20 | 179_gascoigne_aiden_ghost_school |
| 180 | russian - adoption - russia - child - adopted | 20 | 180_russian_adoption_russia_child |
| 181 | reveller - event - night - carnage - drinking | 20 | 181_reveller_event_night_carnage |
| 182 | armstrong - doping - armstrongs - usada - antidoping | 19 | 182_armstrong_doping_armstrongs_usada |
| 183 | derick - birth - zoey - bianca - steph | 19 | 183_derick_birth_zoey_bianca |
| 184 | strike - union - unite - rmt - tube | 19 | 184_strike_union_unite_rmt |
| 185 | va - veteran - veterans - shinseki - phoenix | 19 | 185_va_veteran_veterans_shinseki |
| 186 | immigration - reform - immigrant - obama - republicans | 19 | 186_immigration_reform_immigrant_obama |
| 187 | ira - belfast - ireland - northern - bomb | 18 | 187_ira_belfast_ireland_northern |
| 188 | council - garden - rubbish - neighbour - knotweed | 18 | 188_council_garden_rubbish_neighbour |
| 189 | sinclair - sexual - assault - military - sinclairs | 18 | 189_sinclair_sexual_assault_military |
| 190 | sandusky - penn - paterno - sanduskys - state | 18 | 190_sandusky_penn_paterno_sanduskys |
| 191 | gay - russia - russian - sochi - propaganda | 18 | 191_gay_russia_russian_sochi |
| 192 | trierweiler - hollande - gayet - valerie - hollandes | 18 | 192_trierweiler_hollande_gayet_valerie |
| 193 | bosnian - srebrenica - mladic - serb - serbian | 18 | 193_bosnian_srebrenica_mladic_serb |
| 194 | calais - migrant - lorry - port - illegal | 18 | 194_calais_migrant_lorry_port |
| 195 | drug - ecstasy - wyvell - methadone - death | 17 | 195_drug_ecstasy_wyvell_methadone |
| 196 | circumcision - fgm - genital - mutilation - circumcised | 17 | 196_circumcision_fgm_genital_mutilation |
| 197 | mine - miner - coal - rescue - mining | 17 | 197_mine_miner_coal_rescue |
| 198 | christie - christies - wildstein - jersey - governor | 17 | 198_christie_christies_wildstein_jersey |
| 199 | rice - coach - rutgers - basketball - ware | 17 | 199_rice_coach_rutgers_basketball |
| 200 | breach - card - credit - data - target | 17 | 200_breach_card_credit_data |
| 201 | alzheimers - brain - study - stress - disease | 17 | 201_alzheimers_brain_study_stress |
| 202 | hurricane - storm - parish - tropical - rain | 17 | 202_hurricane_storm_parish_tropical |
| 203 | indias - india - delhi - modi - hazare | 17 | 203_indias_india_delhi_modi |
| 204 | robot - asimo - robotics - robots - daler | 16 | 204_robot_asimo_robotics_robots |
| 205 | tree - trees - cherry - bonsai - ash | 16 | 205_tree_trees_cherry_bonsai |
| 206 | tattoo - tattooing - tattoos - tattooed - inked | 16 | 206_tattoo_tattooing_tattoos_tattooed |
| 207 | tax - osborne - 40p - rate - chancellor | 16 | 207_tax_osborne_40p_rate |
| 208 | mieses - bikers - crash - driver - lien | 16 | 208_mieses_bikers_crash_driver |
| 209 | petraeus - broadwell - kelley - humphries - affair | 16 | 209_petraeus_broadwell_kelley_humphries |
| 210 | wars - star - scifi - darth - film | 16 | 210_wars_star_scifi_darth |
| 211 | dancing - ballet - pole - dance - dancer | 16 | 211_dancing_ballet_pole_dance |
| 212 | church - archbishop - bishop - anglican - sentamu | 16 | 212_church_archbishop_bishop_anglican |
| 213 | sotomayor - justice - ginsburg - voter - supreme | 15 | 213_sotomayor_justice_ginsburg_voter |
| 214 | statin - aspirin - yeast - supplement - risk | 15 | 214_statin_aspirin_yeast_supplement |
| 215 | road - driver - cent - traffic - aa | 15 | 215_road_driver_cent_traffic |
| 216 | dewani - anni - shrien - dewanis - mngeni | 15 | 216_dewani_anni_shrien_dewanis |
| 217 | poverty - income - homeless - homelessness - poor | 15 | 217_poverty_income_homeless_homelessness |
| 218 | sharper - kolstad - stallworth - nfl - mcnabb | 15 | 218_sharper_kolstad_stallworth_nfl |
| 219 | ice - climate - antarctic - greenland - warming | 15 | 219_ice_climate_antarctic_greenland |
| 220 | jerusalem - temple - ancient - hebrew - jewish | 14 | 220_jerusalem_temple_ancient_hebrew |
| 221 | veteran - veterans - cemetery - memorial - war | 14 | 221_veteran_veterans_cemetery_memorial |
| 222 | li - teacher - school - china - province | 14 | 222_li_teacher_school_china |
| 223 | postal - mail - tnt - royal - stamp | 14 | 223_postal_mail_tnt_royal |
| 224 | spanish - spain - gibraltar - morocco - spains | 14 | 224_spanish_spain_gibraltar_morocco |
| 225 | gonzalez - white - secret - fence - house | 14 | 225_gonzalez_white_secret_fence |
| 226 | raid - store - shop - cash - theft | 13 | 226_raid_store_shop_cash |
| 227 | laden - bin - al - qaeda - attack | 13 | 227_laden_bin_al_qaeda |
| 228 | strausskahn - diallo - dominique - imf - strausskahns | 13 | 228_strausskahn_diallo_dominique_imf |
| 229 | konrardy - nygaard - olsen - berk - marine | 13 | 229_konrardy_nygaard_olsen_berk |
| 230 | adoption - gammy - gebregeorgis - surrogacy - thai | 13 | 230_adoption_gammy_gebregeorgis_surrogacy |
| 231 | cruise - illness - ill - outbreak - sickness | 13 | 231_cruise_illness_ill_outbreak |
| 232 | robertson - duck - dynasty - ae - phil | 12 | 232_robertson_duck_dynasty_ae |
| 233 | occupy - protester - wall - protest - demonstrator | 12 | 233_occupy_protester_wall_protest |
| 234 | rate - abortion - pregnancy - birth - teen | 12 | 234_rate_abortion_pregnancy_birth |
| 235 | alhilli - saad - mollier - alhillis - zaid | 12 | 235_alhilli_saad_mollier_alhillis |
| 236 | crash - scene - minibus - accident - davies | 12 | 236_crash_scene_minibus_accident |
| 237 | hollande - sarkozy - hollandes - socialist - pen | 12 | 237_hollande_sarkozy_hollandes_socialist |
| 238 | porn - filter - pornography - internet - iplayer | 12 | 238_porn_filter_pornography_internet |
| 239 | 3d - printer - printing - thermomix - print | 12 | 239_3d_printer_printing_thermomix |
| 240 | penguin - ness - loch - nessie - wildlife | 12 | 240_penguin_ness_loch_nessie |
| 241 | reef - coral - marine - stoupin - corals | 11 | 241_reef_coral_marine_stoupin |
| 242 | spider - insect - beetle - frog - spiders | 11 | 242_spider_insect_beetle_frog |
| 243 | bletchley - enigma - war - turing - code | 11 | 243_bletchley_enigma_war_turing |
| 244 | pollution - air - smog - beijing - quality | 11 | 244_pollution_air_smog_beijing |
| 245 | parachute - dause - ernie - ebbrell - jump | 10 | 245_parachute_dause_ernie_ebbrell |
| 246 | immigration - deportation - sham - iwueke - tate | 10 | 246_immigration_deportation_sham_iwueke |
| 247 | harris - rolf - indecent - 5480 - 4481 | 10 | 247_harris_rolf_indecent_5480 |
| 248 | factory - garment - bangladesh - dhaka - bangladeshi | 10 | 248_factory_garment_bangladesh_dhaka |
| 249 | nobel - prize - peace - karman - gbowee | 10 | 249_nobel_prize_peace_karman |
| 250 | ferry - sewol - jeju - ship - yoo | 10 | 250_ferry_sewol_jeju_ship |
| 251 | manson - atkins - tate - parole - statman | 10 | 251_manson_atkins_tate_parole |
| 252 | toyota - recall - toyotas - vehicle - acceleration | 9 | 252_toyota_recall_toyotas_vehicle |
| 253 | mortgage - rate - bank - cent - per | 9 | 253_mortgage_rate_bank_cent |
| 254 | smedley - rigby - ruth - coit - quesada | 9 | 254_smedley_rigby_ruth_coit |
| 255 | afghanistan - afghan - troop - karzai - abdullah | 9 | 255_afghanistan_afghan_troop_karzai |
| 256 | frozen - disney - elsa - cinderella - princess | 9 | 256_frozen_disney_elsa_cinderella |
| 257 | driving - wilkins - waller - magistrates - drinkdriving | 9 | 257_driving_wilkins_waller_magistrates |
| 258 | olympic - games - olympics - ceremony - london | 9 | 258_olympic_games_olympics_ceremony |
| 259 | neolithic - skull - timber - reitan - buried | 8 | 259_neolithic_skull_timber_reitan |
| 260 | philpott - mairead - willis - mick - fire | 8 | 260_philpott_mairead_willis_mick |
| 261 | holmes - clements - theater - colorado - aurora | 8 | 261_holmes_clements_theater_colorado |
| 262 | explosion - plant - fire - blast - fertilizer | 8 | 262_explosion_plant_fire_blast |
| 263 | tokyo - games - olympic - ioc - sochi | 8 | 263_tokyo_games_olympic_ioc |
| 264 | abortion - lobby - hobby - religious - supreme | 8 | 264_abortion_lobby_hobby_religious |
| 265 | cece - tulisa - cheryl - elimination - lakoda | 8 | 265_cece_tulisa_cheryl_elimination |
| 266 | dubai - mme - sheikh - uae - maktoum | 7 | 266_dubai_mme_sheikh_uae |
| 267 | space - virgin - galactic - spaceshiptwo - branson | 7 | 267_space_virgin_galactic_spaceshiptwo |
| 268 | oshie - hockey - shootout - russia - wagner | 7 | 268_oshie_hockey_shootout_russia |
| 269 | moghadam - avalos - image - chaney - nude | 7 | 269_moghadam_avalos_image_chaney |
| 270 | vell - roache - stuartcole - coronation - soap | 7 | 270_vell_roache_stuartcole_coronation |
| 271 | uber - taxi - hailo - driver - company | 7 | 271_uber_taxi_hailo_driver |
| 272 | mcdaniel - boo - mama - anna - honey | 6 | 272_mcdaniel_boo_mama_anna |
| 273 | rail - crossing - badauskas - train - minnis | 6 | 273_rail_crossing_badauskas_train |
| 274 | belghar - shafi - mevish - munir - ahmed | 6 | 274_belghar_shafi_mevish_munir |
| 275 | fred - knapke - hodgkins - carole - liam | 6 | 275_fred_knapke_hodgkins_carole |
| 276 | poppy - tower - war - memorial - ceramic | 6 | 276_poppy_tower_war_memorial |
| 277 | chiquita - colombia - colombian - cabral - marijuana | 6 | 277_chiquita_colombia_colombian_cabral |
| 278 | tb - virus - infection - measles - kalis | 6 | 278_tb_virus_infection_measles |
| 279 | sloan - saldanha - care - alvarez - saldanhas | 6 | 279_sloan_saldanha_care_alvarez |
| 280 | airboard - skyflash - hoverbike - catapult - skyprowler | 6 | 280_airboard_skyflash_hoverbike_catapult |
| 281 | ciancia - tsa - airport - hernandez - gerardo | 6 | 281_ciancia_tsa_airport_hernandez |
| 282 | heroin - addiction - opioids - addict - drug | 6 | 282_heroin_addiction_opioids_addict |
| 283 | euthanasia - pathway - assisted - die - suicide | 6 | 283_euthanasia_pathway_assisted_die |
| 284 | tower - elevator - lagoon - dubai - skyscraper | 6 | 284_tower_elevator_lagoon_dubai |
| 285 | firouzian - bus - tan - king - luther | 6 | 285_firouzian_bus_tan_king |
| 286 | carolyn - ian - fleming - morpurgo - couple | 5 | 286_carolyn_ian_fleming_morpurgo |
| 287 | tunisia - arab - egypt - tunisian - friaa | 5 | 287_tunisia_arab_egypt_tunisian |
| 288 | al - qaeda - libi - bin - laden | 5 | 288_al_qaeda_libi_bin |
| 289 | ear - keim - hear - implant - charlotte | 5 | 289_ear_keim_hear_implant |
| 290 | busch - driscoll - nascar - stewart - ward | 5 | 290_busch_driscoll_nascar_stewart |
| 291 | driscoll - masked - auckland - mortar - facebook | 5 | 291_driscoll_masked_auckland_mortar |
| 292 | drawer - bevan - avon - rothwell - leake | 5 | 292_drawer_bevan_avon_rothwell |
| 293 | breastfeeding - milk - clowes - breast - pump | 5 | 293_breastfeeding_milk_clowes_breast |
</details>
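The keyword representation of any topic listed above can be inspected by its ID; for example, topic 0 (the football coverage cluster):
```python
# Returns a list of (keyword, c-TF-IDF weight) pairs for topic 0
topic_model.get_topic(0)
```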
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.10.12
|
MattStammers/dqn-BreakoutNoFrameskip-v4
|
MattStammers
| 2023-08-19T21:25:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-19T21:24:12Z |
---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 220.60 +/- 71.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga MattStammers
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
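For use outside the RL Zoo CLI, a minimal sketch of loading and running the checkpoint with plain Stable-Baselines3 (the Hub filename is an assumption; check the repository for the actual .zip name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the training-time preprocessing: AtariWrapper + 4-frame stacking
env = VecFrameStack(make_atari_env("BreakoutNoFrameskip-v4", n_envs=1), n_stack=4)

# Filename assumed
checkpoint = load_from_hub("MattStammers/dqn-BreakoutNoFrameskip-v4", "dqn-BreakoutNoFrameskip-v4.zip")
model = DQN.load(checkpoint, env=env)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```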
|
SuaveDesciple/Phoneguyfnaf
|
SuaveDesciple
| 2023-08-19T21:19:25Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-19T21:18:31Z |
---
license: bigscience-openrail-m
---
|
YCHuang2112/poca-SoccerTwos
|
YCHuang2112
| 2023-08-19T21:06:18Z | 50 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"ML-Agents-SoccerTwos",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2023-08-17T19:25:25Z |
---
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: YCHuang2112/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
aratshimyanga/dqn-SpaceInvadersNoFrameskip-v4
|
aratshimyanga
| 2023-08-19T20:52:59Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-19T20:52:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 624.00 +/- 266.61
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aratshimyanga -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aratshimyanga -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga aratshimyanga
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
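To reproduce a mean-reward estimate comparable to the score reported above, the downloaded checkpoint can be evaluated with SB3's helper; a minimal sketch (the Hub filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

# Same preprocessing as training: AtariWrapper + 4-frame stacking
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

# Filename assumed
checkpoint = load_from_hub("aratshimyanga/dqn-SpaceInvadersNoFrameskip-v4", "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint, env=env)

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```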
|
SuaveDesciple/Jojo
|
SuaveDesciple
| 2023-08-19T20:45:01Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-19T20:39:34Z |
---
license: bigscience-openrail-m
---
|
VicBeltran/a2c-PandaReachDense-v3
|
VicBeltran
| 2023-08-19T20:33:13Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-19T19:43:35Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository for the actual .zip name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename is an assumption
checkpoint = load_from_hub("VicBeltran/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
edwsiew/setfit-finetuned-tech-sentiment-setfit-16-30-1
|
edwsiew
| 2023-08-19T20:30:33Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-19T20:30:13Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# edwsiew/setfit-finetuned-tech-sentiment-setfit-16-30-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("edwsiew/setfit-finetuned-tech-sentiment-setfit-16-30-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
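For reference, the two-step recipe described above can be sketched with the SetFit trainer. The snippet below is illustrative only: the dataset is a stand-in and the base Sentence Transformer and hyperparameters are assumptions, not the documented settings of this checkpoint.
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Stand-in data: replace with the actual labelled tech-sentiment examples
train_ds = load_dataset("SetFit/sst2", split="train").shuffle(seed=42).select(range(32))

# Base model assumed; the card does not state which Sentence Transformer was used
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    batch_size=16,
    num_iterations=30,                # text pairs generated per labelled example
    num_epochs=1,
)
trainer.train()                       # also fits the classification head (step 2)
```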
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Henil1/mt5-small-hindi-summary-hindi-summary
|
Henil1
| 2023-08-19T20:23:50Z | 66 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-19T19:59:11Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: Henil1/mt5-small-hindi-summary-hindi-summary
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Henil1/mt5-small-hindi-summary-hindi-summary
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
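In the absence of further documentation, a minimal inference sketch, assuming the standard mT5 sequence-to-sequence setup (the nan losses reported above suggest the checkpoint may not yet produce useful summaries):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Henil1/mt5-small-hindi-summary-hindi-summary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "यहाँ सारांश के लिए हिंदी लेख रखें।"  # placeholder Hindi input
inputs = tokenizer(text, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```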
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 13806, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan | nan | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Viktoria1178/Jasmine
|
Viktoria1178
| 2023-08-19T20:17:38Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:roneneldan/TinyStories",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-19T19:57:23Z |
---
license: bigscience-openrail-m
datasets:
- fka/awesome-chatgpt-prompts
- roneneldan/TinyStories
metrics:
- code_eval
library_name: adapter-transformers
---
|
amirhamza11/mBart-large_nwp_finetuning_test3
|
amirhamza11
| 2023-08-19T20:13:53Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-19T19:34:05Z |
---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: mBart-large_nwp_finetuning_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBart-large_nwp_finetuning_test3
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0028
- eval_runtime: 2.4878
- eval_samples_per_second: 209.418
- eval_steps_per_second: 26.529
- epoch: 5.0
- step: 2980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|