---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  - name: embeddings
    sequence: float32
  splits:
  - name: train
    num_bytes: 37322589
    num_examples: 10000
  download_size: 40747210
  dataset_size: 37322589
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dummy dataset is used for testing purposes for the RAG model in `transformers`. It is produced via the following steps:
```python
import datasets

dataset = datasets.load_dataset(
    "wiki_dpr",
    with_embeddings=True,
    with_index=True,
    index_name="exact",
    embeddings_name="nq",
    dummy=True,
    revision=None,
)
# push_to_hub cannot serialize an attached FAISS index, so detach it first;
# the index file itself is uploaded separately (see below).
dataset["train"].drop_index("embeddings")
dataset.push_to_hub("hf-internal-testing/wiki_dpr_dummy", token="...")
```
The index file `index.faiss` (after being renamed locally) is then uploaded manually.
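
For reference, here is a minimal sketch (not part of the production steps above) of how a test might consume this dataset, re-attaching the uploaded index via the `datasets` FAISS API; the 768-dimensional all-ones query vector is purely illustrative.

```python
import numpy as np
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Load the dummy passages (id, text, title, embeddings).
ds = load_dataset("hf-internal-testing/wiki_dpr_dummy", split="train")

# Fetch the manually uploaded FAISS index from the dataset repo.
index_path = hf_hub_download(
    repo_id="hf-internal-testing/wiki_dpr_dummy",
    filename="index.faiss",
    repo_type="dataset",
)

# Re-attach the index under the name RAG expects ("embeddings").
ds.load_faiss_index("embeddings", index_path)

# Nearest-neighbour search with a dummy DPR-sized query (illustrative only).
query = np.ones(768, dtype=np.float32)
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
print(retrieved["title"])
```

An indexed dataset prepared this way can then, for example, be handed to `RagRetriever.from_pretrained` in `transformers` with `index_name="custom"` and `indexed_dataset=ds`.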