|
---
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
size_categories:
- 10M<n<100M
---
|
|
|
# `wikipedia_en` |
|
|
|
This is a curated English Wikipedia dataset for use with the [II-Commons](https://github.com/Intelligent-Internet/II-Commons) project.
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
This dataset comprises curated English Wikipedia pages, sourced directly from the official English Wikipedia database dump. We extract the pages, chunk them into smaller pieces, and embed each chunk using [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0). All embeddings are stored as 16-bit half-precision vectors optimized for `cosine` indexing with [vectorchord](https://github.com/tensorchord/vectorchord).
|
|
|
### Dataset Sources |
|
|
|
This dataset is based on the [Wikipedia dumps](https://dumps.wikimedia.org/). See the Wikimedia [legal page](https://dumps.wikimedia.org/legal.html) for the license governing the page data.
|
|
|
## Dataset Structure |
|
|
|
1. Metadata Table

- id: A unique identifier for the page.
- revid: The revision ID of the page.
- url: The URL of the page.
- title: The title of the page.
- ignored: Whether the page is ignored.
- created_at: The creation time of the page.
- updated_at: The last update time of the page.

2. Chunking Table

- id: A unique identifier for the chunk.
- title: The title of the source page.
- url: The URL of the source page.
- source_id: The id of the source page in the metadata table.
- chunk_index: The index of the chunk within its page.
- chunk_text: The text of the chunk.
- vector: The vector embedding of the chunk.
- created_at: The creation time of the chunk.
- updated_at: The last update time of the chunk.
|
|
|
## Prerequisites
|
|
|
PostgreSQL 17 with the [vectorchord](https://github.com/tensorchord/VectorChord) and [pg_search](https://github.com/paradedb/paradedb/tree/dev/pg_search) extensions.
|
|
|
The easiest way is to use our [Docker image](https://github.com/Intelligent-Internet/II-Commons/tree/main/examples/db), or build your own. Then load the [psql_basebackup](https://huggingface.co/datasets/Intelligent-Internet/wikipedia_en/tree/psql_basebackup) branch, following the [Quick Start](https://github.com/Intelligent-Internet/II-Commons?tab=readme-ov-file#quick-start) guide.
|
|
|
Ensure the extensions are enabled: connect to the database with psql and run the following SQL:
|
|
|
```sql
CREATE EXTENSION IF NOT EXISTS vchord CASCADE;
CREATE EXTENSION IF NOT EXISTS pg_search CASCADE;
```
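To verify that both extensions are installed, you can query the `pg_extension` catalog (a quick sanity check; version numbers will vary):

```sql
-- Both extensions should be listed once they are enabled.
SELECT extname, extversion
FROM pg_extension
WHERE extname IN ('vchord', 'pg_search');
```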
|
|
|
## Uses |
|
|
|
This dataset supports a wide range of applications; the tables and indexes below enable both vector similarity (semantic) search and BM25 full-text search.
|
|
|
Here is a demo of how to use the dataset with [II-Commons](https://github.com/Intelligent-Internet/II-Commons). |
|
|
|
### Create the metadata and chunking tables in PostgreSQL |
|
|
|
```sql
CREATE TABLE IF NOT EXISTS ts_wikipedia_en (
    id BIGSERIAL PRIMARY KEY,
    revid BIGINT NOT NULL,
    url VARCHAR NOT NULL,
    title VARCHAR NOT NULL DEFAULT '',
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ignored BOOLEAN NOT NULL DEFAULT FALSE
);

CREATE TABLE IF NOT EXISTS ts_wikipedia_en_embed (
    id BIGSERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    url VARCHAR NOT NULL,
    chunk_index BIGINT NOT NULL,
    chunk_text VARCHAR NOT NULL,
    source_id BIGINT NOT NULL,
    vector halfvec(768) DEFAULT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```
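Optionally, confirm that both tables were created with the expected columns by querying `information_schema` (or use `\d` in psql):

```sql
-- List the columns of both tables to confirm the schema.
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_name IN ('ts_wikipedia_en', 'ts_wikipedia_en_embed')
ORDER BY table_name, ordinal_position;
```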
|
|
|
### Load CSV files into the database
|
|
|
1. Load the dataset from the local file system into a remote PostgreSQL server (client-side `\copy`):
|
|
|
```sql
\copy ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/0000000.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/0000001.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/0000002.csv' CSV HEADER;
...
```
|
|
|
2. Or, load the dataset from the PostgreSQL server's own file system (server-side `COPY`; paths are resolved on the server):
|
|
|
```sql
COPY ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
COPY ts_wikipedia_en_embed FROM 'data/chunks/0000000.csv' CSV HEADER;
COPY ts_wikipedia_en_embed FROM 'data/chunks/0000001.csv' CSV HEADER;
COPY ts_wikipedia_en_embed FROM 'data/chunks/0000002.csv' CSV HEADER;
...
```
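After loading, a quick row-count check helps confirm that all CSV files were imported (the exact counts depend on the dump snapshot you loaded):

```sql
-- Pages in the metadata table and chunks in the embedding table.
SELECT count(*) AS pages FROM ts_wikipedia_en;
SELECT count(*) AS chunks FROM ts_wikipedia_en_embed;
-- Chunks still missing an embedding, if any.
SELECT count(*) AS missing_vectors FROM ts_wikipedia_en_embed WHERE vector IS NULL;
```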
|
|
|
### Create Indexes |
|
|
|
Create the following indexes for the best query performance.
|
|
|
The `vector` column is a `halfvec(768)` column: a 16-bit half-precision vector optimized for `cosine` indexing with [vectorchord](https://github.com/tensorchord/vectorchord). See the [vectorchord indexing documentation](https://docs.vectorchord.ai/vectorchord/usage/indexing.html) for more details on the vector index.
|
|
|
1. Create the metadata table indexes:
|
|
|
```sql
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_revid_index ON ts_wikipedia_en (revid);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_url_index ON ts_wikipedia_en (url);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_title_index ON ts_wikipedia_en (title);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_ignored_index ON ts_wikipedia_en (ignored);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_created_at_index ON ts_wikipedia_en (created_at);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_updated_at_index ON ts_wikipedia_en (updated_at);
```
|
2. Create the chunking table indexes:
|
|
|
```sql
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_id_index ON ts_wikipedia_en_embed (source_id);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_index_index ON ts_wikipedia_en_embed (chunk_index);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_text_index ON ts_wikipedia_en_embed USING bm25 (id, title, chunk_text) WITH (key_field='id');
CREATE UNIQUE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_index ON ts_wikipedia_en_embed (source_id, chunk_index);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_index ON ts_wikipedia_en_embed USING vchordrq (vector halfvec_cosine_ops) WITH (options = $$
[build.internal]
lists = [20000]
build_threads = 6
spherical_centroids = true
$$);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_null_index ON ts_wikipedia_en_embed (vector) WHERE vector IS NULL;
SELECT vchordrq_prewarm('ts_wikipedia_en_embed_vector_index');
```
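With the vector index built and prewarmed, nearest-neighbor queries run directly against the chunking table. A minimal sketch, assuming `$1` is bound to a 768-dimension query embedding produced by the same [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) model (`<=>` is the pgvector cosine-distance operator that `halfvec_cosine_ops` accelerates):

```sql
-- Sketch: top-10 chunks by cosine distance to a query embedding ($1).
SELECT id, title, chunk_text, vector <=> $1 AS cosine_distance
FROM ts_wikipedia_en_embed
ORDER BY vector <=> $1
LIMIT 10;
```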
|
|
|
### Query with II-Commons |
|
|
|
See the [II-Commons](https://github.com/Intelligent-Internet/II-Commons) documentation to learn how to query the dataset.
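If you want to query the tables directly instead, the bm25 index created above also supports keyword search in plain SQL. A sketch, assuming pg_search's `@@@` match operator and `paradedb.score()` relevance function:

```sql
-- Sketch: top-10 chunks by BM25 relevance for a keyword query.
SELECT id, title, paradedb.score(id) AS bm25_score
FROM ts_wikipedia_en_embed
WHERE chunk_text @@@ 'artificial intelligence'
ORDER BY bm25_score DESC
LIMIT 10;
```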
|
|