
DCD arXiv Parse Demo Dataset

A demo dataset of raw PDFs from sources such as arxiv, libgen, and xiaoyi, together with their MinerU parse results, stored in LanceDB format.

Each data source contains five LanceDB datasets plus one platform metadata file:

Dataset / file        Contents
pdfs.lance            Raw PDF bytes plus an optional appended layout PDF (id, pdf_bytes, sha256)
pdf_labels.lance      PDF-side metadata and annotations (id, info JSON, tags)
text.lance            MinerU Markdown body text plus info (pdf_ids, image_ids, etc.)
images.lance          Raw image bytes (id / sha256 is the digest of the image bytes)
image_labels.lance    Intrinsic image fields (info holds width/height, channels, byte count)
dataset.yaml          Dataset metadata for the DCD platform

Note: each *.lance entry is a directory, not a single file; this is LanceDB's storage format.
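Since each row in pdfs.lance carries both the raw bytes and their digest, integrity is easy to check with the standard library. A minimal sketch; the row dict below is fabricated for illustration, real rows come from pdfs.lance:

```python
import hashlib

def verify_row(row: dict) -> bool:
    """Return True if the sha256 column matches the SHA-256 hex digest of pdf_bytes."""
    return hashlib.sha256(row["pdf_bytes"]).hexdigest() == row["sha256"]

# Hypothetical row, standing in for one taken from pdfs.lance.
pdf_bytes = b"%PDF-1.5\n..."
row = {
    "id": "demo-0",
    "pdf_bytes": pdf_bytes,
    "sha256": hashlib.sha256(pdf_bytes).hexdigest(),
}
print(verify_row(row))  # True
```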

Dataset overview

Source   Rows (approx.)   Original batch
arxiv    762              arxiv_20251203
libgen   88               libgen_scihub_1026_all_0930
xiaoyi   436              knowledge_xiaoyi_books_all_1114_3_2M

Repository structure

dcd-pdf-parse-demo/
├── arxiv/
├── libgen/
└── xiaoyi/
    (each subdirectory has the same layout)
    ├── pdfs.lance/          ← raw PDFs + layout (LanceDB directory)
    ├── pdf_labels.lance/    ← metadata and annotations (LanceDB directory)
    ├── text.lance/          ← Markdown body text (LanceDB directory)
    ├── images.lance/        ← image bytes (LanceDB directory)
    ├── image_labels.lance/  ← image annotations (LanceDB directory)
    └── dataset.yaml         ← dataset metadata

Direct access (without downloading the full dataset)

pip install pylance huggingface_hub

import lance

# Fetch on demand; size varies with the images and PDFs, so open each table separately
base = "hf://datasets/KuoKuoYeah/dcd-pdf-parse-demo/arxiv"

pdfs = lance.dataset(f"{base}/pdfs.lance")
print(pdfs.count_rows())
sample = pdfs.take([0, 1, 2]).to_pydict()

labels = lance.dataset(f"{base}/pdf_labels.lance")
print(labels.schema)

text_ds = lance.dataset(f"{base}/text.lance")
print(text_ds.count_rows())

images = lance.dataset(f"{base}/images.lance")
img_labels = lance.dataset(f"{base}/image_labels.lance")
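Note that `to_pydict()` returns a column-major mapping (one list per column). A small helper, not part of any API and shown here only as a sketch, flips it into per-row dicts so you can, for example, write each PDF out under its id:

```python
def to_rows(columns: dict) -> list[dict]:
    """Flip a column-major dict (as returned by Table.to_pydict()) into row dicts."""
    names = list(columns)
    return [dict(zip(names, values)) for values in zip(*columns.values())]

# Hypothetical columns, shaped like pdfs.take([0, 1]).to_pydict() output.
columns = {
    "id": ["a", "b"],
    "pdf_bytes": [b"%PDF-1.4 ...", b"%PDF-1.6 ..."],
    "sha256": ["...", "..."],
}
for row in to_rows(columns):
    print(row["id"], len(row["pdf_bytes"]))
    # e.g. open(f"{row['id']}.pdf", "wb").write(row["pdf_bytes"])
```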

Local access after a full download

from huggingface_hub import snapshot_download
import lance

local_dir = snapshot_download(
    repo_id="KuoKuoYeah/dcd-pdf-parse-demo",
    repo_type="dataset",
    local_dir="/tmp/dcd-pdf-parse-demo",
)

source = "arxiv"
root = f"{local_dir}/{source}"
pdfs = lance.dataset(f"{root}/pdfs.lance")
labels = lance.dataset(f"{root}/pdf_labels.lance")
text_ds = lance.dataset(f"{root}/text.lance")
images = lance.dataset(f"{root}/images.lance")
img_labels = lance.dataset(f"{root}/image_labels.lance")
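Per the table above, pdf_labels.lance stores an id plus an info JSON string and a tags list, so once its columns are in memory you can filter ids by tag with plain Python. A sketch under that schema assumption; the column data and tag values below are fabricated:

```python
import json

def ids_with_tag(ids: list, tags_col: list, wanted: str) -> list:
    """Return the ids whose tags list contains the wanted tag (None tags are skipped)."""
    return [i for i, tags in zip(ids, tags_col) if wanted in (tags or [])]

# Hypothetical columns, shaped like a pdf_labels table converted via to_pydict().
ids = ["a", "b", "c"]
tags_col = [["scanned"], ["born-digital", "ocr"], None]
print(ids_with_tag(ids, tags_col, "ocr"))  # ['b']

# info is a JSON string; parse it per row as needed.
info = json.loads('{"pages": 12}')
print(info["pages"])  # 12
```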

Registering with the DCD platform for visualization (internal users)

# 1. Download the data locally
huggingface-cli download KuoKuoYeah/dcd-pdf-parse-demo \
  --repo-type dataset --local-dir /tmp/dcd-pdf-parse-demo

# 2. Create a symbolic link under DCD's datasets/ directory
ln -s /tmp/dcd-pdf-parse-demo/arxiv /path/to/adp/datasets/arxiv-parse-demo

After starting the DCD server, visit http://localhost:8000/datasets/arxiv-parse-demo
