UniRef90

Normalized FASTA shards of UniRef90 cluster representative sequences (90% sequence identity threshold).

Processed and uploaded by the MegaData post-download pipeline (internal repo). Original source: https://www.uniprot.org/help/uniref.

Statistics

Source files: 1
Shards: 189
Compressed shard bytes: 38.65 GiB (41,498,595,315)
Records (per-source manifest sum): 188,848,220
Residues (per-source manifest sum): 66,359,825,357
Aggregate manifest total_records: 188,848,220
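
These figures can be recomputed from the aggregate manifest. A minimal sketch, assuming the _MANIFEST.json schema (top-level total_records, total_shards, total_residues, plus a sources list whose entries carry per-source records and shards counts):

import json

from huggingface_hub import hf_hub_download

# Fetch only the aggregate manifest, not the shards.
path = hf_hub_download(
    repo_id="LiteFold/UniRef90",
    repo_type="dataset",
    filename="_MANIFEST.json",
)
with open(path) as f:
    manifest = json.load(f)

# Aggregate totals written by the pipeline.
print(manifest["total_records"], manifest["total_shards"], manifest["total_residues"])

# Per-source sums should agree with the aggregate totals
# (field names assumed from the manifest schema).
assert sum(s["records"] for s in manifest["sources"]) == manifest["total_records"]
assert sum(s["shards"] for s in manifest["sources"]) == manifest["total_shards"]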

Layout

.
├── _MANIFEST.json                      # aggregate manifest written by the pipeline
├── manifests/<source_slug>.json        # per-source manifest (records, residues, shards)
├── metadata/<source_slug>.records.jsonl  # per-record provenance
└── sequences/<source_slug>/shard-NNNNNN.fasta.zst

Each <source_slug> corresponds 1:1 to an upstream source archive, e.g. sequence_uniprotkb_uniprot_sprot.fasta.gz.
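
The available slugs can also be discovered from the repo file tree rather than hard-coded; a sketch with huggingface_hub's HfApi (the prefix filtering simply assumes the layout above):

from huggingface_hub import HfApi

files = HfApi().list_repo_files("LiteFold/UniRef90", repo_type="dataset")

# A slug is the directory component directly under sequences/.
slugs = sorted({f.split("/")[1] for f in files if f.startswith("sequences/")})
print(slugs)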

Loading

Download every shard of one source (replace <source_slug> with the directory of interest under sequences/), then decompress and inspect the first lines:

hf download LiteFold/UniRef90 --repo-type dataset \
  --include 'sequences/<source_slug>/shard-*.fasta.zst' \
  --local-dir ./uniref90
zstd -dc ./uniref90/sequences/<source_slug>/shard-*.fasta.zst | head

Programmatic streaming with zstandard:

from huggingface_hub import snapshot_download
from pathlib import Path
import zstandard as zstd

local = snapshot_download(
    repo_id="LiteFold/UniRef90",
    repo_type="dataset",
    allow_patterns=["sequences/*/shard-*.fasta.zst"],
)

dctx = zstd.ZstdDecompressor()
for shard in sorted(Path(local).rglob("shard-*.fasta.zst")):
    with shard.open("rb") as f, dctx.stream_reader(f) as reader:
        buf = b""
        while chunk := reader.read(1 << 20):
            buf += chunk
            # Keep the trailing partial line in buf for the next chunk.
            *lines, buf = buf.split(b"\n")
            for line in lines:
                ...  # naive splitter; swap in your FASTA parser
        if buf:
            ...  # last line, if the shard lacks a trailing newline
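
A minimal record-level parser that could stand in for the naive splitter above (a sketch, not part of the dataset tooling; it yields (header, sequence) pairs from a single shard):

import io
from pathlib import Path
from typing import Iterator

import zstandard as zstd

def iter_fasta(shard: Path) -> Iterator[tuple[str, str]]:
    """Yield (header, sequence) pairs from one zstd-compressed FASTA shard."""
    dctx = zstd.ZstdDecompressor()
    with shard.open("rb") as f, dctx.stream_reader(f) as reader:
        text = io.TextIOWrapper(reader, encoding="utf-8")
        header, seq = None, []
        for line in text:
            line = line.rstrip("\n")
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            elif line:
                seq.append(line)
        if header is not None:  # flush the final record
            yield header, "".join(seq)

Wrapping the decompressed byte stream in io.TextIOWrapper keeps memory flat regardless of shard size.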

License

CC BY 4.0 (UniProt Consortium).

Citation

Suzek BE, Wang Y, Huang H, McGarvey PB, Wu CH; UniProt Consortium. UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. Bioinformatics, 31(6):926-32, 2015.

Provenance

Built from the local manifest entry uniref90 of manifests/atlas_download_plan.json. Pipeline source: megadata-post normalize --dataset uniref90.
