---
configs:
  - config_name: ner
    data_files: ner.parquet
    default: true
  - config_name: el
    data_files: el.parquet
  - config_name: re
    data_files: re.parquet
annotations_creators:
  - expert-generated
language_creators:
  - found
language:
  - en
license:
  - unknown
multilinguality:
  - monolingual
size_categories:
  - 100<n<1K
source_datasets:
  - original
task_categories:
  - text-classification
  - token-classification
task_ids:
  - named-entity-recognition
  - entity-linking-classification
  - multi-class-classification
pretty_name: Text2Tech Curated Documents
tags:
  - structure-prediction
  - technology
  - relation-extraction
  - entity-linking
  - named-entity-recognition
dataset_info:
  - config_name: ner
    features:
      - name: docid
        dtype: string
      - name: tokens
        dtype: string
      - name: ner_tags
        dtype: string
    splits:
      - name: train
        num_bytes: 917085
        num_examples: 135
    download_size: 190248
    dataset_size: 917085
  - config_name: el
    features:
      - name: docid
        dtype: string
      - name: tokens
        dtype: string
      - name: ner_tags
        dtype: string
      - name: entity_mentions
        dtype: string
    splits:
      - name: train
        num_bytes: 1807601
        num_examples: 135
    download_size: 345738
    dataset_size: 1807601
  - config_name: re
    features:
      - name: docid
        dtype: string
      - name: tokens
        dtype: string
      - name: ner_tags
        dtype: string
      - name: relations
        dtype: string
    splits:
      - name: train
        num_bytes: 1095051
        num_examples: 135
    download_size: 210872
    dataset_size: 1095051
---

# Dataset Card for Text2Tech Curated Documents

## Dataset Summary

This dataset is the result of converting a UIMA CAS 0.4 JSON export from the Inception annotation tool into a simplified format suitable for Natural Language Processing tasks. Specifically, it provides configurations for Named Entity Recognition (NER), Entity Linking (EL), and Relation Extraction (RE).

The conversion process utilized the dkpro-cassis library to load the original annotations and spaCy for tokenization. The final dataset is structured similarly to the DFKI-SLT/mobie dataset to ensure compatibility.

This version of the dataset loader provides configurations for:

- **Named Entity Recognition (`ner`)**: NER tags use spaCy's BILUO tagging scheme.
- **Entity Linking (`el`)**: Entity mentions are linked to external knowledge bases.
- **Relation Extraction (`re`)**: Relations between entities are annotated.

## Supported Tasks and Leaderboards

- **Tasks:** Named Entity Recognition, Entity Linking, Relation Extraction
- **Leaderboards:** More Information Needed

## Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

#### ner

An example of 'train' looks as follows.

```json
{
  "docid": "138",
  "tokens": ["\"", "Samsung", "takes", "aim", "at", "blood", "pressure",
             "monitoring", "with", "the", "Galaxy", "Watch", "Active", "..."],
  "ner_tags": [0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7, "..."]
}
```

#### el

An example of 'train' looks as follows.

```json
{
  "docid": "138",
  "tokens": ["\"", "Samsung", "takes", "aim", "at", "blood", "pressure",
             "monitoring", "with", "the", "Galaxy", "Watch", "Active", "..."],
  "ner_tags": [0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7, "..."],
  "entity_mentions": [
    {
      "text": "Samsung",
      "start": 1,
      "end": 2,
      "char_start": 1,
      "char_end": 8,
      "type": 0,
      "entity_id": "http://www.wikidata.org/entity/Q124989916"
    },
    "..."
  ]
}
```

#### re

An example of 'train' looks as follows.

```json
{
  "docid": "138",
  "tokens": ["\"", "Samsung", "takes", "aim", "at", "blood", "pressure",
             "monitoring", "with", "the", "Galaxy", "Watch", "Active", "..."],
  "ner_tags": [0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7, "..."],
  "relations": [
    {
      "id": "138-0",
      "head_start": 706,
      "head_end": 708,
      "head_type": 2,
      "tail_start": 706,
      "tail_end": 708,
      "tail_type": 2,
      "type": 0
    },
    "..."
  ]
}
```

### Data Fields

#### ner

- `docid`: A string feature representing the document identifier.
- `tokens`: A list of string features representing the tokens in the document.
- `ner_tags`: A list of classification labels using spaCy's BILUO tagging scheme:
  - `B-` (Begin): first token of a multi-token entity
  - `I-` (Inside): inner token of a multi-token entity
  - `L-` (Last): final token of a multi-token entity
  - `U-` (Unit): single-token entity
  - `O` (Outside): non-entity token

The mapping from tag to ID is as follows:

```json
{
  "O": 0,
  "U-Organization": 1,
  "B-Method": 2,
  "I-Method": 3,
  "L-Method": 4,
  "B-Technological System": 5,
  "I-Technological System": 6,
  "L-Technological System": 7,
  "U-Technological System": 8,
  "U-Method": 9,
  "B-Material": 10,
  "L-Material": 11,
  "I-Material": 12,
  "B-Organization": 13,
  "L-Organization": 14,
  "I-Organization": 15,
  "U-Material": 16,
  "B-Technical Field": 17,
  "L-Technical Field": 18,
  "I-Technical Field": 19,
  "U-Technical Field": 20
}
```
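To recover entity spans from the integer tags, invert this mapping and walk the BILUO sequence. A minimal pure-Python sketch (the mapping is abridged to the labels that appear in the `ner` example above; `biluo_to_spans` is an illustrative helper, not part of this dataset's tooling):

```python
# Inverse of the tag-to-ID mapping above, abridged to the example's labels.
id2tag = {
    0: "O", 1: "U-Organization",
    2: "B-Method", 3: "I-Method", 4: "L-Method",
    5: "B-Technological System", 6: "I-Technological System",
    7: "L-Technological System",
}

def biluo_to_spans(tag_ids):
    """Decode BILUO tag ids into (start, end, label) token spans, end exclusive."""
    spans, start = [], None
    for i, tid in enumerate(tag_ids):
        tag = id2tag[tid]
        if tag == "O":
            continue
        prefix, label = tag.split("-", 1)
        if prefix == "U":        # single-token entity
            spans.append((i, i + 1, label))
        elif prefix == "B":      # entity begins
            start = i
        elif prefix == "L":      # entity ends
            spans.append((start, i + 1, label))
            start = None
    return spans

tokens = ['"', "Samsung", "takes", "aim", "at", "blood", "pressure",
          "monitoring", "with", "the", "Galaxy", "Watch", "Active"]
spans = biluo_to_spans([0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7])
for start, end, label in spans:
    print(" ".join(tokens[start:end]), "->", label)
# Samsung -> Organization
# blood pressure monitoring -> Method
# Galaxy Watch Active -> Technological System
```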

#### el

- `docid`: A string feature representing the document identifier.
- `tokens`: A list of string features representing the tokens in the document.
- `ner_tags`: A list of classification labels, corresponding to the NER task.
- `entity_mentions`: A list of struct features containing:
  - `text`: a string feature.
  - `start`: token offset start, an int32 feature.
  - `end`: token offset end, an int32 feature.
  - `char_start`: character offset start, an int32 feature.
  - `char_end`: character offset end, an int32 feature.
  - `type`: a classification label. The mapping from entity type to ID is as follows:

    ```json
    {
      "Organization": 0,
      "Method": 1,
      "Technological System": 2,
      "Material": 3,
      "Technical Field": 4
    }
    ```

  - `entity_id`: a string feature representing the entity identifier from a knowledge base.
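Since the card declares `entity_mentions` as a string feature, each document's mention list is presumably serialized as JSON. A hedged sketch of decoding one record and resolving its `type` id against the mapping above (the sample string mirrors the `el` example on this card; whether decoding is needed depends on how the parquet files actually store the column):

```python
import json

# Entity-type mapping from the card, inverted for id -> name lookup.
type2id = {"Organization": 0, "Method": 1, "Technological System": 2,
           "Material": 3, "Technical Field": 4}
id2type = {v: k for k, v in type2id.items()}

# One serialized mention list, shaped like the `el` example above.
raw = ('[{"text": "Samsung", "start": 1, "end": 2, "char_start": 1, '
       '"char_end": 8, "type": 0, '
       '"entity_id": "http://www.wikidata.org/entity/Q124989916"}]')

mentions = json.loads(raw)
for m in mentions:
    print(m["text"], "->", id2type[m["type"]], m["entity_id"])
# Samsung -> Organization http://www.wikidata.org/entity/Q124989916
```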

#### re

- `docid`: A string feature representing the document identifier.
- `tokens`: A list of string features representing the tokens in the document.
- `ner_tags`: A list of classification labels, corresponding to the NER task.
- `relations`: A list of struct features containing:
  - `id`: a string feature representing the relation identifier.
  - `head_start`: token offset start of the head entity, an int32 feature.
  - `head_end`: token offset end of the head entity, an int32 feature.
  - `head_type`: a classification label for the head entity type.
  - `tail_start`: token offset start of the tail entity, an int32 feature.
  - `tail_end`: token offset end of the tail entity, an int32 feature.
  - `tail_type`: a classification label for the tail entity type.
  - `type`: a classification label for the relation type. The mapping from relation type to ID is as follows:

    ```json
    {
      "ts:executes": 0,
      "org:develops_or_provides": 1,
      "ts:contains": 2,
      "ts:made_of": 3,
      "ts:uses": 4,
      "ts:supports": 5,
      "met:employs": 6,
      "met:processes": 7,
      "mat:transformed_to": 8,
      "org:collaborates": 9,
      "met:creates": 10,
      "met:applied_to": 11,
      "ts:processes": 12
    }
    ```
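The integer `type` on each relation resolves against this mapping, and `head_type`/`tail_type` resolve against the entity-type mapping in the `el` section. A small sketch using the `re` example from this card:

```python
# Relation-type mapping from the card, inverted for id -> label lookup.
rel2id = {"ts:executes": 0, "org:develops_or_provides": 1, "ts:contains": 2,
          "ts:made_of": 3, "ts:uses": 4, "ts:supports": 5, "met:employs": 6,
          "met:processes": 7, "mat:transformed_to": 8, "org:collaborates": 9,
          "met:creates": 10, "met:applied_to": 11, "ts:processes": 12}
id2rel = {v: k for k, v in rel2id.items()}

# The relation record from the `re` example above.
relation = {"id": "138-0", "head_start": 706, "head_end": 708, "head_type": 2,
            "tail_start": 706, "tail_end": 708, "tail_type": 2, "type": 0}
label = id2rel[relation["type"]]
print(relation["id"], "->", label)  # 138-0 -> ts:executes
```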

### Data Splits

The dataset has a single `train` split containing 135 documents in each configuration (`ner`, `el`, and `re`). There are no validation or test splits.

## Dataset Creation

The dataset was created by converting JSON files exported from the Inception annotation tool. The `inception_converter.py` script was used to process these files: it uses the dkpro-cassis library to load the UIMA CAS JSON data, and spaCy for tokenization and for producing the BILUO tags for the NER task. The data was then split into three separate files for the NER, EL, and RE tasks.
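One step in such a conversion is aligning the character-offset annotations stored in the CAS with the token offsets the tokenizer produces. The helper below is illustrative only, not taken from `inception_converter.py`; it assumes token boundaries come from spaCy-style `(start_char, end_char)` pairs:

```python
def char_span_to_token_span(token_offsets, char_start, char_end):
    """Map a (char_start, char_end) annotation onto token indices.

    token_offsets: list of (start, end) character offsets per token,
    e.g. from spaCy: [(t.idx, t.idx + len(t)) for t in doc].
    Returns (token_start, token_end) with token_end exclusive, or None
    if the annotation does not align with token boundaries.
    """
    token_start = token_end = None
    for i, (s, e) in enumerate(token_offsets):
        if s == char_start:
            token_start = i
        if e == char_end:
            token_end = i + 1
    if token_start is None or token_end is None:
        return None  # misaligned span; a real converter would log or repair it
    return token_start, token_end

# '"Samsung takes aim' tokenizes to '"', 'Samsung', 'takes', 'aim'
offsets = [(0, 1), (1, 8), (9, 14), (15, 18)]
print(char_span_to_token_span(offsets, 1, 8))  # (1, 2), i.e. the token "Samsung"
```

This matches the `el` configuration above, where the "Samsung" mention carries both `char_start`/`char_end` of 1/8 and token `start`/`end` of 1/2.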

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

Amir Safari

### Licensing Information

The license for this dataset is currently unknown.

### Citation Information

If you use this dataset, please cite:

```bibtex
@misc{text2tech_curated_documents,
  author    = {Amir Safari},
  title     = {Text2Tech Curated Documents},
  year      = {2025},
  publisher = {Hugging Face}
}
```