---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: null
pretty_name: Text360 Sample Dataset
tags:
- text-classification
- arxiv
- wikipedia
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - dir1/subdir1/s1.jsonl
    - dir2/subdir2/s2.jsonl
---
# Dataset Card for Text360 Sample Dataset
## Dataset Description
- **Repository:** [Add your repository URL here]
- **Paper:** [Add paper URL if applicable]
- **Point of Contact:** [Add contact information]
### Dataset Summary
This dataset contains text samples from two sources (arXiv and Wikipedia) organized in a hierarchical directory structure. Each sample includes a `text` field and a `subset` identifier.
### Data Files Structure
The dataset maintains its original directory structure:
```
.
├── dir1/
│ └── subdir1/
│       └── s1.jsonl # Contains arXiv samples
└── dir2/
└── subdir2/
        └── s2.jsonl # Contains Wikipedia samples
```
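As a minimal sketch (assuming the repository has been cloned locally so the relative paths above resolve), the JSONL files can be loaded with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Paths mirror the data_files declared in the YAML header above.
data_files = {
    "train": [
        "dir1/subdir1/s1.jsonl",
        "dir2/subdir2/s2.jsonl",
    ]
}

# The generic "json" builder reads JSON Lines files natively.
ds = load_dataset("json", data_files=data_files)
print(ds)  # DatasetDict with a single "train" split
```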
### Data Fields
Each JSONL file contains records with the following fields:
- `text` (string): the main text content
- `subset` (string): source identifier, either `"arxiv"` or `"wikipedia"`
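Because each file is plain JSON Lines, a record can also be inspected without any special tooling. A small sketch using only the standard library (the path assumes a local checkout):
```python
import json

# Each line of a JSONL file is a standalone JSON object with the two fields above.
with open("dir1/subdir1/s1.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

print(record["text"][:60])  # main text content (truncated for display)
print(record["subset"])     # source identifier: "arxiv" or "wikipedia"
```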
### Data Splits
All data is included in the train split, distributed across the JSONL files in their respective directories.
### Example Instance
```json
{
"text": "This is a long text sample from arxiv about quantum computing...",
"subset": "arxiv"
}
```
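Since the card targets text classification, one plausible preprocessing step is to encode `subset` as an integer label. The mapping below is an illustration, not part of the released data:
```python
from datasets import ClassLabel, load_dataset

ds = load_dataset(
    "json",
    data_files={"train": ["dir1/subdir1/s1.jsonl", "dir2/subdir2/s2.jsonl"]},
)["train"]

# Hypothetical label scheme: classify a sample by its source corpus.
labels = ClassLabel(names=["arxiv", "wikipedia"])
ds = ds.map(lambda ex: {"label": labels.str2int(ex["subset"])})
print(ds[0]["label"])  # 0 for "arxiv", 1 for "wikipedia"
```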
## Additional Information
### Dataset Creation
The dataset preserves its original directory layout: each JSONL file keeps its source location and format, and contains text samples drawn from arXiv and Wikipedia.
### Curation Rationale
The dataset was created to provide a sample of text data from different sources for text classification tasks.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from two sources:
1. arXiv papers
2. Wikipedia articles
#### Who are the source language producers?
- arXiv: Academic researchers and scientists
- Wikipedia: Community contributors
### Annotations
#### Annotation process
No additional annotations were added to the source data.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset can be used for educational and research purposes in text classification tasks.
### Discussion of Biases
The dataset may contain biases inherent to the source materials (arXiv papers and Wikipedia articles).
### Other Known Limitations
The dataset is a small sample and may not be representative of all content from the source materials.
### Dataset Curators
[Add curator information]
### Licensing Information
This dataset is released under the MIT License.
### Citation Information
[Add citation information]
### Contributions
[Add contribution information]
### Contact
[Add contact information] |