# Legal Data Corpus: Sample
This repository contains a sample of a high-quality, structured legal dataset curated by Legal Nodes. The corpus is designed for training and fine-tuning Large Language Models (LLMs) on complex legal reasoning, drafting, and regulatory compliance tasks.
The data spans multiple jurisdictions (UK, US, EU, UAE, Asia) and specialized areas of law, categorized by professional provenance tiers to ensure the highest standards of legal accuracy.
## 🛡️ Data Quality & Provenance Tiers
To ensure models learn from authoritative sources, every document is tagged with its provenance level:
- **Tier 1:** Content provided by a Partner or an Expert-level source.
- **Tier 2:** Associate-drafted content, reviewed by an Expert.
- **Tier 3:** Procedural and administrative legal documents provided by in-house counsel.
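Each record in the sample files carries a `trace_id`, a `metadata` struct (including `provenance_tier`, `area_of_law`, `jurisdiction`, and related fields), and a `data.text` field, so tier-based filtering is a one-liner. A minimal sketch in plain Python; the field names follow the dataset schema, but the sample records and the `"tier_1"` value string are illustrative assumptions:

```python
# Filter records by provenance tier -- a minimal sketch.
# Field names (trace_id, metadata.provenance_tier, data.text) follow the
# dataset schema; the records and tier value strings are stand-ins.

def filter_by_tier(records, tier):
    """Keep only records whose metadata.provenance_tier equals `tier`."""
    return [r for r in records if r["metadata"]["provenance_tier"] == tier]

sample_records = [
    {
        "trace_id": "a1",
        "metadata": {"provenance_tier": "tier_1", "area_of_law": "Corporate"},
        "data": {"text": "Partner-drafted memo ..."},
    },
    {
        "trace_id": "b2",
        "metadata": {"provenance_tier": "tier_3", "area_of_law": "Commercial"},
        "data": {"text": "License application ..."},
    },
]

tier_1 = filter_by_tier(sample_records, "tier_1")
print([r["trace_id"] for r in tier_1])  # -> ['a1']
```

The same pattern extends to any other `metadata` field, e.g. `area_of_law` or `jurisdiction`.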
## 🗂️ Taxonomy Breakdown
The corpus is organized into six primary categories, representing the lifecycle of legal work from raw research to final deliverables and consultation traces.
| Category | Description | Provenance Tier |
|---|---|---|
| 1. Expert Reasoning | Private memos, tax opinions, and regulatory analysis. | Tier 1 |
| 2. Final Documents | Final written contracts, corporate resolutions, and deliverables. | Tier 1 / 2 |
| 3. Instruction-Outcome | Request-to-Document chains and iterative drafting traces. | Tier 1 / 2 |
| 4. Procedural Artifacts | Filings, license applications (RFI responses), and procedural docs. | Tier 3 |
| 5. Legal Transcripts | Transcripts of consultations and client-lawyer communications. | Tier 1 / 2 |
| 6. Knowledge Bases | Summaries of legal articles, checklists, and statutory analysis. | Tier 1 / 2 |
## 🌍 Jurisdictional Coverage
The dataset provides a global perspective on legal artifacts, with volumes distributed across:
- UK & US: Primary focus for common law structures.
- Europe (EU-level + DE, FR, IT, PT, UA): Civil law and EU regulatory frameworks (GDPR, AI Act, MiCA).
- UAE & Asia: Regional compliance and corporate artifacts.
## ⚖️ Areas of Law
The corpus covers critical domains for modern legal-AI applications:
- Corporate: M&A, Incorporations, Term Sheets, Investment Documents.
- Commercial: Contracts, Service Agreements, Public Documents.
- Financial Regulatory: AI Act, MiCA, DORA, Licensing.
- Privacy & Data: GDPR compliance and data processing agreements.
- Tax & Banking: Specialized opinions and banking regulatory filings.
## 🛠️ Usage
This dataset is intended for:
- Fine-tuning legal reasoning models.
- Training for automated contract drafting and review.
- Regulatory compliance mapping (e.g., mapping business processes to the AI Act or GDPR).
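For fine-tuning, a record's `metadata` fields can be folded into a prompt header ahead of the document text. The dataset does not prescribe a prompt format; `build_prompt` below is a hypothetical helper that only illustrates combining `metadata.*` fields with `data.text` from the record schema:

```python
# Hypothetical prompt builder for fine-tuning. The dataset does not
# prescribe a format -- this only shows one way to combine metadata
# fields with the document text.

def build_prompt(record):
    meta = record["metadata"]
    header = (
        f"[jurisdiction: {', '.join(meta['jurisdiction'])}] "
        f"[area_of_law: {meta['area_of_law']}] "
        f"[tier: {meta['provenance_tier']}]"
    )
    return header + "\n" + record["data"]["text"]

# Illustrative record following the published schema (values are made up).
record = {
    "trace_id": "x1",
    "metadata": {
        "jurisdiction": ["UK"],
        "area_of_law": "Privacy & Data",
        "provenance_tier": "tier_2",
    },
    "data": {"text": "Data processing agreement between ..."},
}

print(build_prompt(record))
```

Keeping the header machine-parsable (bracketed key/value pairs) makes it easy to strip or vary the metadata conditioning during ablations.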
For inquiries regarding access to the full dataset or specific sub-sections, please contact the Legal Nodes data team at max@legalnodes.com.