---
license: cc-by-nc-4.0
configs:
- config_name: casename_classification
  data_files:
  - split: train
    path: casename_classification/train.jsonl
  - split: valid
    path: casename_classification/valid.jsonl
  - split: test
    path: casename_classification/test.jsonl
  - split: test2
    path: casename_classification/test2.jsonl
- config_name: casename_classification_plus
  data_files:
  - split: train
    path: casename_classification_plus/train.jsonl
  - split: valid
    path: casename_classification_plus/valid.jsonl
  - split: test
    path: casename_classification_plus/test.jsonl
- config_name: statute_classification
  data_files:
  - split: train
    path: statute_classification/train.jsonl
  - split: valid
    path: statute_classification/valid.jsonl
  - split: test
    path: statute_classification/test.jsonl
  - split: test2
    path: statute_classification/test2.jsonl
- config_name: statute_classification_plus
  data_files:
  - split: train
    path: statute_classification_plus/train.jsonl
  - split: valid
    path: statute_classification_plus/valid.jsonl
  - split: test
    path: statute_classification_plus/test.jsonl
- config_name: summarization
  data_files:
  - split: train
    path: summarization/train.jsonl
  - split: valid
    path: summarization/valid.jsonl
  - split: test
    path: summarization/test.jsonl
- config_name: summarization_plus
  data_files:
  - split: train
    path: summarization_plus/train.jsonl
  - split: valid
    path: summarization_plus/valid.jsonl
  - split: test
    path: summarization_plus/test.jsonl
- config_name: ljp_civil
  data_files:
  - split: train
    path: legal_judgement_prediction/civil/train.jsonl
  - split: valid
    path: legal_judgement_prediction/civil/valid.jsonl
  - split: test
    path: legal_judgement_prediction/civil/test.jsonl
  - split: test2
    path: legal_judgement_prediction/civil/test2.jsonl
- config_name: ljp_criminal
  data_files:
  - split: train
    path: legal_judgement_prediction/criminal/train.jsonl
  - split: valid
    path: legal_judgement_prediction/criminal/valid.jsonl
  - split: test
    path: legal_judgement_prediction/criminal/test.jsonl
  - split: test2
    path: legal_judgement_prediction/criminal/test2.jsonl
- config_name: precedent_corpus
  data_files:
  - split: train
    path: precedent_corpus/train.jsonl

---
# Dataset Card for `lbox_open`
## Dataset Description
- **Homepage:** `https://lbox.kr`
- **Repository:** `https://github.com/lbox-kr/lbox_open`
- **Point of Contact:** [Wonseok Hwang](mailto:wonseok.hwang@lbox.kr)

### Dataset Summary

`lbox_open` is a legal AI benchmark dataset built from Korean legal cases. It covers case-name classification, statute classification, legal judgement prediction (civil and criminal), case summarization, and a precedent corpus.

### Languages

Korean

### How to use
```python
from datasets import load_dataset
# casename classification task
data_cn = load_dataset("lbox/lbox_open", "casename_classification")
data_cn_plus = load_dataset("lbox/lbox_open", "casename_classification_plus")
# statute classification task
data_st = load_dataset("lbox/lbox_open", "statute_classification")
data_st_plus = load_dataset("lbox/lbox_open", "statute_classification_plus")
# legal judgement prediction tasks
data_ljp_criminal = load_dataset("lbox/lbox_open", "ljp_criminal")
data_ljp_civil = load_dataset("lbox/lbox_open", "ljp_civil")
# case summarization task
data_summ = load_dataset("lbox/lbox_open", "summarization")
data_summ_plus = load_dataset("lbox/lbox_open", "summarization_plus")
# precedent corpus
data_corpus = load_dataset("lbox/lbox_open", "precedent_corpus")
```
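Once a split is loaded, each example is a plain dictionary, so standard Python tooling works for quick inspection. The sketch below builds a label histogram for the case-name classification task; the records here are hypothetical stand-ins, and the field names (`facts`, `casename`) are assumptions to be checked against the actual dataset (e.g. via `data_cn["train"].column_names`).

```python
from collections import Counter

# Hypothetical stand-ins for records from data_cn["train"];
# the field names "facts" and "casename" are assumptions.
train_records = [
    {"facts": "...", "casename": "fraud"},
    {"facts": "...", "casename": "fraud"},
    {"facts": "...", "casename": "defamation"},
]

# Count how many training examples each case name has.
label_counts = Counter(rec["casename"] for rec in train_records)
print(label_counts.most_common())
```

With the real dataset, the same one-liner over `data_cn["train"]` gives a quick view of label imbalance before training.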

For more information about the dataset, please visit <https://github.com/lbox-kr/lbox_open>.

## Licensing Information
Copyright 2022-present [LBox Co. Ltd.](https://lbox.kr/)

Licensed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.