---
license: agpl-3.0
task_categories:
- text-generation
language:
- en
tags:
- lua
- code
- ygo
size_categories:
- 10K<n<100K
---

# YGOPro Lua Code Generation Dataset

## Dataset Description

This dataset pairs Yu-Gi-Oh! card effects written in correct PSCT (Problem-Solving Card Text) with their corresponding YGOPro Lua script implementations. It is designed for training models to generate functional Lua code for the YGOPro (Yu-Gi-Oh! Pro) simulator from natural-language card effect descriptions.

## Dataset Structure

### Data Fields

- `instruction`: The task instruction (constant across all examples)
- `input`: Natural language description of the Yu-Gi-Oh! card effect  
- `output`: Corresponding YGOPro Lua script implementation
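Each record is a flat mapping of these three string fields. A schematic view of the layout (the values shown are placeholders, not actual dataset entries):

```python
# Schematic record layout; values are placeholders, not real entries.
record = {
    "instruction": "...",  # constant task instruction
    "input": "...",        # PSCT card effect text
    "output": "...",       # YGOPro Lua script implementation
}
```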

### Data Splits

- **Train**: 90% of available examples
- **Validation**: 10% of available examples

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("{lenarc/psct_lua}")

# Access train split
train_data = dataset["train"]

# Access validation split  
val_data = dataset["validation"]
```
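Once loaded, individual records can be inspected by index; this continues from the snippet above:

```python
# Print the fields of the first training example
example = train_data[0]
print(example["instruction"])
print(example["input"])   # natural-language card effect
print(example["output"])  # YGOPro Lua implementation
```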

### Example Usage for Fine-tuning

```python
# For training with Unsloth/transformers
from datasets import load_dataset

dataset = load_dataset("lenarc/psct_lua")

# The dataset is ready to use for instruction-following model training
# Each example has: instruction, input (card effect), output (Lua code)
```
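A minimal sketch of turning each record into a single prompt string for supervised fine-tuning; the Alpaca-style template and the `text` column name are illustrative choices, not a format prescribed by the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("lenarc/psct_lua")

def format_example(example):
    # Illustrative prompt template; adapt to your model's chat/prompt format.
    prompt = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}"
    )
    return {"text": prompt}

# Adds a "text" column that most SFT trainers can consume directly.
train_formatted = dataset["train"].map(format_example)
```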

## Dataset Creation

This dataset was created by collecting Yu-Gi-Oh! card effects and their corresponding YGOPro Lua implementations. The data has been formatted for instruction-following fine-tuning.

## Intended Use

- Fine-tuning language models for code generation
- Training models to convert natural language game rules to executable code
- Research in domain-specific code generation
- Educational purposes for learning Lua scripting for YGOPro

## License

This dataset is released under the GNU Affero General Public License v3.0 (AGPL-3.0), consistent with the YGOPro project licensing.

## Disclaimer

This dataset is for educational and research purposes. Yu-Gi-Oh! is a trademark of Konami Digital Entertainment.