---
license: mit
language:
- vi
tags:
- question-generation
- t5
- vit5
- squad-format
- vietnamese
- education
- nlp
pretty_name: Vietnamese Question Generation
size_categories:
- 10K<n<100K
---
# HVU_QA
**HVU_QA** is a project dedicated to sharing datasets and tools for **Vietnamese question generation**, a task in **Natural Language Processing (NLP)**, developed and maintained by the research team at **Hung Vuong University (HVU), Phu Tho, Vietnam**.
The project is supported by Hung Vuong University with the aim of advancing research and applications in low-resource language processing, particularly for Vietnamese.
---
## 📚 Overview
This repository enables you to:
1. Fine-tune the [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) model on your own question generation (QG) dataset.
2. Generate multiple, diverse questions given a user-provided text passage (context).
---
## 📁 Datasets
* Built following the **SQuAD v2.0 standard**, ensuring compatibility with NLP pipelines.
* Includes tens of thousands of high-quality **Question–Context–Answer triples (QCA)**.
* Suitable for both **training** and **evaluation**.
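For reference, a single record in this SQuAD v2.0 layout looks roughly like the sketch below. The Vietnamese passage and question are illustrative only, and the exact field layout of `30ktrain.json` may differ slightly; the sketch follows the public SQuAD v2.0 specification.
```python
import json

# Illustrative SQuAD v2.0-style record (content invented for demonstration).
sample = {
    "version": "v2.0",
    "data": [
        {
            "title": "Cà phê sữa đá",
            "paragraphs": [
                {
                    "context": "Cà phê sữa đá là một thức uống nổi tiếng của Việt Nam.",
                    "qas": [
                        {
                            "id": "demo-0001",
                            "question": "Thức uống nào nổi tiếng của Việt Nam?",
                            "answers": [{"text": "Cà phê sữa đá", "answer_start": 0}],
                            "is_impossible": False,
                        }
                    ],
                }
            ],
        }
    ],
}

print(json.dumps(sample, ensure_ascii=False, indent=2))
```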
---
## 📁 Vietnamese Question Generation Tool
A **command-line tool** for:
* **Fine-tuning** a question generation model.
* **Automatically generating questions** from Vietnamese text.
Built on **Hugging Face Transformers (VietAI/vit5-base)** and **PyTorch**.
---
## Features
* Fine-tune a question generation model with SQuAD v2.0 format data.
* Generate diverse and creative questions from text passages.
* Flexible generation parameters (`top-k`, `top-p`, `temperature`, etc.).
* Simple command-line usage.
* GPU support if available.
---
## 📊 Evaluation Results
We conducted both **manual evaluation** (500 samples) and **automatic evaluation** (1,000 samples).
| Evaluation Type | Precision | Recall | F1-Score |
|------------------|-----------|--------|----------|
| Automatic (1000) | 0.85 | 0.83 | 0.84 |
| Manual (500) | 0.88 | 0.86 | 0.87 |
➡️ The model generates diverse, grammatically correct, and contextually appropriate questions.
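Precision/recall/F1 scores of this kind can be reproduced from binary accept/reject judgments with scikit-learn (already a requirement). The snippet below is only a sketch of that computation; the tiny label arrays are hypothetical and do not come from the actual evaluation.
```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical judgments: 1 = acceptable generated question, 0 = not acceptable.
gold_labels = [1, 1, 0, 1, 0, 1, 1, 0]   # reference (human) judgments
predictions = [1, 1, 0, 1, 1, 1, 0, 0]   # automatic judgments for the same samples

precision, recall, f1, _ = precision_recall_fscore_support(
    gold_labels, predictions, average="binary"
)
print(f"Precision={precision:.2f}  Recall={recall:.2f}  F1={f1:.2f}")
```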
---
## Creation Process
The dataset was built using a **4-stage automated pipeline**:
1. Select relevant QA websites from trusted sources.
2. Automatic crawling to collect raw QA pages.
3. Semantic tag extraction to obtain clean Question–Context–Answer triples.
4. AI-assisted filtering to remove noisy or inconsistent samples.
---
## 📝 Quality Evaluation
A **VietAI/vit5-base** model fine-tuned on **HVU_QA** achieved:
* **BLEU score**: 90.61
* **Semantic similarity**: 97.0% of generated questions reach cosine similarity ≥ 0.8 with the reference
* **Human evaluation**:
* Grammar: **4.58 / 5**
* Usefulness: **4.29 / 5**
➡️ These results confirm that **HVU_QA is a high-quality resource** for developing robust FAQ-style question generation models.
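For reproducibility, corpus-level BLEU can be computed with `sacrebleu` (listed among the optional dependencies). This is only a minimal sketch with made-up Vietnamese strings; the cosine-based semantic similarity additionally requires a sentence-embedding model and is not covered here.
```python
import sacrebleu

# Generated questions and their references (illustrative examples only).
hypotheses = [
    "Thức uống nào nổi tiếng ở Việt Nam?",
    "Cà phê sữa đá gồm những nguyên liệu gì?",
]
references_per_hyp = [
    ["Thức uống nào nổi tiếng ở Việt Nam?"],
    ["Cà phê sữa đá được pha từ những nguyên liệu nào?"],
]

# sacrebleu expects one reference stream per reference set, i.e. the
# per-hypothesis references transposed into column-wise lists.
ref_streams = [list(col) for col in zip(*references_per_hyp)]
bleu = sacrebleu.corpus_bleu(hypotheses, ref_streams)
print(f"BLEU = {bleu.score:.2f}")
```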
---
## 📂 Project Structure
```
HVU_QA/
├── t5-viet-qg-finetuned/
├── fine_tune_qg.py
├── generate_question.py
├── 30ktrain.json
└── README.md
```
> All data files are UTF-8 encoded and ready for use in NLP pipelines.
---
## 🛠️ Requirements
* Python 3.8+
* PyTorch >= 1.9
* Transformers >= 4.30
* scikit-learn
* Fine-tuned model checkpoint (download from [DANGDOCAO/GeneratingQuestions](https://huggingface.co/datasets/DANGDOCAO/GeneratingQuestions/tree/main); a scripted download is sketched below)
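`huggingface_hub` (installed automatically with `transformers`) can fetch the checkpoint without a browser. The sketch below assumes the fine-tuned weights live in the `DANGDOCAO/GeneratingQuestions` dataset repository linked above; the local folder name is just an example.
```python
from huggingface_hub import snapshot_download

# Download the whole repository into a local folder (folder name is an example).
local_dir = snapshot_download(
    repo_id="DANGDOCAO/GeneratingQuestions",
    repo_type="dataset",
    local_dir="t5-viet-qg-finetuned",
)
print("Checkpoint files downloaded to:", local_dir)
```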
---
## ⚙️ Setup
### 🛠️ Step 1: Download and Extract
1. Download `HVU_QA.zip`
2. Extract into a folder, e.g.:
```
D:\your\HVU_QA
```
### 🛠️ Step 2: Add to Environment Path (if needed)
1. Open **System Properties → Environment Variables**
2. Select `Path` → **Edit** → **New**
3. Add the path, e.g.:
```
D:\your\HVU_QA
```
### 🛠️ Step 3: Open in Visual Studio Code
```
File > Open Folder > D:\your\HVU_QA
```
### 🛠️ Step 4: Install Required Libraries
Open **Terminal** and run:
#### Windows (PowerShell)
**Required only**
```powershell
python -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors
```
**Required + Optional**
```powershell
python -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors accelerate tensorboard evaluate sacrebleu rouge-score nltk
```
#### Linux / macOS (bash/zsh)
**Required only**
```bash
python3 -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors
```
**Required + Optional**
```bash
python3 -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors accelerate tensorboard evaluate sacrebleu rouge-score nltk
```
✅ Verify installation (this one-liner imports both the required and the optional packages, so install the full set before running it):
* Windows (PowerShell)
```powershell
python -c "import torch, transformers, datasets, sklearn, sentencepiece, safetensors, accelerate, tensorboard, evaluate, sacrebleu, rouge_score, nltk; print('✅ All dependencies installed correctly!')"
```
* Linux/macOS
```bash
python3 -c "import torch, transformers, datasets, sklearn, sentencepiece, safetensors, accelerate, tensorboard, evaluate, sacrebleu, rouge_score, nltk; print('✅ All dependencies installed correctly!')"
```
---
## Usage
* Train and evaluate a question generation model.
* Develop Vietnamese NLP tools.
* Conduct linguistic research.
### Training (Fine-tuning)
When you run `fine_tune_qg.py`, the script will:
1. Load the dataset from **`30ktrain.json`**
2. Fine-tune the `VietAI/vit5-base` model
3. Save the trained model into a new folder named **`t5-viet-qg-finetuned/`**
Run:
```bash
python fine_tune_qg.py
```
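`fine_tune_qg.py` is the reference implementation; the sketch below only illustrates the general shape of such a script with the Transformers `Seq2SeqTrainer`. The prompt template, the exact structure assumed for `30ktrain.json`, and all hyperparameters are assumptions and may differ from the shipped code.
```python
# Minimal fine-tuning sketch (NOT the shipped fine_tune_qg.py).
import json

from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL_NAME = "VietAI/vit5-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Flatten the SQuAD-style file into (context, question) pairs.
# The nested "data" -> "paragraphs" -> "qas" layout is assumed from the SQuAD v2.0 spec.
with open("30ktrain.json", encoding="utf-8") as f:
    squad = json.load(f)

pairs = [
    {"context": paragraph["context"], "question": qa["question"]}
    for article in squad["data"]
    for paragraph in article["paragraphs"]
    for qa in paragraph["qas"]
]
dataset = Dataset.from_list(pairs)

def preprocess(batch):
    # The "generate question:" prompt prefix is an assumption; adapt to the real script.
    inputs = ["generate question: " + c for c in batch["context"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["question"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-viet-qg-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=3e-4,
        logging_steps=100,
        save_total_limit=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
trainer.save_model("t5-viet-qg-finetuned")
tokenizer.save_pretrained("t5-viet-qg-finetuned")
```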
### Generating Questions
```bash
python generate_question.py
```
**Example:**
```
Input passage:
Iced milk coffee (Cà phê sữa đá) is a famous drink in Vietnam.
Number of questions: 5
```
✅ Output:
1. What type of coffee is famous in Vietnam?
2. Why is iced milk coffee popular?
3. What ingredients are included in iced milk coffee?
4. Where does iced milk coffee originate from?
5. How is Vietnamese iced milk coffee prepared?
---
## ⚙️ Generation Settings
In `generate_question.py`, you can adjust:
* `top_k`, `top_p`, `temperature`, `no_repeat_ngram_size`, `repetition_penalty`
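A minimal sketch of how these parameters plug into `model.generate()`; the prompt prefix and the concrete values are assumptions, not the exact ones used by `generate_question.py`.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_DIR = "t5-viet-qg-finetuned"  # path to the fine-tuned checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_DIR).to(device)

context = "Cà phê sữa đá là một thức uống nổi tiếng của Việt Nam."
inputs = tokenizer("generate question: " + context, return_tensors="pt").to(device)

# Sampling-based decoding: larger top_p / temperature -> more diverse questions.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.9,
    no_repeat_ngram_size=3,
    repetition_penalty=1.2,
    num_return_sequences=5,
)
for i, question in enumerate(tokenizer.batch_decode(outputs, skip_special_tokens=True), 1):
    print(f"{i}. {question}")
```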
---
## 🤝 Contribution
We welcome contributions:
* Open issues
* Submit pull requests
* Suggest improvements or add datasets
---
## 📄 Citation
If you use this repository or datasets in research, please cite:
**Ha Nguyen-Tien, Phuc Le-Hong, Dang Do-Cao, Cuong Nguyen-Hung, Chung Mai-Van. 2025. A Method to Build QA Corpora for Low-Resource Languages. Proceedings of KSE 2025. ACM TALLIP.**
### 📚 BibTeX
```bibtex
@inproceedings{nguyen2025hvuqa,
title={A Method to Build QA Corpora for Low-Resource Languages},
author={Ha Nguyen-Tien and Phuc Le-Hong and Dang Do-Cao and Cuong Nguyen-Hung and Chung Mai-Van},
booktitle={Proceedings of KSE 2025},
year={2025}
}
```
---
## 📬 Contact
* **Ha Nguyen-Tien** (Corresponding author)
📧 [nguyentienha@hvu.edu.vn](mailto:nguyentienha@hvu.edu.vn)
* **Phuc Le-Hong**
📧 [Lehongphuc20021408@gmail.com](mailto:Lehongphuc20021408@gmail.com)
* **Dang Do-Cao**
📧 [docaodang532001@gmail.com](mailto:docaodang532001@gmail.com)
📍 Faculty of Engineering and Technology, Hung Vuong University, Phu Tho, Vietnam
🌐 [https://hvu.edu.vn](https://hvu.edu.vn)
---
*This repository is part of our ongoing effort to support Vietnamese NLP and make language technology more accessible for low-resource and underrepresented languages.*