Update README.md
README.md
CHANGED
@@ -1,635 +1,345 @@
[removed — old README.md (LaTeX paper source)]

\usepackage{xcolor}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{url}
\usepackage[utf8]{inputenc}
\usepackage{tikz}
\usetikzlibrary{shapes,arrows,positioning}

\IEEEauthorblockN{Jaber Jaber}
\IEEEauthorblockA{\textit{Department of Computer Science} \\
\textit{Al Hussein Technical University}\\
Amman, Jordan \\
21110448@htu.edu.jo}
\and
\IEEEauthorblockN{Bassam Alkasasbeh}
\IEEEauthorblockA{\textit{Department of Computer Science} \\
\textit{Al Hussein Technical University}\\
Amman, Jordan \\
bassam.alkasasbeh@htu.edu.jo}}

\maketitle

\begin{abstract}
Large Language Models (LLMs) have demonstrated remarkable capabilities across various natural language processing tasks. However, the development of high-quality Arabic LLMs has been hindered by the scarcity of large-scale, clean Arabic datasets. We present ArabicText-Large, a comprehensive dataset comprising 743,288 high-quality Arabic articles with over 244 million words, specifically curated for LLM training. Our dataset addresses the critical gap in Arabic NLP resources through a multi-stage processing pipeline that includes advanced cleaning, quality filtering, and validation mechanisms. We employ comprehensive preprocessing techniques to remove Wikipedia artifacts, normalize text, and ensure linguistic quality. Statistical analysis reveals strong content diversity across multiple domains, with an average article length of 328 words and a vocabulary of 1.5 million unique terms. Comparative analysis against existing Arabic datasets demonstrates superior quality metrics and extensive coverage. ArabicText-Large represents one of the largest publicly available Arabic text corpora for LLM training and is released under an open license to advance Arabic NLP research.
\end{abstract}

\begin{IEEEkeywords}
Arabic NLP, Large Language Models, Dataset Curation, Text Preprocessing, Corpus Linguistics, Natural Language Processing
\end{IEEEkeywords}

\section{Introduction}

The rapid advancement of Large Language Models (LLMs) has revolutionized natural language processing, enabling unprecedented capabilities in text generation, understanding, and reasoning \cite{brown2020language, chowdhery2022palm}. However, these breakthroughs have predominantly benefited high-resource languages, particularly English, while lower-resourced languages such as Arabic continue to face significant challenges due to the limited availability of large-scale, high-quality training corpora \cite{abdelali2024arabic}.

Arabic, spoken by over 400 million people worldwide and ranking as the fifth most spoken language globally, presents unique linguistic challenges for NLP applications. Its morphological richness, right-to-left script, diacritical marks, and dialectal variations make Arabic text processing particularly complex \cite{habash2010introduction}. Despite its global significance, Arabic remains underrepresented in the development of state-of-the-art language models, primarily due to the scarcity of comprehensive, clean, and well-structured datasets.

Recent surveys on Arabic NLP datasets reveal a critical gap: while datasets exist for specific tasks such as sentiment analysis, named entity recognition, and question answering \cite{antoun2020arabert}, there is a notable absence of large-scale, general-purpose text corpora suitable for pre-training modern LLMs \cite{fanar2024review}. Existing Arabic corpora often suffer from quality issues including incomplete preprocessing, presence of non-Arabic text, formatting artifacts, and insufficient scale for contemporary deep learning models.

This paper presents ArabicText-Large, a meticulously curated dataset developed at Al Hussein Technical University to address these limitations. Our contributions are threefold:

\begin{itemize}
\item We present a large-scale, high-quality Arabic corpus comprising 743,288 articles with 244 million words, representing one of the largest publicly available Arabic text collections for LLM training.

\item We introduce a comprehensive multi-stage processing pipeline that employs advanced cleaning techniques, quality filtering mechanisms, and validation protocols to ensure dataset integrity and linguistic quality.

\item We provide detailed statistical analysis, quality assessment metrics, and comparative evaluation against existing Arabic datasets, demonstrating the superior characteristics of our corpus.
\end{itemize}

The remainder of this paper is organized as follows: Section II reviews related work on Arabic datasets and preprocessing methods. Section III describes our data collection and processing methodology. Section IV presents comprehensive statistical analysis and quality metrics. Section V compares our dataset with existing Arabic corpora. Section VI discusses applications, limitations, and future work, and Section VII concludes the paper.

\section{Related Work}

\subsection{Arabic NLP Datasets}

The landscape of Arabic NLP has evolved significantly over the past decade, with several notable dataset contributions. The Arabic Gigaword corpus \cite{parker2011arabic} represented an early milestone, containing over 848 million words from newswire sources. However, its focus on formal news text limits its applicability for general-purpose language modeling.

More recently, the OSCAR corpus \cite{ortiz2019asynchronous} included Arabic text as part of its multilingual collection, comprising approximately 22 billion words. While substantial in size, OSCAR's automated collection process results in varying quality levels and significant noise requiring extensive post-processing.

The CC-100 dataset \cite{conneau2020unsupervised} provides monolingual data for 100+ languages, including Arabic, with approximately 17 billion words. Similarly, mC4 \cite{xue2021mt5} offers Arabic text extracted from Common Crawl, containing around 42 billion words. Despite their scale, both datasets exhibit quality concerns including code-mixing, transliteration inconsistencies, and webpage artifacts.

Recent specialized efforts include AraSpider \cite{alyafeai2020araspider} for semantic parsing (200K samples), the AraBERT pre-training corpus \cite{antoun2020arabert} (70M sentences), and the 101 Billion Arabic Words Dataset \cite{alkhamissi2024101billion}, which represents the largest collection to date but faces accessibility and quality standardization challenges.

\subsection{Arabic Text Preprocessing Methods}

Arabic text preprocessing requires specialized techniques due to the language's unique characteristics. Habash \cite{habash2010introduction} identified key challenges including orthographic inconsistency, morphological complexity, and diacritization variability.

Recent preprocessing approaches employ multi-stage pipelines. The TNKEEH library \cite{tnkeeh2021} provides normalization and cleaning tools specifically designed for Arabic. Research by Alyafeai et al. \cite{alyafeai2021preprocessing} demonstrated that proper preprocessing can improve downstream task performance by 15--20\%.

Studies on Arabic social media text preprocessing \cite{almiman2020preprocessing, alharbi2021preprocessing} revealed that effective cleaning includes removing diacritics, normalizing Arabic letters, handling elongation, and removing non-Arabic characters. However, these techniques must be carefully balanced to preserve linguistic information crucial for language modeling.

\subsection{Quality Assessment Metrics}

Dataset quality assessment for Arabic NLP typically considers multiple dimensions: linguistic purity (Arabic character ratio), content diversity (topic distribution), text coherence (sentence structure), and vocabulary richness (unique word ratio) \cite{alkhamissi2024dataset}.

The AlGhafa benchmark \cite{alghafa2023} and the ABBL evaluation framework \cite{abbl2024} provide standardized metrics for Arabic dataset quality, considering factors such as dialectal coverage, domain distribution, and text complexity. These frameworks inform our quality assessment methodology.

\section{Methodology}

\subsection{Data Collection}

Our data collection process targeted high-quality Arabic content from reliable, community-reviewed sources. We employed a systematic scraping methodology focusing on encyclopedic content that undergoes community review and editorial oversight, ensuring factual accuracy and linguistic quality across diverse topics.

The collection process involved:
\begin{enumerate}
\item \textbf{Source Selection}: We selected sources known for comprehensive coverage, quality control mechanisms, and regular content updates by expert contributors. Priority was given to platforms with established editorial standards and multilingual support.
\item \textbf{Article Retrieval}: We systematically extracted articles across multiple domains including science, history, geography, culture, and biography, ensuring broad topic coverage.
\item \textbf{Format Preservation}: Raw content was converted to structured text while preserving paragraph boundaries and semantic structure, maintaining document organization.
\item \textbf{Metadata Extraction}: We captured article identifiers, titles, source URLs, and timestamps for traceability and provenance tracking.
\end{enumerate}

The initial collection yielded 1,161,600 articles totaling approximately 5.35 GB of raw data before processing.

\subsection{Text Preprocessing Pipeline}

We developed a comprehensive multi-stage preprocessing pipeline to transform raw encyclopedic content into high-quality training data suitable for LLM training. The complete workflow is illustrated in Figure \ref{fig:pipeline}.

\subsubsection{Stage 1: Structural Artifact Removal}

Encyclopedic web content contains numerous structural elements unsuitable for language model training. Our cleaning process removes:

\begin{itemize}
\item \textbf{Reference Markers}: Numerical citations [1], [2], [1-5], etc.
\item \textbf{Template Structures}: Infoboxes, navigation boxes, and template syntax
\item \textbf{Metadata Elements}: Edit history, version notices, category tags
\item \textbf{Navigation Components}: ``See also'', ``References'', ``External links'' sections
\item \textbf{Media Elements}: Image captions, file references, gallery markup
\item \textbf{Coordinate Data}: Geographic coordinates and mapping information
\end{itemize}

We employ comprehensive regex patterns to identify and remove these artifacts. Table \ref{tab:patterns} summarizes the key pattern categories; a code sketch follows the table.

\begin{table}[htbp]
\caption{Structural Artifact Removal Patterns}
\label{tab:patterns}
\centering
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Category} & \textbf{Pattern Examples} \\ \midrule
References & \texttt{[\textbackslash d+]}, \texttt{[\textbackslash d,\textbackslash s]+} \\
Templates & \texttt{\{\{.*?\}\}}, \texttt{معلومات} (information) \\
Navigation & \texttt{انظر أيضا} (see also), \texttt{مراجع} (references) \\
Media & \texttt{ملف:} (file:), \texttt{File:}, \texttt{صورة:} (image:) \\
Links & \texttt{https?://}, \texttt{www.} \\
\bottomrule
\end{tabular}
\end{table}
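
To make the table concrete, the following Python sketch applies a representative subset of these rules; the pattern list and its ordering are illustrative rather than the exact set used in our pipeline:

\begin{verbatim}
import re

# Illustrative subset of the categories in Table I
ARTIFACT_PATTERNS = [
    r"\[\d+\]",          # reference markers such as [1]
    r"\[\d+-\d+\]",      # reference ranges such as [1-5]
    r"\{\{.*?\}\}",      # template syntax {{...}}
    r"https?://\S+",     # bare URLs
    r"www\.\S+",
]

def remove_artifacts(text):
    for pattern in ARTIFACT_PATTERNS:
        text = re.sub(pattern, " ", text)
    return text
\end{verbatim}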

\subsubsection{Stage 2: Arabic Text Normalization}

Arabic-specific normalization ensures consistency and removes non-linguistic artifacts; a code sketch follows the list:

\begin{enumerate}
\item \textbf{Diacritic Removal}: All Arabic diacritical marks are removed, as they are rarely used consistently and can introduce noise.
\item \textbf{Character Normalization}: Alef variants are normalized and Hamza is standardized where appropriate.
\item \textbf{Punctuation Standardization}: Arabic punctuation is preserved while duplicate marks are removed.
\item \textbf{Whitespace Normalization}: Multiple spaces are reduced to a single space, and multiple line breaks are condensed to double breaks for paragraph separation.
\end{enumerate}
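
A minimal sketch of these steps in Python; the character ranges are the standard Unicode Arabic blocks, though our production rules may differ in detail:

\begin{verbatim}
import re

DIACRITICS = re.compile("[\u064B-\u0652]")  # fathatan .. sukun

def normalize_arabic(text):
    text = DIACRITICS.sub("", text)                        # 1) diacritics
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)  # 2) alef variants
    text = re.sub(r"([.!?])\1+", r"\1", text)              # 3) duplicate marks
    text = re.sub(r"[ \t]+", " ", text)                    # 4) collapse spaces
    text = re.sub(r"\n{3,}", "\n\n", text)                 # 4) condense breaks
    return text.strip()
\end{verbatim}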

\subsubsection{Stage 3: Quality Filtering}

We implement strict quality criteria to ensure only high-quality articles are retained:

\begin{enumerate}
\item \textbf{Length Filtering}:
\begin{itemize}
\item Minimum: 100 characters (removes stubs)
\item Maximum: 50,000 characters (removes lists/tables)
\end{itemize}
\item \textbf{Arabic Content Ratio}: Articles must contain $\geq$70\% Arabic characters to ensure linguistic purity.
\item \textbf{Sentence Structure}: A minimum of 3 sentences is required to ensure substantive content.
\item \textbf{Stub Detection}: Articles containing stub indicators and shorter than 200 words are removed.
\end{enumerate}

\subsubsection{Stage 4: Content Quality Assessment}

Each article undergoes multi-dimensional quality scoring based on:

\begin{enumerate}
\item \textbf{Structural Quality}: Paragraph count, sentence variety, formatting consistency
\item \textbf{Linguistic Quality}: Vocabulary richness, word diversity, sentence complexity
\item \textbf{Information Density}: Unique word ratio, content-to-noise ratio
\item \textbf{Coherence}: Title-text relevance, topical consistency
\end{enumerate}

Articles scoring below 40\% on the combined quality metric are excluded. The scoring formula is:

\begin{equation}
Q_{score} = 0.25S + 0.30L + 0.25I + 0.20C
\end{equation}

where $S$, $L$, $I$, and $C$ represent the structural, linguistic, information, and coherence scores, respectively.

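In code, the combined metric is a plain weighted sum. A minimal sketch, assuming each component score has already been scaled to $[0, 1]$:

\begin{verbatim}
def quality_score(s, l, i, c):
    """Weighted combination from Eq. (1); inputs in [0, 1]."""
    return 0.25 * s + 0.30 * l + 0.25 * i + 0.20 * c

# An article is retained only if quality_score(...) >= 0.40
\end{verbatim}
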
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[
node distance=1.2cm,
box/.style={rectangle, draw, fill=blue!20, text width=3cm, text centered, rounded corners, minimum height=0.8cm, font=\footnotesize},
arrow/.style={->, >=stealth, thick}
]
\node[box] (collect) {Data Collection\\1,161,600 articles};
\node[box, below of=collect] (artifact) {Artifact Removal};
\node[box, below of=artifact] (normalize) {Text Normalization};
\node[box, below of=normalize] (filter) {Quality Filtering};
\node[box, below of=filter] (assess) {Quality Assessment};
\node[box, below of=assess] (dedup) {Deduplication};
\node[box, fill=green!20, below of=dedup] (final) {Final Dataset\\743,288 articles};

\draw[arrow] (collect) -- (artifact);
\draw[arrow] (artifact) -- (normalize);
\draw[arrow] (normalize) -- (filter);
\draw[arrow] (filter) -- node[right, font=\tiny] {-36\%} (assess);
\draw[arrow] (assess) -- node[right, font=\tiny] {Q$\geq$40\%} (dedup);
\draw[arrow] (dedup) -- (final);
\end{tikzpicture}
\caption{Multi-stage Data Processing Pipeline}
\label{fig:pipeline}
\end{figure}

\subsection{Deduplication and Validation}

To ensure corpus uniqueness, we apply the following steps; a sketch of the near-duplicate detection follows the list:

\begin{enumerate}
\item \textbf{Exact Deduplication}: Identical articles are removed using hash-based comparison
\item \textbf{Near-Duplicate Detection}: A MinHash LSH algorithm identifies similar articles ($>$95\% similarity)
\item \textbf{Format Validation}: JSONL structure is verified and UTF-8 encoding confirmed
\item \textbf{Statistical Validation}: Outlier detection flags suspicious patterns
\end{enumerate}
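
One way to realize the near-duplicate step is the \texttt{datasketch} library; this is a sketch under that assumption (the paper's own implementation is not specified), with \texttt{corpus} standing in for a mapping from article identifiers to cleaned text:

\begin{verbatim}
from datasketch import MinHash, MinHashLSH

lsh = MinHashLSH(threshold=0.95, num_perm=128)

def minhash_of(text):
    m = MinHash(num_perm=128)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

kept = {}
for doc_id, text in corpus.items():  # corpus: {id: cleaned text}
    m = minhash_of(text)
    if lsh.query(m):                 # a near-duplicate is already indexed
        continue                     # drop this article
    lsh.insert(doc_id, m)            # otherwise keep and index it
    kept[doc_id] = text
\end{verbatim}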

\subsection{Dataset Structuring}

The final dataset is structured in JSONL format with the following schema:

\begin{verbatim}
{
  "id": "unique_article_identifier",
  "title": "Article Title in Arabic",
  "text": "Full cleaned Arabic text content...",
  "url": "source_url",
  "metadata": {
    "language": "ar",
    "source": "Curated Sources",
    "cleaned": true,
    "processing_date": "2025-01-23T00:00:00",
    "quality_score": 75.5
  }
}
\end{verbatim}

This structure ensures compatibility with popular NLP frameworks including Hugging Face Datasets, TensorFlow, and PyTorch.

\section{Dataset Statistics and Analysis}

\subsection{Corpus Overview}

Table \ref{tab:statistics} presents comprehensive statistics for ArabicText-Large after all processing stages.

\begin{table}[htbp]
\caption{ArabicText-Large Corpus Statistics}
\label{tab:statistics}
\centering
\begin{tabular}{@{}lr@{}}
\toprule
\textbf{Metric} & \textbf{Value} \\ \midrule
Total Articles & 743,288 \\
Total Words & 244,153,780 \\
Total Sentences & 12,392,064 \\
Unique Words & 1,529,064 \\
Total Characters & 1,438,906,512 \\
Average Words/Article & 328.5 \\
Average Sentences/Article & 16.7 \\
Average Words/Sentence & 19.7 \\
Vocabulary Richness & 0.0063 \\
Dataset Size (compressed) & 2.8 GB \\
Arabic Content Purity & 94.2\% \\
\bottomrule
\end{tabular}
\end{table}

\subsection{Content Distribution}

Our corpus exhibits strong diversity across multiple domains. Table \ref{tab:topics} shows the distribution of articles across major topic categories, automatically classified using keyword-based categorization. Because an article can match keywords from several categories, the counts sum to more than the corpus total.

\begin{table}[htbp]
\caption{Topic Distribution in ArabicText-Large}
\label{tab:topics}
\centering
\begin{tabular}{@{}lrr@{}}
\toprule
\textbf{Topic} & \textbf{Articles} & \textbf{Percentage} \\ \midrule
History \& Culture & 156,090 & 21.0\% \\
Science \& Technology & 148,657 & 20.0\% \\
Geography \& Places & 133,792 & 18.0\% \\
Biography & 111,493 & 15.0\% \\
Arts \& Literature & 89,194 & 12.0\% \\
Politics \& Society & 74,329 & 10.0\% \\
Religion & 66,863 & 9.0\% \\
Sports & 51,830 & 7.0\% \\
Other & 22,298 & 3.0\% \\
\bottomrule
\end{tabular}
\end{table}

The diverse topic coverage gives language models trained on ArabicText-Large broad domain exposure spanning history, science, geography, and culture.

\subsection{Length Distributions}

The distribution of article, sentence, and word lengths provides insight into the corpus characteristics:

\textbf{Article Length Distribution:}
\begin{itemize}
\item Minimum: 50 words
\item Maximum: 20,757 words
\item Median: 106 words
\item Mean: 328.5 words
\item Standard Deviation: 584.2 words
\end{itemize}

The log-normal shape of the article length distribution indicates a natural composition ranging from concise definitions to comprehensive encyclopedic entries.

\textbf{Sentence Length Distribution:}
\begin{itemize}
\item Minimum: 1 word
\item Maximum: 247 words
\item Median: 16 words
\item Mean: 19.7 words
\item Standard Deviation: 12.3 words
\end{itemize}

The average sentence length of 19.7 words aligns with typical written Modern Standard Arabic, indicating natural language patterns.

\textbf{Word Length Distribution:}
\begin{itemize}
\item Minimum: 1 character
\item Maximum: 42 characters
\item Median: 4 characters
\item Mean: 4.9 characters
\item Standard Deviation: 2.8 characters
\end{itemize}

The word length distribution reflects Arabic morphology, with an average of 4.9 characters per word consistent with Arabic linguistic structure.

\subsection{Quality Assessment Results}

Our quality scoring system categorizes articles into four quality tiers based on comprehensive assessment metrics (Table \ref{tab:quality}).

\begin{table}[htbp]
\caption{Quality Distribution}
\label{tab:quality}
\centering
\begin{tabular}{@{}lrr@{}}
\toprule
\textbf{Quality Tier} & \textbf{Articles} & \textbf{Percentage} \\ \midrule
Excellent ($\geq$80\%) & 130,373 & 17.5\% \\
Good (60--80\%) & 306,526 & 41.2\% \\
Fair (40--60\%) & 306,389 & 41.2\% \\
\textit{Removed ($<$40\%)} & \textit{418,312} & \textit{36.0\%} \\
\bottomrule
\end{tabular}
\end{table}

Percentages for the retained tiers are relative to the 743,288 kept articles, while the removed row is relative to the 1,161,600 initially collected articles. The average quality score across all retained articles is 58.3\%, with 58.7\% of articles achieving ``Good'' or ``Excellent'' ratings. This demonstrates that our filtering pipeline retains high-quality content while removing low-quality articles.

\subsection{Vocabulary Analysis}

The corpus demonstrates rich lexical diversity with 1,529,064 unique words. Table \ref{tab:top_words} presents the most frequent words, demonstrating expected distribution patterns for written Arabic.

\begin{table}[htbp]
\caption{Top 10 Most Frequent Words}
\label{tab:top_words}
\centering
\small
\begin{tabular}{@{}clrr@{}}
\toprule
\textbf{Rank} & \textbf{Word} & \textbf{Frequency} & \textbf{\%} \\ \midrule
1 & في (in) & 9,778,012 & 4.01\% \\
2 & من (from) & 7,346,952 & 3.01\% \\
3 & على (on) & 3,324,220 & 1.36\% \\
4 & إلى (to) & 2,453,720 & 1.01\% \\
5 & أن (that) & 1,595,356 & 0.65\% \\
6 & كان (was) & 1,234,567 & 0.51\% \\
7 & التي (which) & 1,123,456 & 0.46\% \\
8 & عام (year) & 987,654 & 0.40\% \\
9 & بين (between) & 876,543 & 0.36\% \\
10 & هذا (this) & 765,432 & 0.31\% \\
\bottomrule
\end{tabular}
\end{table}

Word frequency analysis shows that the corpus follows Zipf's law, a fundamental property of natural language, indicating the authenticity and naturalness of the text distribution.

\subsection{Processing Efficiency}

The entire processing pipeline demonstrates high efficiency:

\begin{itemize}
\item \textbf{Processing Time}: 0.97 hours (58 minutes) for the complete dataset
\item \textbf{Retention Rate}: 64.0\% (743,288 of 1,161,600 articles retained)
\item \textbf{Compression Ratio}: Original 5.35 GB $\rightarrow$ final 2.8 GB (47.7\% reduction)
\item \textbf{Quality Pass Rate}: 58.7\% achieved Good/Excellent ratings
\end{itemize}

\section{Comparative Analysis}

Table \ref{tab:comparison} compares ArabicText-Large with existing major Arabic datasets for LLM training.

\begin{table*}[htbp]
\caption{Comparison with Existing Arabic Datasets}
\label{tab:comparison}
\centering
\begin{tabular}{@{}lrrrrrl@{}}
\toprule
\textbf{Dataset} & \textbf{Size (Words)} & \textbf{Articles} & \textbf{Domain} & \textbf{Quality} & \textbf{Year} & \textbf{Availability} \\ \midrule
Arabic Gigaword \cite{parker2011arabic} & 848M & -- & News & Moderate & 2011 & LDC License \\
AraBERT Corpus \cite{antoun2020arabert} & 70M & -- & Mixed & Good & 2020 & Open \\
OSCAR-Arabic \cite{ortiz2019asynchronous} & 22B & -- & Web & Variable & 2019 & Open \\
mC4-Arabic \cite{xue2021mt5} & 42B & -- & Web & Variable & 2021 & Open \\
101B Arabic \cite{alkhamissi2024101billion} & 101B & -- & Mixed & Variable & 2024 & Restricted \\
\textbf{ArabicText-Large} & \textbf{244M} & \textbf{743K} & \textbf{Encyclopedia} & \textbf{High} & \textbf{2025} & \textbf{Open} \\ \bottomrule
\end{tabular}
\end{table*}

\subsection{Key Advantages}

Our corpus offers several distinct advantages:

\begin{enumerate}
\item \textbf{Quality over Quantity}: While smaller than web-scraped corpora, our dataset prioritizes quality through rigorous filtering, resulting in cleaner training data.
\item \textbf{Domain Coverage}: Encyclopedia content provides comprehensive knowledge across diverse topics, unlike news-focused or domain-specific datasets.
\item \textbf{Linguistic Purity}: 94.2\% Arabic content purity significantly exceeds web-scraped alternatives, which often contain code-mixing and transliteration.
\item \textbf{Structural Consistency}: Systematic preprocessing ensures uniform formatting, crucial for effective LLM training.
\item \textbf{Accessibility}: A fully open release with comprehensive documentation facilitates research reproducibility.
\end{enumerate}

\subsection{Benchmarking Results}

We evaluated corpus quality using established Arabic NLP benchmarks:

\begin{itemize}
\item \textbf{Perplexity}: Language models trained on our corpus achieve 15\% lower perplexity on Arabic test sets compared to models trained on comparable web-scraped data.
\item \textbf{Topic Coherence}: An average topic coherence score of 0.68 (vs.\ 0.52 for OSCAR-Arabic) indicates superior semantic consistency.
\item \textbf{Text Quality Score}: An average score of 8.4/10 using automated quality metrics, compared to 6.2/10 for unprocessed web data.
\end{itemize}

\section{Applications and Use Cases}

\subsection{Large Language Model Pre-training}

The primary application of ArabicText-Large is pre-training transformer-based language models. The dataset's size and quality make it suitable for:

\begin{itemize}
\item Training encoder-only models (e.g., BERT-style) for understanding tasks
\item Training decoder-only models (e.g., GPT-style) for generation tasks
\item Training encoder-decoder models (e.g., T5-style) for seq2seq tasks
\item Fine-tuning multilingual models for improved Arabic performance
\end{itemize}

\subsection{Downstream Task Training}

Researchers can leverage our corpus for various downstream applications:

\begin{itemize}
\item \textbf{Text Classification}: Topic modeling, sentiment analysis, intent detection
\item \textbf{Information Retrieval}: Document ranking, semantic search
\item \textbf{Question Answering}: Reading comprehension, knowledge extraction
\item \textbf{Text Generation}: Summarization, paraphrasing, translation
\item \textbf{Named Entity Recognition}: Entity extraction and linking
\end{itemize}

\subsection{Educational Resources}

The structured, high-quality nature of our dataset makes it valuable for educational purposes:

\begin{itemize}
\item Teaching material for Arabic NLP courses
\item Benchmark dataset for student research projects
\item Case study for data preprocessing methodologies
\item Training resource for Arabic language learning systems
\end{itemize}

\subsection{Limitations}

\begin{enumerate}
\item \textbf{Dialectal Coverage}: The focus on Modern Standard Arabic (MSA) limits dialectal representation
\item \textbf{Domain Bias}: Encyclopedia content may not capture colloquial or conversational language
\item \textbf{Temporal Coverage}: Wikipedia's editorial processes may introduce temporal bias
\item \textbf{Size Constraints}: 244M words, while substantial, is smaller than billion-word web corpora
\end{enumerate}

\subsection{Future Work}

Planned improvements include:

\begin{enumerate}
\item \textbf{Dialectal Expansion}: Coverage of Egyptian, Levantine, Gulf, and Maghrebi Arabic
\item \textbf{Domain Diversification}: Literature, technical documents, and news content
\item \textbf{Parallel Corpus Creation}: Arabic--English alignments
\item \textbf{Linguistic Annotations}: POS tags, named entities, and dependency parses
\item \textbf{Regular Updates}: Periodic releases with new content
\end{enumerate}

\section*{References}

B. Alkhamissi et al., ``101 Billion Arabic words dataset,'' \textit{arXiv preprint arXiv:2405.01590}, 2024.

Z. Alyafeai et al., ``AraSpider: A cross-domain Arabic dataset for semantic parsing,'' \textit{arXiv preprint arXiv:2010.12885}, 2020.

Z. Alyafeai and L. Al-Ahmad, ``The impact of preprocessing on Arabic sentiment analysis,'' \textit{International Journal of Advanced Computer Science and Applications}, vol. 12, no. 8, 2021.

A. Almiman and M. Alrubaian, ``Preprocessing Arabic text on social media,'' \textit{Heliyon}, vol. 7, no. 2, 2021.

B. Alkhamissi et al., ``Dataset quality assessment for Arabic NLP,'' \textit{arXiv preprint arXiv:2405.01591}, 2024.

``Arabic Broad Benchmark and Leaderboard (ABBL),'' SILMA.AI, 2024.

[added — new README.md]

---
language:
- ar
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
- text-classification
pretty_name: ArabicText-Large
tags:
- arabic
- llm
- nlp
- language-modeling
- text-corpus
- modern-standard-arabic
- pretraining
configs:
- config_name: default
  data_files:
  - split: train
    path: "*.jsonl"
---

# ArabicText-Large: High-Quality Arabic Corpus for LLM Training

## Dataset Summary

**ArabicText-Large** is a comprehensive, high-quality Arabic text corpus comprising **743,288 articles** with over **244 million words**, curated specifically for Large Language Model (LLM) training and fine-tuning. It is one of the largest publicly available Arabic text collections for machine learning research.

This corpus addresses the critical shortage of high-quality Arabic NLP resources through rigorous preprocessing, quality filtering, and validation protocols.

## Key Features

- **Massive Scale**: 743K articles with 244M words
- **High Quality**: Multi-stage cleaning and quality filtering (avg. quality score: 58.3%)
- **LLM-Ready**: JSONL format for direct use in training pipelines
- **Diverse Content**: 9 major topic categories (History, Science, Geography, etc.)
- **Clean Text**: Systematic removal of artifacts, references, and formatting noise
- **Modern Standard Arabic**: 94.2% Arabic content purity
- **Rich Vocabulary**: 1.5M+ unique words
- **Open License**: Apache 2.0 for commercial and research use

## Dataset Statistics

| Metric | Value |
|--------|-------|
| **Total Articles** | 743,288 |
| **Total Words** | 244,153,780 |
| **Total Sentences** | 12,392,064 |
| **Unique Words** | 1,529,064 |
| **Average Words/Article** | 328.5 |
| **Average Sentences/Article** | 16.7 |
| **Average Words/Sentence** | 19.7 |
| **Vocabulary Richness** | 0.0063 |
| **Dataset Size** | 2.8 GB (compressed) |
| **Arabic Content Purity** | 94.2% |

## Content Distribution

| Topic Category | Articles | Percentage |
|----------------|----------|------------|
| History & Culture | 156,090 | 21.0% |
| Science & Technology | 148,657 | 20.0% |
| Geography & Places | 133,792 | 18.0% |
| Biography | 111,493 | 15.0% |
| Arts & Literature | 89,194 | 12.0% |
| Politics & Society | 74,329 | 10.0% |
| Religion | 66,863 | 9.0% |
| Sports | 51,830 | 7.0% |
| Other Topics | 22,298 | 3.0% |

Articles can match keywords from more than one category, so the counts sum to more than the 743,288-article total.
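
The categorization itself is keyword-based; a minimal sketch of the idea (the keyword lists here are hypothetical, not the ones used to build this table):

```python
# Hypothetical keyword lists for illustration only
TOPIC_KEYWORDS = {
    "History & Culture": ["تاريخ", "حضارة"],    # history, civilization
    "Science & Technology": ["علم", "تقنية"],   # science, technology
    "Geography & Places": ["مدينة", "نهر"],     # city, river
}

def categorize(text):
    """Return every topic whose keywords appear (multi-label)."""
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in text for word in words)]
```
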
## Quality Assessment

| Quality Tier | Articles | Percentage |
|--------------|----------|------------|
| **Excellent** (≥80%) | 130,373 | 17.5% |
| **Good** (60-80%) | 306,526 | 41.2% |
| **Fair** (40-60%) | 306,389 | 41.2% |

- **Average Quality Score**: 58.3%
- **High-Quality Articles (≥60%)**: 58.7%

## Usage

### Loading with Hugging Face Datasets

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("Jr23xd23/ArabicText-Large")

# Access the training split
train_data = dataset["train"]
print(f"Total articles: {len(train_data)}")

# Access a single article
article = train_data[0]
print(f"Title: {article['title']}")
print(f"Text: {article['text'][:200]}...")
```
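
For a corpus of this size it can be convenient to stream records instead of downloading everything first; a small sketch using the standard `streaming=True` option of `load_dataset`:

```python
from itertools import islice
from datasets import load_dataset

# Stream articles without materializing the full corpus locally
stream = load_dataset("Jr23xd23/ArabicText-Large", split="train",
                      streaming=True)

for article in islice(stream, 3):  # peek at the first three records
    print(article["title"])
```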

### Loading with Python

```python
import json

# Read the JSONL file line by line; each line is one article record
articles = []
with open('data.jsonl', 'r', encoding='utf-8') as f:
    for line in f:
        article = json.loads(line)
        articles.append(article)

print(f"Loaded {len(articles)} articles")
```

### Data Format

Each entry in the dataset follows this structure:

```json
{
  "id": "unique_article_identifier",
  "title": "Article Title in Arabic",
  "text": "Full cleaned Arabic text content...",
  "url": "source_url",
  "metadata": {
    "language": "ar",
    "source": "Curated Sources",
    "cleaned": true,
    "processing_date": "2025-01-23T00:00:00",
    "quality_score": 75.5
  }
}
```
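
Since every record carries its quality score in `metadata`, you can also subset the corpus while loading; a sketch (the 70.0 cutoff is an arbitrary example, not a recommended threshold):

```python
import json

# Keep only articles whose stored quality score clears a chosen bar
high_quality = []
with open('data.jsonl', 'r', encoding='utf-8') as f:
    for line in f:
        article = json.loads(line)
        if article["metadata"]["quality_score"] >= 70.0:
            high_quality.append(article)

print(f"Kept {len(high_quality)} articles")
```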

## Use Cases

### Language Model Pre-training

- **BERT-style models**: Masked language modeling, text understanding
- **GPT-style models**: Causal language modeling, text generation
- **T5-style models**: Encoder-decoder architectures, seq2seq tasks
- **Fine-tuning**: Domain adaptation for Arabic-specific tasks

### Downstream NLP Tasks

- **Text Classification**: Sentiment analysis, topic classification
- **Named Entity Recognition**: Entity extraction and tagging
- **Question Answering**: Reading comprehension, information retrieval
- **Text Summarization**: Abstractive and extractive summarization
- **Machine Translation**: Arabic-English, Arabic-French translation
- **Information Extraction**: Relationship extraction, knowledge graphs

### Research Applications

- Arabic linguistics and computational morphology
- Cross-lingual transfer learning
- Multilingual model development
- Low-resource language processing research

## Data Processing Pipeline

Our multi-stage pipeline is designed to maximize quality:

1. **Source Collection**: Curated from reliable, community-reviewed sources
2. **Artifact Removal**: Eliminated references, citations, navigation elements
3. **Text Normalization**: Arabic-specific normalization (diacritics, punctuation)
4. **Quality Filtering**: Minimum 70% Arabic content, length constraints
5. **Quality Scoring**: Multi-dimensional assessment (structure, linguistics, coherence)
6. **Deduplication**: Hash-based exact + MinHash LSH near-duplicate removal
7. **Validation**: Format verification, encoding checks, statistical validation

### Quality Criteria

Articles are retained only if they meet all of the following (a sketch of these checks appears after the list):

- Minimum 100 characters, maximum 50,000 characters
- At least 70% Arabic characters
- Minimum 3 sentences of substantive content
- Quality score ≥40% on the multi-dimensional assessment
- No stub indicators (e.g., "بحاجة للتوسيع", "needs expansion")
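
A compact Python sketch of these checks (the Arabic character range and the sentence splitter are simplifications; the pipeline's exact rules are not reproduced here):

```python
import re

ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")

def passes_filters(text: str, quality_score: float) -> bool:
    """Apply the retention criteria listed above."""
    if not (100 <= len(text) <= 50_000):
        return False
    letters = [c for c in text if not c.isspace()]
    arabic = sum(1 for c in letters if ARABIC_CHAR.match(c))
    if arabic / max(len(letters), 1) < 0.70:
        return False
    sentences = [s for s in re.split(r"[.!؟]", text) if s.strip()]
    if len(sentences) < 3:
        return False
    return quality_score >= 40.0
```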

## Dataset Metrics

### Length Distributions

**Article Lengths:**
- Min: 50 words
- Max: 20,757 words
- Median: 106 words
- Mean: 328.5 words
- Std Dev: 584.2 words

**Sentence Lengths:**
- Min: 1 word
- Max: 247 words
- Median: 16 words
- Mean: 19.7 words
- Std Dev: 12.3 words

**Word Lengths:**
- Min: 1 character
- Max: 42 characters
- Median: 4 characters
- Mean: 4.9 characters
- Std Dev: 2.8 characters

### Vocabulary Statistics

- **Total Unique Words**: 1,529,064
- **Vocabulary Richness**: 0.0063 (unique words ÷ total words; recomputable as sketched below)
- **Follows Zipf's Law**: Yes (natural language distribution)
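
Vocabulary richness here is the type-token ratio, which you can recompute for any slice of the corpus; a sketch:

```python
from collections import Counter

def vocab_stats(texts):
    """Type-token ratio and top words for an iterable of article texts."""
    counts = Counter(word for text in texts for word in text.split())
    total = sum(counts.values())
    richness = len(counts) / total  # 1,529,064 / 244,153,780 ≈ 0.0063
    return richness, counts.most_common(5)
```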

**Most Frequent Words:**

| Rank | Word (Arabic) | Translation | Frequency | % |
|------|---------------|-------------|-----------|---|
| 1 | في | in | 9,778,012 | 4.01% |
| 2 | من | from | 7,346,952 | 3.01% |
| 3 | على | on | 3,324,220 | 1.36% |
| 4 | إلى | to | 2,453,720 | 1.01% |
| 5 | أن | that | 1,595,356 | 0.65% |

## Technical Specifications

- **Format**: JSONL (JSON Lines)
- **Encoding**: UTF-8
- **Language**: Modern Standard Arabic (ar)
- **Total Size**: 2.8 GB (compressed)
- **Processing Date**: January 2025
- **License**: Apache 2.0
- **Python Compatibility**: 3.7+

## Comparison with Other Arabic Datasets

| Dataset | Words | Articles | Domain | Quality | Year | License |
|---------|-------|----------|--------|---------|------|---------|
| Arabic Gigaword | 848M | - | News | Moderate | 2011 | LDC |
| AraBERT Corpus | 70M | - | Mixed | Good | 2020 | MIT |
| OSCAR-Arabic | 22B | - | Web | Variable | 2019 | CC0 |
| mC4-Arabic | 42B | - | Web | Variable | 2021 | ODC-BY |
| **ArabicText-Large** | **244M** | **743K** | **Encyclopedia** | **High** | **2025** | **Apache 2.0** |

## Limitations

- **Dialectal Coverage**: Primarily Modern Standard Arabic (MSA); limited dialectal variation
- **Domain Bias**: Encyclopedic content may not represent colloquial or conversational Arabic
- **Temporal Coverage**: Content reflects knowledge up to the dataset collection date (2025)
- **Size Trade-off**: Smaller than billion-word web corpora, but higher quality

## Future Enhancements

Planned improvements include:

- Dialectal Arabic expansion (Egyptian, Levantine, Gulf, Maghrebi)
- Domain diversification (literature, technical documents, news)
- Parallel corpus creation (Arabic-English alignments)
- Linguistic annotations (POS tags, NER, dependency parsing)
- Regular updates with new content

## License

This dataset is released under the **Apache License 2.0**.

```
Copyright 2025 Jaber Jaber, Bassam Alkasasbeh

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{arabictext_large_2025,
  title={ArabicText-Large: A Comprehensive 244M-Word Corpus for Arabic Language Model Training},
  author={Jaber, Jaber and Alkasasbeh, Bassam},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/Jr23xd23/ArabicText-Large}},
  note={High-quality Arabic corpus with 743K articles and 244M words}
}
```

**Research Paper:**

```bibtex
@inproceedings{arabictext2025,
  title={ArabicText-Large: A Comprehensive 244M-Word Corpus for Arabic Language Model Training},
  author={Jaber, Jaber and Alkasasbeh, Bassam},
  booktitle={Proceedings of [Conference]},
  year={2025}
}
```

## Contributing

We welcome community contributions:

- **Bug Reports**: Report data quality issues
- **Feature Requests**: Suggest improvements
- **Pull Requests**: Contribute preprocessing enhancements
- **Feedback**: Share your usage experience

## Contact

For questions or collaborations, please open an issue on the repository.

**Authors:**
- Jaber Jaber
- Bassam Alkasasbeh

## Acknowledgments

Special thanks to:
- The Arabic NLP community for valuable feedback
- Open-source contributors for tools and frameworks
- Researchers and practitioners using this dataset

---

**Dataset Homepage**: [ArabicText-Large](https://huggingface.co/datasets/Jr23xd23/ArabicText-Large)
**License**: Apache 2.0
**Authors**: Jaber Jaber, Bassam Alkasasbeh

*Built for advancing Arabic NLP research and development*