---
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---

# NLP: Sentiment Classification Dataset 

This is a bundled dataset for the NLP task of sentiment classification in English.

A sample project that uses this dataset is available: [GURA-gru-unit-for-recognizing-affect](https://github.com/NatLee/GURA-gru-unit-for-recognizing-affect).

## Content

- `myanimelist-sts`: This dataset is derived from MyAnimeList, a social networking and cataloging service for anime and manga fans. It contains user reviews with ratings, which we summarized using [skip-thoughts](https://pypi.org/project/skip-thoughts/). The original source is [myanimelist-comment-dataset](https://www.kaggle.com/datasets/natlee/myanimelist-comment-dataset), version `2023-05-11`.

- `aclImdb`: The ACL IMDB dataset is a large movie review dataset collected for sentiment analysis tasks. It contains 50,000 highly polar movie reviews, split evenly into a 25,000-review training set and a 25,000-review test set, each with an equal number of positive and negative reviews. The source is [sentiment](https://ai.stanford.edu/~amaas/data/sentiment/).

- `MR`: Movie Review Data (MR) contains 5,331 positive and 5,331 negative processed sentences/lines. It is suitable for binary sentiment classification and is a good starting point for text classification models. You can find the source at [movie-review-data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) under the `Sentiment polarity datasets` section.

- `MPQA`: The Multi-Perspective Question Answering (MPQA) dataset is a resource for opinion detection and sentiment analysis research. It consists of news articles from a wide variety of sources, annotated for opinions and other private states. You can get the source from [MPQA](https://mpqa.cs.pitt.edu/).

- `SST2`: The Stanford Sentiment Treebank version 2 (SST2) is a popular benchmark for sentence-level sentiment analysis. It includes movie review sentences with binary sentiment labels (positive or negative). You can obtain the dataset from [SST2](https://huggingface.co/datasets/sst2).

- `SUBJ`: The Subjectivity dataset is used for sentiment analysis research. It consists of 5,000 subjective and 5,000 objective processed sentences, which can help a model distinguish between subjective and objective (factual) statements. You can find the source at [movie-review-data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) under the `Subjectivity datasets` section. A sketch of how the bundled pickles can be loaded follows this list.
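
Each dataset ships as a pickle file. There is no official schema; the structures shown below are inferred from the tokenizer script later in this card, so treat them as assumptions. A minimal inspection sketch:

```python
import pickle

# Inspect the bundled pickles. The structures are inferred from the
# tokenizer script in this card, not from an official schema.
with open('./MR.pkl', 'rb') as p:
    mr = pickle.load(p)  # appears to be a DataFrame with a `sentence` column
print(len(mr.sentence))

with open('./aclImdb.pkl', 'rb') as p:
    x_test, y_test, x_train, y_train = pickle.load(p)  # four parallel lists
print(len(x_train), len(x_test))

with open('./myanimelist-sts.pkl', 'rb') as p:
    X, Y = pickle.load(p)  # paired sentence lists
print(len(X), len(Y))
```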


## Tokenizer

The following script fits a single Keras `Tokenizer` across all six datasets and saves it for reuse:

```python
from pathlib import Path
import pickle
from tensorflow.keras.preprocessing.text import Tokenizer

def check_data_path(file_path: str) -> bool:
    """Return True if the dataset file exists, logging the result."""
    if Path(file_path).exists():
        print(f'[Path][OK] {file_path}')
        return True
    print(f'[Path][FAILED] {file_path}')
    return False

# Pool of raw sentences collected from every dataset
sentences = []

# =====================
# Anime Reviews
# =====================
dataset = './myanimelist-sts.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        X, Y = pickle.load(p)
        sentences.extend(X)
        sentences.extend(Y)


# =====================
# MPQA
# =====================
dataset = './MPQA.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mpqa = pickle.load(p)
        sentences.extend(list(mpqa.sentence))


# =====================
# IMDB
# =====================
dataset = './aclImdb.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        x_test, y_test, x_train, y_train = pickle.load(p)
        # Only the review texts are needed for the tokenizer;
        # y_train/y_test are sentiment labels, not sentences.
        sentences.extend(x_train)
        sentences.extend(x_test)

# =====================
# MR
# =====================
dataset = './MR.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mr = pickle.load(p)
        sentences.extend(list(mr.sentence))

# =====================
# SST2
# =====================
dataset = './SST2.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        sst2 = pickle.load(p)
        sentences.extend(list(sst2.sentence))

# =====================
# SUBJ
# =====================
dataset = './SUBJ.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        subj = pickle.load(p)
        sentences.extend(list(subj.sentence))

# Coerce every entry to a string; `fit_on_texts` expects text.
sentences = [str(s) for s in sentences]

# Tokenize the sentences.
# `num_words` caps the vocabulary used by `texts_to_sequences` to the most
# frequent words; unknown words map to the "{OOV}" token.
myTokenizer = Tokenizer(
    num_words=100,
    oov_token="{OOV}"
)
myTokenizer.fit_on_texts(sentences)
# `word_index` records every word seen, regardless of `num_words`.
print(myTokenizer.word_index)

with open('./big-tokenizer.pkl', 'wb') as p:
    pickle.dump(myTokenizer, p)

```
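
Once saved, the tokenizer can be reloaded and applied to new text. A minimal usage sketch (the example sentences and the `maxlen` value are illustrative, not part of the original project):

```python
import pickle

from tensorflow.keras.preprocessing.sequence import pad_sequences

# Reload the tokenizer fitted above
with open('./big-tokenizer.pkl', 'rb') as p:
    tokenizer = pickle.load(p)

texts = ['this movie was surprisingly good', 'a dull and lifeless plot']
sequences = tokenizer.texts_to_sequences(texts)  # words -> integer ids
padded = pad_sequences(sequences, maxlen=100, padding='post')
print(padded.shape)  # (2, 100)
```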