entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
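The listing above is a column schema (name, dtype, and value statistics) from a dataset-viewer export. As a minimal sketch of how such a schema might be pre-processed before analysis, the snippet below (column names copied from the schema; everything else is illustrative) partitions the columns and drops the ones typed `null`, which carry no data in this dump:

```python
# Sketch: separate the usable bibliographic columns from the all-null
# metric columns. Names and dtypes are taken from the schema above
# (trimmed for brevity); this is not an official loader for the dataset.
SCHEMA = {
    "entry_type": "stringclasses", "citation_key": "stringlengths",
    "title": "stringlengths", "editor": "stringclasses",
    "month": "stringclasses", "year": "stringdate",
    "address": "stringclasses", "publisher": "stringclasses",
    "url": "stringlengths", "author": "stringlengths",
    "booktitle": "stringclasses", "pages": "stringlengths",
    "abstract": "stringlengths", "journal": "stringclasses",
    "volume": "stringclasses", "doi": "stringlengths",
    "uas": "null", "recall": "null", "a": "null", "b": "null",
    "c": "null", "k": "null", "__index_level_0__": "int64",
}

# Columns typed "null" can be dropped up front.
usable = {name: t for name, t in SCHEMA.items() if t != "null"}
dropped = sorted(name for name, t in SCHEMA.items() if t == "null")
print(dropped)  # → ['a', 'b', 'c', 'k', 'recall', 'uas']
```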
inproceedings
mubarak-etal-2022-overview
Overview of {OSACT}5 Shared Task on {A}rabic Offensive Language and Hate Speech Detection
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.20/
Mubarak, Hamdy and Al-Khalifa, Hend and Al-Thubaity, Abdulmohsen
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
162--166
This paper provides an overview of the shared task on detecting offensive language, hate speech, and fine-grained hate speech at the fifth workshop on Open-Source Arabic Corpora and Processing Tools (OSACT5). The shared task comprised three subtasks: Subtask A, involving the detection of offensive language, which contains socially unacceptable or impolite content including any kind of explicit or implicit insults or attacks against individuals or groups; Subtask B, involving the detection of hate speech, which contains offensive language targeting individuals or groups based on common characteristics such as race, religion, gender, etc.; and Subtask C, involving the detection of the fine-grained type of hate speech, which takes one value from the following types: (i) race/ethnicity/nationality, (ii) religion/belief, (iii) ideology, (iv) disability/disease, (v) social class, and (vi) gender. In total, 40 teams signed up to participate in Subtask A, and 17 of them submitted test runs. For Subtask B, 26 teams signed up and 12 of them submitted runs. For Subtask C, 23 teams signed up and 10 of them submitted runs. 10 teams submitted papers describing their participation in one subtask or more, and 8 papers were accepted. We present and analyze all submissions in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,491
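Each record in this dump lists its values in the column order of the schema above. A row could be folded back into a BibTeX entry roughly as follows; `to_bibtex` is a hypothetical helper, and the sample row is trimmed to a few fields of the first record:

```python
# Sketch: turn one flattened row (a dict of column -> value) back into
# a BibTeX entry string, skipping null columns. Illustrative only.
def to_bibtex(row: dict) -> str:
    row = dict(row)  # avoid mutating the caller's dict
    entry_type = row.pop("entry_type")
    key = row.pop("citation_key")
    body = ",\n".join(
        f"  {field} = {{{value}}}"
        for field, value in row.items()
        if value is not None  # null columns are dropped
    )
    return f"@{entry_type}{{{key},\n{body}\n}}"

row = {
    "entry_type": "inproceedings",
    "citation_key": "mubarak-etal-2022-overview",
    "year": "2022",
    "pages": "162--166",
    "doi": None,  # example of a null column
}
entry = to_bibtex(row)
print(entry)
```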
inproceedings
mostafa-etal-2022-gof
{GOF} at {A}rabic Hate Speech 2022: Breaking The Loss Function Convention For Data-Imbalanced {A}rabic Offensive Text Detection
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.21/
Mostafa, Ali and Mohamed, Omar and Ashraf, Ali
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
167--175
With the rise of social media platforms, we need to ensure that all users have a secure online experience by identifying and eliminating offensive language and hate speech. Detecting such content is challenging, particularly in the Arabic language, due to a number of challenges and limitations. In general, one of the most challenging issues in real-world datasets is a long-tailed data distribution. We report our submission to the Offensive Language and Hate Speech Detection shared task organized with the 5th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT5); in our approach, we focused on overcoming this problem by experimenting with alternative loss functions rather than using the traditional weighted cross-entropy loss. Finally, we evaluated various pre-trained deep learning models using the suggested loss functions to determine the optimal model. On the development and test sets, our final model achieved 86.97{\%} and 85.17{\%}, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,492
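The GOF abstract above describes replacing the traditional weighted cross-entropy with alternative loss functions for long-tailed data. Focal loss is one standard example of such an alternative, sketched below purely as an illustration (the paper's actual loss choices are not specified in this dump):

```python
import math

def focal_loss(p: float, y: int, gamma: float = 2.0) -> float:
    """Binary focal loss for a single example: down-weights easy,
    well-classified examples so training focuses on the hard (often
    minority-class) ones. p is the predicted probability of the
    positive class, y the gold label (0 or 1)."""
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    return -((1.0 - pt) ** gamma) * math.log(pt)

# An easy example (pt = 0.95) contributes far less loss than a hard
# one (pt = 0.30); plain cross-entropy would show a smaller gap.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
print(easy < hard)  # → True
```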
inproceedings
bennessir-etal-2022-icompass
i{C}ompass at {A}rabic Hate Speech 2022: Detect Hate Speech Using {QRNN} and Transformers
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.22/
Bennessir, Mohamed Aziz and Rhouma, Malek and Haddad, Hatem and Fourati, Chayma
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
176--180
This paper provides a detailed overview of the system we submitted as part of the OSACT2022 shared task on Fine-Grained Hate Speech Detection on Arabic Twitter, its outcome, and its limitations. Our submission uses a hard-parameter-sharing multi-task model that consists of a shared layer containing state-of-the-art contextualized text representation models such as MarBERT, AraBERT, and ArBERT, and task-specific layers that were fine-tuned with quasi-recurrent neural networks (QRNN) for each downstream subtask. The results show that MarBERT fine-tuned with QRNN outperforms all of the previously mentioned models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,493
inproceedings
magnossao-de-paula-etal-2022-upv
{UPV} at the {A}rabic Hate Speech 2022 Shared Task: Offensive Language and Hate Speech Detection using Transformers and Ensemble Models
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.23/
Magnoss{\~a}o de Paula, Angel Felipe and Rosso, Paolo and Bensalem, Imene and Zaghouani, Wajdi
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
181--185
This paper describes our participation in the shared task Fine-Grained Hate Speech Detection on Arabic Twitter at the 5th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT). The shared task is divided into three detection subtasks: (i) Detect whether a tweet is offensive or not; (ii) Detect whether a tweet contains hate speech or not; and (iii) Detect the fine-grained type of hate speech (race, religion, ideology, disability, social class, and gender). It is an effort toward the goal of mitigating the spread of offensive language and hate speech in Arabic-written content on social media platforms. To solve the three subtasks, we employed six different transformer versions: AraBert, AraElectra, Albert-Arabic, AraGPT2, mBert, and XLM-Roberta. We experimented with models based on encoder and decoder blocks and models exclusively trained on Arabic and also on several languages. Likewise, we applied two ensemble methods: Majority vote and Highest sum. Our approach outperformed the official baseline in all the subtasks, not only considering F1-macro results but also accuracy, recall, and precision. The results suggest that the Highest sum is an excellent approach to encompassing transformer output to create an ensemble since this method offered at least top-two F1-macro values across all the experiments performed on development and test data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,494
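The UPV abstract above names two ensemble methods, majority vote and highest sum. A toy sketch of both (the probability values are made up) shows that the two schemes can disagree on the same set of model outputs:

```python
# Sketch of the two ensemble methods named in the abstract, applied to
# toy per-model class-probability outputs (values are illustrative).
from collections import Counter

probs = [  # one row per model: [P(not-offensive), P(offensive)]
    [0.60, 0.40],
    [0.45, 0.55],
    [0.48, 0.52],
]

# Majority vote: each model casts its argmax label; the most common wins.
votes = [max(range(2), key=lambda c: p[c]) for p in probs]
majority = Counter(votes).most_common(1)[0][0]

# Highest sum: add the probabilities per class, pick the larger total.
sums = [sum(p[c] for p in probs) for c in range(2)]
highest_sum = max(range(2), key=lambda c: sums[c])

# Two models prefer class 1, so the vote says 1; but model 1's strong
# preference for class 0 tips the probability sum the other way.
print(majority, highest_sum)  # → 1 0
```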
inproceedings
alkhamissi-diab-2022-meta
Meta {AI} at {A}rabic Hate Speech 2022: {M}ulti{T}ask Learning with Self-Correction for Hate Speech Classification
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.24/
AlKhamissi, Badr and Diab, Mona
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
186--193
In this paper, we tackle the Arabic Fine-Grained Hate Speech Detection shared task and demonstrate significant improvements over reported baselines for its three subtasks. The tasks are to predict if a tweet contains (1) Offensive language; and whether it is considered (2) Hate Speech or not and if so, then predict the (3) Fine-Grained Hate Speech label from one of six categories. Our final solution is an ensemble of models that employs multitask learning and a self-consistency correction method yielding 82.7{\%} on the hate speech subtask{---}reflecting a 3.4{\%} relative improvement compared to previous work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,495
inproceedings
makram-etal-2022-chillax
{CHILLAX} - at {A}rabic Hate Speech 2022: A Hybrid Machine Learning and Transformers based Model to Detect {A}rabic Offensive and Hate Speech
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.25/
Makram, Kirollos and Nessim, Kirollos George and Abd-Almalak, Malak Emad and Roshdy, Shady Zekry and Salem, Seif Hesham and Thabet, Fady Fayek and Mohamed, Ensaf Hussien
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
194--199
Hate speech and offensive language have become a crucial problem nowadays due to the extensive usage of social media by people of different genders, nationalities, religions, and other characteristics, allowing anyone to share their thoughts and opinions. In this research paper, we proposed a hybrid model for the first and second tasks of OSACT2022. This model used the Arabic pre-trained BERT language model MARBERT for feature extraction of the Arabic tweets in the dataset provided by the OSACT2022 shared task, then fed the features to two classic machine learning classifiers (Logistic Regression, Random Forest). The best results for the offensive tweet detection task were achieved by the Logistic Regression model, with accuracy, precision, recall, and F1-score of 80{\%}, 78{\%}, 78{\%}, and 78{\%}, respectively. The results for the hate speech tweet detection task were 89{\%}, 72{\%}, 80{\%}, and 76{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,496
inproceedings
shapiro-etal-2022-alexu
{A}lex{U}-{AIC} at {A}rabic Hate Speech 2022: Contrast to Classify
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.26/
Shapiro, Ahmad and Khalafallah, Ayman and Torki, Marwan
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
200--208
Online presence on social media platforms such as Facebook and Twitter has become a daily habit for internet users. Despite the vast number of services the platforms offer their users, users suffer from cyber-bullying, which leads to mental abuse and may escalate to physical harm against individuals or targeted groups. In this paper, we present our submission to the Arabic Hate Speech 2022 Shared Task Workshop (OSACT5 2022) using the associated Arabic Twitter dataset. The shared task consists of three sub-tasks: Sub-task A focuses on detecting whether a tweet is offensive or not; for offensive tweets, Sub-task B focuses on detecting whether the tweet is hate speech or not; and for hate speech tweets, Sub-task C focuses on detecting the fine-grained type of hate speech among six classes. Transformer models have proved their efficiency in classification tasks but tend to over-fit when fine-tuned on small or imbalanced datasets. We overcome this limitation by investigating multiple training paradigms, such as contrastive learning and multi-task learning, along with classification fine-tuning and an ensemble of our top 5 performers. Our proposed solution achieved 0.841, 0.817, and 0.476 macro-F1 average in sub-tasks A, B, and C, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,497
inproceedings
elkaref-abu-elkheir-2022-guct
{GUCT} at {A}rabic Hate Speech 2022: Towards a Better Isotropy for Hatespeech Detection
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.27/
Elkaref, Nehal and Abu-Elkheir, Mervat
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
209--213
Hate speech is an increasingly common occurrence in verbal and textual exchanges on online platforms, where many users, especially those from vulnerable minorities, are in danger of being attacked or harassed via text messages, posts, comments, or articles. It is therefore crucial to detect and filter out hate speech in the various forms of text encountered on online and social platforms. In this paper, we present our work on detecting hate speech in dialectal Arabic tweets as part of the OSACT shared task on Fine-grained Hate Speech Detection. Tweets are normally short and hence do not provide sufficient context for language models, which makes classification challenging. To contribute to Sub-task A, we leverage MARBERT's pre-trained contextual word representations and aim to improve their semantic quality using a cluster-based approach. Our work explores MARBERT's embedding space and assesses its geometric properties in order to achieve better representations and subsequently better classification performance. We propose to improve the isotropy of MARBERT's word representations via clustering. We compare the word representations generated by our approach to MARBERT's default word representations by feeding each to a bidirectional LSTM to detect offensive and non-offensive tweets. Our results show that enhancing the isotropy of an embedding space can boost performance. Our system scores 81.2{\%} accuracy and a macro-averaged F1 score of 79.1{\%} on Sub-task A's development set, and achieves 76.5{\%} accuracy and an F1 score of 74.2{\%} on the test set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,498
inproceedings
alzubi-etal-2022-aixplain
ai{X}plain at {A}rabic Hate Speech 2022: An Ensemble Based Approach to Detecting Offensive Tweets
Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.osact-1.28/
Alzubi, Salaheddin and Ferreira, Thiago Castro and Pavanelli, Lucas and Al-Badrashiny, Mohamed
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
214--217
Abusive speech on online platforms has a detrimental effect on users' mental health. This warrants the need for innovative solutions that automatically moderate content, especially on online platforms such as Twitter, where a user's anonymity is loosely controlled. This paper outlines aiXplain Inc.'s ensemble-based approach to detecting offensive speech in the Arabic language based on OSACT5's shared Sub-task A. Additionally, this paper highlights multiple challenges that may hinder progress on detecting abusive speech and provides potential avenues and techniques that may lead to significant progress.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,499
inproceedings
park-jeoung-2022-raison
Raison d'{\^e}tre of the benchmark dataset: A Survey of Current Practices of Benchmark Dataset Sharing Platforms
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.1/
Park, Jaihyun and Jeoung, Sullam
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
1--10
This paper critically examines the current practices of benchmark dataset sharing in NLP and suggests a better way to inform reusers of benchmark datasets. As dataset sharing platforms play a key role not only in distributing datasets but also in informing potential reusers about them, we believe data-sharing platforms should provide a comprehensive context for the datasets. We survey four benchmark dataset sharing platforms, HuggingFace, PapersWithCode, TensorFlow, and PyTorch, to diagnose the current practices of how datasets are shared, and which metadata is shared or omitted. Specifically, drawing on the concept of data curation, which considers future reuse when data is made public, we advance the direction that benchmark dataset sharing platforms should take. We identify that the four benchmark platforms have different practices of using metadata and that there is a lack of consensus on what social impact metadata is. We believe the missing discussion around social impact on dataset sharing platforms has to do with the failed agreement on who should be in charge. We propose that benchmark datasets should develop social impact metadata and that data curators should take a role in managing it.
null
null
10.18653/v1/2022.nlppower-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,509
inproceedings
you-lowd-2022-towards
Towards Stronger Adversarial Baselines Through Human-{AI} Collaboration
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.2/
You, Wencong and Lowd, Daniel
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
11--21
Natural language processing (NLP) systems are often used for adversarial tasks such as detecting spam, abuse, hate speech, and fake news. Properly evaluating such systems requires dynamic evaluation that searches for weaknesses in the model, rather than a static test set. Prior work has evaluated such models on both manually and automatically generated examples, but both approaches have limitations: manually constructed examples are time-consuming to create and are limited by the imagination and intuition of the creators, while automatically constructed examples are often ungrammatical or labeled inconsistently. We propose to combine human and AI expertise in generating adversarial examples, benefiting from humans' expertise in language and automated attacks' ability to probe the target system more quickly and thoroughly. We present a system that facilitates attack construction, combining human judgment with automated attacks to create better attacks more efficiently. Preliminary results from our own experimentation suggest that human-AI hybrid attacks are more effective than either human-only or AI-only attacks. A complete user study to validate these hypotheses is still pending.
null
null
10.18653/v1/2022.nlppower-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,510
inproceedings
naseem-etal-2022-benchmarking
Benchmarking for Public Health Surveillance tasks on Social Media with a Domain-Specific Pretrained Language Model
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.3/
Naseem, Usman and Lee, Byoung Chan and Khushi, Matloob and Kim, Jinman and Dunn, Adam
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
22--31
User-generated text on social media enables health workers to keep track of information, identify possible outbreaks, forecast disease trends, monitor emergency cases, and ascertain disease awareness and response to official health correspondence. This exchange of health information on social media has been regarded as an attempt to enhance public health surveillance (PHS). Despite its potential, the technology is still in its early stages and is not ready for widespread application. Advancements in pretrained language models (PLMs) have facilitated the development of several domain-specific PLMs and a variety of downstream applications. However, there are no PLMs for social media tasks involving PHS. We present and release PHS-BERT, a transformer-based PLM, to identify tasks related to public health surveillance on social media. We compared and benchmarked the performance of PHS-BERT on 25 datasets from different social media platforms related to 7 different PHS tasks. Compared with existing PLMs that are mainly evaluated on limited tasks, PHS-BERT achieved state-of-the-art performance on all 25 tested datasets, showing that our PLM is robust and generalizable across common PHS tasks. By making PHS-BERT available, we aim to help the community reduce the computational cost and to introduce new baselines for future work across various PHS-related tasks.
null
null
10.18653/v1/2022.nlppower-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,511
inproceedings
harbecke-etal-2022-micro
Why only Micro-F1? Class Weighting of Measures for Relation Classification
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.4/
Harbecke, David and Chen, Yuxuan and Hennig, Leonhard and Alt, Christoph
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
32--41
Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1 or AUC. In this work, we analyze weighting schemes, such as micro and macro, for imbalanced datasets. We introduce a framework for weighting schemes, where existing schemes are extremes, and two new intermediate schemes. We show that reporting results of different weighting schemes better highlights strengths and weaknesses of a model.
null
null
10.18653/v1/2022.nlppower-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,512
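The abstract above argues that micro and macro weighting schemes can tell different stories on imbalanced data. A small self-contained example (toy labels, not from the paper) makes the gap concrete; with exactly one label per example, micro-F1 reduces to plain accuracy:

```python
# Sketch: micro- vs macro-averaged F1 on a toy imbalanced label set.
# Macro averages per-class F1 equally, so the missed minority example
# drags it down; micro weights every example equally.
gold = ["maj"] * 8 + ["min"] * 2
pred = ["maj"] * 8 + ["maj", "min"]   # one minority example missed

def f1(label):
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label != g for g, p in zip(gold, pred))
    fn = sum(g == label != p for g, p in zip(gold, pred))
    return 2 * tp / (2 * tp + fp + fn)

labels = ["maj", "min"]
macro = sum(f1(l) for l in labels) / len(labels)
# Single-label classification: micro-F1 equals accuracy.
micro = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(round(macro, 3), round(micro, 3))  # → 0.804 0.9
```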
inproceedings
keleg-etal-2022-automatically
Automatically Discarding Straplines to Improve Data Quality for Abstractive News Summarization
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.5/
Keleg, Amr and Lindemann, Matthias and Liu, Danyang and Long, Wanqiu and Webber, Bonnie L.
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
42--51
Recent improvements in automatic news summarization fundamentally rely on large corpora of news articles and their summaries. These corpora are often constructed by scraping news websites, which results in including not only summaries but also other kinds of texts. Apart from more generic noise, we identify straplines as a form of text scraped from news websites that commonly turn out not to be summaries. The presence of these non-summaries threatens the validity of scraped corpora as benchmarks for news summarization. We have annotated extracts from two news sources that form part of the Newsroom corpus (Grusky et al., 2018), labeling those which were straplines, those which were summaries, and those which were both. We present a rule-based strapline detection method that achieves good performance on a manually annotated test set. Automatic evaluation indicates that removing straplines and noise from the training data of a news summarizer results in higher quality summaries, with improvements as high as 7 points ROUGE score.
null
null
10.18653/v1/2022.nlppower-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,513
inproceedings
blagec-etal-2022-global
A global analysis of metrics used for measuring performance in natural language processing
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.6/
Blagec, Kathrin and Dorffner, Georg and Moradi, Milad and Ott, Simon and Samwald, Matthias
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
52--63
Measuring the performance of natural language processing models is challenging. Traditionally used metrics, such as BLEU and ROUGE, originally devised for machine translation and summarization, have been shown to suffer from low correlation with human judgment and a lack of transferability to other tasks and languages. In the past 15 years, a wide range of alternative metrics have been proposed. However, it is unclear to what extent this has had an impact on NLP benchmarking efforts. Here we provide the first large-scale cross-sectional analysis of metrics used for measuring performance in natural language processing. We curated, mapped, and systematized more than 3500 machine learning model performance results from the open repository 'Papers with Code' to enable a global and comprehensive analysis. Our results suggest that the large majority of natural language processing metrics currently used have properties that may result in an inadequate reflection of a model's performance. Furthermore, we found that ambiguities and inconsistencies in the reporting of metrics may lead to difficulties in interpreting and comparing model performances, impairing transparency and reproducibility in NLP research.
null
null
10.18653/v1/2022.nlppower-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,514
inproceedings
ahuja-etal-2022-beyond
Beyond Static models and test sets: Benchmarking the potential of pre-trained models across tasks and languages
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.7/
Ahuja, Kabir and Dandapat, Sandipan and Sitaram, Sunayana and Choudhury, Monojit
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
64--74
Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR support around 100 languages, most existing multilingual NLP benchmarks provide evaluation data in only a handful of these languages, with little linguistic diversity. We argue that this makes existing practices in multilingual evaluation unreliable and fails to provide a full picture of the performance of MMLMs across the linguistic landscape. We propose that recent work on performance prediction for NLP tasks can serve as a potential solution for fixing benchmarking in multilingual NLP by utilizing features related to data and language typology to estimate the performance of an MMLM on different languages. We compare performance prediction with translating test data in a case study on four different multilingual datasets, and observe that these methods can provide reliable estimates of performance that are often on par with the translation-based approaches, without the need for any additional translation or evaluation costs.
null
null
10.18653/v1/2022.nlppower-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,515
inproceedings
henrique-luz-de-araujo-roth-2022-checking
Checking {H}ate{C}heck: a cross-functional analysis of behaviour-aware learning for hate speech detection
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.8/
Henrique Luz de Araujo, Pedro and Roth, Benjamin
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
75--83
Behavioural testing{---}verifying system capabilities by validating human-designed input-output pairs{---}is an alternative evaluation method of natural language processing systems proposed to address the shortcomings of the standard approach: computing metrics on held-out data. While behavioural tests capture human prior knowledge and insights, there has been little exploration on how to leverage them for model training and development. With this in mind, we explore behaviour-aware learning by examining several fine-tuning schemes using HateCheck, a suite of functional tests for hate speech detection systems. To address potential pitfalls of training on data originally intended for evaluation, we train and evaluate models on different configurations of HateCheck by holding out categories of test cases, which enables us to estimate performance on potentially overlooked system properties. The fine-tuning procedure led to improvements in the classification accuracy of held-out functionalities and identity groups, suggesting that models can potentially generalise to overlooked functionalities. However, performance on held-out functionality classes and i.i.d. hate speech detection data decreased, which indicates that generalisation occurs mostly across functionalities from the same class and that the procedure led to overfitting to the HateCheck data distribution.
null
null
10.18653/v1/2022.nlppower-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,516
inproceedings
bianchi-etal-2022-language
Language Invariant Properties in Natural Language Processing
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.9/
Bianchi, Federico and Nozza, Debora and Hovy, Dirk
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
84--92
Meaning is context-dependent, but many properties of language (should) remain the same even if we transform the context. For example, sentiment or speaker properties should be the same in a translation and original of a text. We introduce language invariant properties: i.e., properties that should not change when we transform text, and how they can be used to quantitatively evaluate the robustness of transformation algorithms. Language invariant properties can be used to define novel benchmarks to evaluate text transformation methods. In our work we use translation and paraphrasing as examples, but our findings apply more broadly to any transformation. Our results indicate that many NLP transformations change properties. We additionally release a tool as a proof of concept to evaluate the invariance of transformation applications.
null
null
10.18653/v1/2022.nlppower-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,517
inproceedings
eyzaguirre-etal-2022-dact
{DACT}-{BERT}: Differentiable Adaptive Computation Time for an Efficient {BERT} Inference
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.10/
Eyzaguirre, Cristobal and del Rio, Felipe and Araujo, Vladimir and Soto, Alvaro
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
93--99
Large-scale pre-trained language models have shown remarkable results in diverse NLP applications. However, these performance gains have been accompanied by a significant increase in computation time and model size, stressing the need to develop new or complementary strategies to increase the efficiency of these models. This paper proposes DACT-BERT, a differentiable adaptive computation time strategy for BERT-like models. DACT-BERT adds an adaptive computational mechanism to BERT`s regular processing pipeline, which controls the number of Transformer blocks that need to be executed at inference time. By doing this, the model learns to combine the most appropriate intermediate representations for the task at hand. Our experiments demonstrate that our approach, when compared to the baselines, excels on a reduced computational regime and is competitive in other less restrictive ones. Code available at \url{https://github.com/ceyzaguirre4/dact_bert}.
null
null
10.18653/v1/2022.nlppower-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,518
inproceedings
attanasio-etal-2022-benchmarking
Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.11/
Attanasio, Giuseppe and Nozza, Debora and Pastor, Eliana and Hovy, Dirk
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
100--112
Transformer-based Natural Language Processing models have become the standard for hate speech detection. However, the unconscious use of these techniques for such a critical task comes with negative consequences. Various works have demonstrated that hate speech classifiers are biased. These findings have prompted efforts to explain classifiers, mainly using attribution methods. In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection. We cover four post-hoc token attribution approaches to explain the predictions of Transformer-based misogyny classifiers in English and Italian. Further, we compare generated attributions to attention analysis. We find that only two algorithms provide faithful explanations aligned with human expectations. Gradient-based methods and attention, however, show inconsistent outputs, making their value for explanations questionable for hate speech detection tasks.
null
null
10.18653/v1/2022.nlppower-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,519
inproceedings
ang-etal-2022-characterizing
Characterizing the Efficiency vs. Accuracy Trade-off for Long-Context {NLP} Models
Shavrina, Tatiana and Mikhailov, Vladislav and Malykh, Valentin and Artemova, Ekaterina and Serikov, Oleg and Protasov, Vitaly
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlppower-1.12/
Ang, Phyllis and Dhingra, Bhuwan and Wu Wills, Lisa
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
113--121
With many real-world applications of Natural Language Processing (NLP) comprising of long texts, there has been a rise in NLP benchmarks that measure the accuracy of models that can handle longer input sequences. However, these benchmarks do not consider the trade-offs between accuracy, speed, and power consumption as input sizes or model sizes are varied. In this work, we perform a systematic study of this accuracy vs. efficiency trade-off on two widely used long-sequence models - Longformer-Encoder-Decoder (LED) and Big Bird - during fine-tuning and inference on four datasets from the SCROLLS benchmark. To study how this trade-off differs across hyperparameter settings, we compare the models across four sequence lengths (1024, 2048, 3072, 4096) and two model sizes (base and large) under a fixed resource budget. We find that LED consistently achieves better accuracy at lower energy costs than Big Bird. For summarization, we find that increasing model size is more energy efficient than increasing sequence length for higher accuracy. However, this comes at the cost of a large drop in inference speed. For question answering, we find that smaller models are both more efficient and more accurate due to the larger training batch sizes possible under a fixed resource budget.
null
null
10.18653/v1/2022.nlppower-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,520
inproceedings
hautli-janisz-etal-2022-disagreement
Disagreement Space in Argument Analysis
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.1/
Hautli-Janisz, Annette and Schad, Ella and Reed, Chris
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
1--9
For a highly subjective task such as recognising speaker intention and argumentation, the traditional way of generating gold standards is to aggregate a number of labels into a single one. However, this seriously neglects the underlying richness that characterises discourse and argumentation and is also, in some cases, straightforwardly impossible. In this paper, we present QT30nonaggr, the first corpus of non-aggregated argument annotation, which will be openly available upon publication. QT30nonaggr encompasses 10{\%} of QT30, the largest corpus of dialogical argumentation and analysed broadcast political debate currently available with 30 episodes of BBC`s {\textquoteleft}Question Time' from 2020 and 2021. Based on a systematic and detailed investigation of annotation judgements across all steps of the annotation process, we structure the disagreement space with a taxonomy of the types of label disagreements in argument annotation, identifying the categories of annotation errors, fuzziness and ambiguity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,522
inproceedings
biester-etal-2022-analyzing
Analyzing the Effects of Annotator Gender across {NLP} Tasks
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.2/
Biester, Laura and Sharma, Vanita and Kazemi, Ashkan and Deng, Naihao and Wilson, Steven and Mihalcea, Rada
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
10--19
Recent studies have shown that for subjective annotation tasks, the demographics, lived experiences, and identity of annotators can have a large impact on how items are labeled. We expand on this work, hypothesizing that gender may correlate with differences in annotations for a number of NLP benchmarks, including those that are fairly subjective (e.g., affect in text) and those that are typically considered to be objective (e.g., natural language inference). We develop a robust framework to test for differences in annotation across genders for four benchmark datasets. While our results largely show a lack of statistically significant differences in annotation by males and females for these tasks, the framework can be used to analyze differences in annotation between various other demographic groups in future work. Finally, we note that most datasets are collected without annotator demographics and released only in aggregate form; we call on the community to consider annotator demographics as data is collected, and to release dis-aggregated data to allow for further work analyzing variability among annotators.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,523
inproceedings
bizzoni-etal-2022-predicting
Predicting Literary Quality: How Perspectivist Should We Be?
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.3/
Bizzoni, Yuri and Lassen, Ida Marie and Peura, Telma and Thomsen, Mads Rosendahl and Nielbo, Kristoffer
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
20--25
Approaches in literary quality tend to belong to two main grounds: one sees quality as completely subjective, relying on the idiosyncratic nature of individual perspectives on the apperception of beauty; the other is ground-truth inspired, and attempts to find one or two values that predict something like an objective quality: the number of copies sold, for example, or the winning of a prestigious prize. While the first school usually does not try to predict quality at all, the second relies on a single majority vote in one form or another. In this article we discuss the advantages and limitations of these schools of thought and describe a different approach to reader`s quality judgments, which moves away from raw majority vote, but does try to create intermediate classes or groups of annotators. Drawing on previous works we describe the benefits and drawbacks of building similar annotation classes. Finally we share early results from a large corpus of literary reviews for an insight into which classes of readers might make most sense when dealing with the appreciation of literary quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,524
inproceedings
marchiori-manerba-etal-2022-bias
Bias Discovery within Human Raters: A Case Study of the Jigsaw Dataset
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.4/
Marchiori Manerba, Marta and Guidotti, Riccardo and Passaro, Lucia and Ruggieri, Salvatore
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
26--31
Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning. Recently, a perspectivist trend has emerged in the NLP community, focusing on the inadequacy of previous aggregation schemes, which suppose the existence of single ground truth. This assumption is particularly problematic for sensitive tasks involving subjective human judgments, such as toxicity detection. To address these issues, we propose a preliminary approach for bias discovery within human raters by exploring individual ratings for specific sensitive topics annotated in the texts. Our analysis`s object consists of the Jigsaw dataset, a collection of comments aiming at challenging online toxicity identification.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,525
inproceedings
glenn-etal-2022-viability
The Viability of Best-worst Scaling and Categorical Data Label Annotation Tasks in Detecting Implicit Bias
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.5/
Glenn, Parker and Jacobs, Cassandra L. and Thielk, Marvin and Chu, Yi
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
32--36
Annotating workplace bias in text is a noisy and subjective task. In encoding the inherently continuous nature of bias, aggregated binary classifications do not suffice. Best-worst scaling (BWS) offers a framework to obtain real-valued scores through a series of comparative evaluations, but it is often impractical to deploy to traditional annotation pipelines within industry. We present analyses of a small-scale bias dataset, jointly annotated with categorical annotations and BWS annotations. We show that there is a strong correlation between observed agreement and BWS score (Spearman`s r=0.72). We identify several shortcomings of BWS relative to traditional categorical annotation: (1) When compared to categorical annotation, we estimate BWS takes approximately 4.5x longer to complete; (2) BWS does not scale well to large annotation tasks with sparse target phenomena; (3) The high correlation between BWS and the traditional task shows that the benefits of BWS can be recovered from a simple categorically annotated, non-aggregated dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,526
inproceedings
kanclerz-etal-2022-ground
What If Ground Truth Is Subjective? Personalized Deep Neural Hate Speech Detection
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.6/
Kanclerz, Kamil and Gruza, Marcin and Karanowski, Konrad and Bielaniewicz, Julita and Milkowski, Piotr and Kocon, Jan and Kazienko, Przemyslaw
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
37--45
A unified gold standard commonly exploited in natural language processing (NLP) tasks requires high inter-annotator agreement. However, there are many subjective problems that should respect users individual points of view. Therefore in this paper, we evaluate three different personalized methods on the task of hate speech detection. The user-centered techniques are compared to the generalizing baseline approach. We conduct our experiments on three datasets including single-task and multi-task hate speech detection. For validation purposes, we introduce a new data-split strategy, preventing data leakage between training and testing. In order to better understand the model behavior for individual users, we carried out personalized ablation studies. Our experiments revealed that all models leveraging user preferences in any case provide significantly better results than most frequently used generalized approaches. This supports our overall observation that personalized models should always be considered in all subjective NLP tasks, including hate speech detection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,527
inproceedings
ngo-etal-2022-studemo
{S}tud{E}mo: A Non-aggregated Review Dataset for Personalized Emotion Recognition
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.7/
Ngo, Anh and Candri, Agri and Ferdinan, Teddy and Kocon, Jan and Korczynski, Wojciech
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
46--55
Humans' emotional perception is subjective by nature, in which each individual could express different emotions regarding the same textual content. Existing datasets for emotion analysis commonly depend on a single ground truth per data sample, derived from majority voting or averaging the opinions of all annotators. In this paper, we introduce a new non-aggregated dataset, namely StudEmo, that contains 5,182 customer reviews, each annotated by 25 people with intensities of eight emotions from Plutchik`s model, extended with valence and arousal. We also propose three personalized models that use not only textual content but also the individual human perspective, providing the model with different approaches to learning human representations. The experiments were carried out as a multitask classification on two datasets: our StudEmo dataset and GoEmotions dataset, which contains 28 emotional categories. The proposed personalized methods significantly improve prediction results, especially for emotions that have low inter-annotator agreement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,528
inproceedings
homan-etal-2022-annotator
Annotator Response Distributions as a Sampling Frame
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.8/
Homan, Christopher and Weerasooriya, Tharindu Cyril and Aroyo, Lora and Welty, Chris
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
56--65
Annotator disagreement is often dismissed as noise or the result of poor annotation process quality. Others have argued that it can be meaningful. But lacking a rigorous statistical foundation, the analysis of disagreement patterns can resemble a high-tech form of tea-leaf-reading. We contribute a framework for analyzing the variation of per-item annotator response distributions to data for humans-in-the-loop machine learning. We provide visualizations for, and use the framework to analyze the variance in, a crowdsourced dataset of hard-to-classify examples from the OpenImages archive.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,529
inproceedings
labat-etal-2022-variation
Variation in the Expression and Annotation of Emotions: A {W}izard of {O}z Pilot Study
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.9/
Labat, Sofie and Ackaert, Naomi and Demeester, Thomas and Hoste, Veronique
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
66--72
This pilot study employs the Wizard of Oz technique to collect a corpus of written human-computer conversations in the domain of customer service. The resulting dataset contains 192 conversations and is used to test three hypotheses related to the expression and annotation of emotions. First, we hypothesize that there is a discrepancy between the emotion annotations of the participant (the experiencer) and the annotations of our external annotator (the observer). Furthermore, we hypothesize that the personality of the participants has an influence on the emotions they expressed, and on the way they evaluated (annotated) these emotions. We found that for an external, trained annotator, not all emotion labels were equally easy to work with. We also noticed that the trained annotator had a tendency to opt for emotion labels that were more centered in the valence-arousal space, while participants made more {\textquoteleft}extreme' annotations. For the second hypothesis, we discovered a positive correlation between the personality trait extraversion and the emotion dimensions valence and dominance in our sample. Finally, for the third premise, we observed a positive correlation between the internal-external agreement on emotion labels and the personality traits conscientiousness and extraversion. Our insights and findings will be used in future research to conduct a larger Wizard of Oz experiment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,530
inproceedings
havens-etal-2022-beyond
Beyond Explanation: A Case for Exploratory Text Visualizations of Non-Aggregated, Annotated Datasets
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.10/
Havens, Lucy and Bach, Benjamin and Terras, Melissa and Alex, Beatrice
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
73--82
This paper presents an overview of text visualization techniques relevant for data perspectivism, aiming to facilitate analysis of annotated datasets for the datasets' creators and stakeholders. Data perspectivism advocates for publishing non-aggregated, annotated text data, recognizing that for highly subjective tasks, such as bias detection and hate speech detection, disagreements among annotators may indicate conflicting yet equally valid interpretations of a text. While the publication of non-aggregated, annotated data makes different interpretations of text corpora available, barriers still exist to investigating patterns and outliers in annotations of the text. Techniques from text visualization can overcome these barriers, facilitating intuitive data analysis for NLP researchers and practitioners, as well as stakeholders in NLP systems, who may not have data science or computing skills. In this paper we discuss challenges with current dataset creation practices and annotation platforms, followed by a discussion of text visualization techniques that enable open-ended, multi-faceted, and iterative analysis of annotated data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,531
inproceedings
sachdeva-etal-2022-measuring
The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.11/
Sachdeva, Pratik and Barreto, Renata and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia and Kennedy, Chris
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
83--94
We introduce the Measuring Hate Speech corpus, a dataset created to measure hate speech while adjusting for annotators' perspectives. It consists of 50,070 social media comments spanning YouTube, Reddit, and Twitter, labeled by 11,143 annotators recruited from Amazon Mechanical Turk. Each observation includes 10 ordinal labels: sentiment, disrespect, insult, attacking/defending, humiliation, inferior/superior status, dehumanization, violence, genocide, and a 3-valued hate speech benchmark label. The labels are aggregated using faceted Rasch measurement theory (RMT) into a continuous score that measures each comment`s location on a hate speech spectrum. The annotation experimental design assigned comments to multiple annotators in order to yield a linked network, allowing annotator disagreement (perspective) to be statistically summarized. Annotators' labeling strictness was estimated during the RMT scaling, projecting their perspective onto a linear measure that was adjusted for the hate speech score. Models that incorporate this annotator perspective parameter as an auxiliary input can generate label- and score-level predictions conditional on annotator perspective. The corpus includes the identity group targets of each comment (8 groups, 42 subgroups) and annotator demographics (6 groups, 40 subgroups), facilitating analyses of interactions between annotator- and comment-level identities, i.e. identity-related annotator perspective.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,532
inproceedings
weerasooriya-etal-2022-improving
Improving Label Quality by Jointly Modeling Items and Annotators
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.12/
Weerasooriya, Tharindu Cyril and Ororbia, Alexander and Homan, Christopher
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
95--99
We propose a fully Bayesian framework for learning ground truth labels from noisy annotators. Our framework ensures scalability by factoring a generative, Bayesian soft clustering model over label distributions into the classic David and Skene joint annotator-data model. Earlier research along these lines has neither fully incorporated label distributions nor explored clustering by annotators only or data only. Our framework incorporates all of these properties within a graphical model designed to provide better ground truth estimates of annotator responses as input to any black box supervised learning algorithm. We conduct supervised learning experiments with variations of our models and compare them to the performance of several baseline models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,533
inproceedings
timponi-torrent-etal-2022-lutma
Lutma: A Frame-Making Tool for Collaborative {F}rame{N}et Development
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.13/
Timponi Torrent, Tiago and Lorenzi, Arthur and Matos, Ely Edison and Belcavello, Frederico and Viridiano, Marcelo and Andrade Gamonal, Maucha
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
100--107
This paper presents Lutma, a collaborative, semi-constrained, tutorial-based tool for contributing frames and lexical units to the Global FrameNet initiative. The tool parameterizes the process of frame creation, avoiding consistency violations and promoting the integration of frames contributed by the community with existing frames. Lutma is structured in a wizard-like fashion so as to provide users with text and video tutorials relevant for each step in the frame creation process. We argue that this tool will allow for a sensible expansion of FrameNet coverage in terms of both languages and cultural perspectives encoded by them, positioning frames as a viable alternative for representing perspective in language models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,534
inproceedings
viridiano-etal-2022-case
The Case for Perspective in Multimodal Datasets
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.14/
Viridiano, Marcelo and Timponi Torrent, Tiago and Czulo, Oliver and Lorenzi, Arthur and Matos, Ely and Belcavello, Frederico
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
108--116
This paper argues in favor of the adoption of annotation practices for multimodal datasets that recognize and represent the inherently perspectivized nature of multimodal communication. To support our claim, we present a set of annotation experiments in which FrameNet annotation is applied to the Multi30k and the Flickr 30k Entities datasets. We assess the cosine similarity between the semantic representations derived from the annotation of both pictures and captions for frames. Our findings indicate that: (i) frame semantic similarity between captions of the same picture produced in different languages is sensitive to whether the caption is a translation of another caption or not, and (ii) picture annotation for semantic frames is sensitive to whether the image is annotated in presence of a caption or not.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,535
inproceedings
mastromattei-etal-2022-change
Change My Mind: How Syntax-based Hate Speech Recognizer Can Uncover Hidden Motivations Based on Different Viewpoints
Abercrombie, Gavin and Basile, Valerio and Tonelli, Sara and Rieser, Verena and Uma, Alexandra
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.nlperspectives-1.15/
Mastromattei, Michele and Basile, Valerio and Zanzotto, Fabio Massimo
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
117--125
Hate speech recognizers may mislabel sentences by not considering the different opinions that society has on selected topics. In this paper, we show how explainable machine learning models based on syntax can help to understand the motivations that induce a sentence to be offensive to a certain demographic group. By comparing and contrasting the results, we show the key points that make a sentence labeled as hate speech and how this varies across different ethnic groups.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,536
inproceedings
zanwar-etal-2022-improving
Improving the Generalizability of Text-Based Emotion Detection by Leveraging Transformers with Psycholinguistic Features
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.1/
Zanwar, Sourabh and Wiechmann, Daniel and Qiao, Yu and Kerz, Elma
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
1--13
In recent years, there has been increased interest in building predictive models that harness natural language processing and machine learning techniques to detect emotions from various text sources, including social media posts, micro-blogs or news articles. Yet, deployment of such models in real-world sentiment and emotion applications faces challenges, in particular poor out-of-domain generalizability. This is likely due to domain-specific differences (e.g., topics, communicative goals, and annotation schemes) that make transfer between different models of emotion recognition difficult. In this work we propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features. First, we evaluate the performance of our models within-domain on two benchmark datasets GoEmotion (Demszky et al., 2020) and ISEAR (Scherer and Wallbott, 1994). Second, we conduct transfer learning experiments on six datasets from the Unified Emotion Dataset (Bostan and Klinger, 2018) to evaluate their out-of-domain robustness. We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach. Moreover, we observe that these models perform competitively on in-domain data.
null
null
10.18653/v1/2022.nlpcss-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,538
inproceedings
gnehm-etal-2022-fine
Fine-Grained Extraction and Classification of Skill Requirements in {G}erman-Speaking Job Ads
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.2/
Gnehm, Ann-Sophie and B{\"u}hlmann, Eva and Buchs, Helen and Clematide, Simon
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
14--24
Monitoring the development of labor market skill requirements is an information need that is more and more approached by applying text mining methods to job advertisement data. We present an approach for fine-grained extraction and classification of skill requirements from German-speaking job advertisements. We adapt pre-trained transformer-based language models to the domain and task of computing meaningful representations of sentences or spans. By using context from job advertisements and the large ESCO domain ontology we improve our similarity-based unsupervised multi-label classification results. Our best model achieves a mean average precision of 0.969 on the skill class level.
null
null
10.18653/v1/2022.nlpcss-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,539
inproceedings
wegge-etal-2022-experiencer
Experiencer-Specific Emotion and Appraisal Prediction
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.3/
Wegge, Maximilian and Troiano, Enrica and Oberlaender, Laura Ana Maria and Klinger, Roman
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
25--32
Emotion classification in NLP assigns emotions to texts, such as sentences or paragraphs. With texts like {\textquotedblleft}I felt guilty when he cried{\textquotedblright}, focusing on the sentence level disregards the standpoint of each participant in the situation: the writer ({\textquotedblleft}I{\textquotedblright}) and the other entity ({\textquotedblleft}he{\textquotedblright}) could in fact have different affective states. The emotions of different entities have been considered only partially in emotion semantic role labeling, a task that relates semantic roles to emotion cue words. Proposing a related task, we narrow the focus on the experiencers of events, and assign an emotion (if any holds) to each of them. To this end, we represent each emotion both categorically and with appraisal variables, as a psychological access to explaining why a person develops a particular emotion. On an event description corpus, our experiencer-aware models of emotions and appraisals outperform the experiencer-agnostic baselines, showing that disregarding event participants is an oversimplification for the emotion detection task.
null
null
10.18653/v1/2022.nlpcss-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,540
inproceedings
xu-etal-2022-understanding-narratives
Understanding Narratives from Demographic Survey Data: a Comparative Study with Multiple Neural Topic Models
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.4/
Xu, Xiao and Stulp, Gert and Van Den Bosch, Antal and Gauthier, Anne
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
33--38
Fertility intentions as verbalized in surveys are a poor predictor of actual fertility outcomes, the number of children people have. This can partly be explained by the uncertainty people have in their intentions. Such uncertainties are hard to capture through traditional survey questions, although open-ended questions can be used to get insight into people's subjective narratives of the future that determine their intentions. Analyzing such answers to open-ended questions can be done through Natural Language Processing techniques. Traditional topic models (e.g., LSA and LDA), however, often fail to do so since they rely on co-occurrences, which are often rare in short survey responses. The aim of this study was to apply and evaluate topic models on demographic survey data. In this study, we applied neural topic models (e.g. BERTopic, CombinedTM) based on language models to responses from Dutch women on their fertility plans, and compared the topics and their coherence scores from each model to expert judgments. Our results show that neural models produce topics more in line with human interpretation compared to LDA. However, the coherence score could only partly reflect on this, depending on the corpus used for calculation. This research is important because, first, it helps us develop more informed strategies on model selection and evaluation for topic modeling on survey data; and second, it shows that the field of demography has much to gain from adopting NLP methods.
null
null
10.18653/v1/2022.nlpcss-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,541
inproceedings
stahl-etal-2022-prefer
To Prefer or to Choose? Generating Agency and Power Counterfactuals Jointly for Gender Bias Mitigation
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.6/
Stahl, Maja and Splieth{\"o}ver, Maximilian and Wachsmuth, Henning
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
39--51
Gender bias may emerge from an unequal representation of agency and power, for example, by portraying women frequently as passive and powerless ({\textquotedblleft}She accepted her future{\textquotedblright}) and men as proactive and powerful ({\textquotedblleft}He chose his future{\textquotedblright}). When language models learn from respective texts, they may reproduce or even amplify the bias. An effective way to mitigate bias is to generate counterfactual sentences with opposite agency and power to the training. Recent work targeted agency-specific verbs from a lexicon to this end. We argue that this is insufficient, due to the interaction of agency and power and their dependence on context. In this paper, we thus develop a new rewriting model that identifies verbs with the desired agency and power in the context of the given sentence. The verbs' probability is then boosted to encourage the model to rewrite both connotations jointly. According to automatic metrics, our model effectively controls for power while being competitive in agency to the state of the art. In our main evaluation, human annotators favored its counterfactuals in terms of both connotations, also deeming its meaning preservation better.
null
null
10.18653/v1/2022.nlpcss-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,542
inproceedings
weigand-etal-2022-conspiracy
Conspiracy Narratives in the Protest Movement Against {COVID}-19 Restrictions in {G}ermany. A Long-term Content Analysis of Telegram Chat Groups.
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.8/
Weigand, Manuel and Weber, Maximilian and Gruber, Johannes
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
52--58
From the start of the COVID-19 pandemic in Germany, different groups have been protesting measures implemented by different government bodies in Germany to control the pandemic. It was widely claimed that many of the offline and online protests were driven by conspiracy narratives disseminated through groups and channels on the messenger app Telegram. We investigate this claim by measuring the frequency of conspiracy narratives in messages from open Telegram chat groups of the Querdenken movement, set up to organize protests against COVID-19 restrictions in Germany. We furthermore explore the content of these messages using topic modelling. To this end, we collected 822k text messages sent between April 2020 and May 2022 in 34 chat groups. By fine-tuning a Distilbert model, using self-annotated data, we find that 8.24{\%} of the sent messages contain signs of conspiracy narratives. This number is not static, however, as the share of conspiracy messages grew while the overall number of messages shows a downward trend since its peak at the end of 2020. We further find that a mix of known conspiracy narratives makes up the topics in our topic model. Our findings suggest that the Querdenken movement is getting smaller over time, but its remaining members focus even more on conspiracy narratives.
null
null
10.18653/v1/2022.nlpcss-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,543
inproceedings
noble-bernardy-2022-conditional
Conditional Language Models for Community-Level Linguistic Variation
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.9/
Noble, Bill and Bernardy, Jean-Philippe
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
59--78
Community-level linguistic variation is a core concept in sociolinguistics. In this paper, we use conditioned neural language models to learn vector representations for 510 online communities. We use these representations to measure linguistic variation between communities and investigate the degree to which linguistic variation corresponds with social connections between communities. We find that our sociolinguistic embeddings are highly correlated with a social network-based representation that does not use any linguistic input.
null
null
10.18653/v1/2022.nlpcss-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,544
inproceedings
welch-etal-2022-understanding
Understanding Interpersonal Conflict Types and their Impact on Perception Classification
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.10/
Welch, Charles and Plepi, Joan and Neuendorf, B{\'e}la and Flek, Lucie
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
79--88
Studies on interpersonal conflict have a long history and contain many suggestions for conflict typology. We use this as the basis of a novel annotation scheme and release a new dataset of situations and conflict aspect annotations. We then build a classifier to predict whether someone will perceive the actions of one individual as right or wrong in a given situation. Our analyses include conflict aspects, but also generated clusters, which are human validated, and show differences in conflict content based on the relationship of participants to the author. Our findings have important implications for understanding conflict and social norms.
null
null
10.18653/v1/2022.nlpcss-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,545
inproceedings
gupta-etal-2022-examining
Examining Political Rhetoric with Epistemic Stance Detection
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.11/
Gupta, Ankita and Blodgett, Su Lin and Gross, Justin H and O{'}Connor, Brendan
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
89--104
Participants in political discourse employ rhetorical strategies{---}such as hedging, attributions, or denials{---}to display varying degrees of belief commitments to claims proposed by themselves or others. Traditionally, political scientists have studied these epistemic phenomena through labor-intensive manual content analysis. We propose to help automate such work through epistemic stance prediction, drawn from research in computational semantics, to distinguish at the clausal level what is asserted, denied, or only ambivalently suggested by the author or other mentioned entities (belief holders). We first develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling. Then we demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books, where we characterize trends in cited belief holders{---}respected allies and opposed bogeymen{---}across U.S. political ideologies.
null
null
10.18653/v1/2022.nlpcss-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,546
inproceedings
singh-rios-2022-linguistic
Linguistic Elements of Engaging Customer Service Discourse on Social Media
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.12/
Singh, Sonam and Rios, Anthony
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
105--117
Customers are rapidly turning to social media for customer support. While brand agents on these platforms are motivated and well-intentioned to help and engage with customers, their efforts are often ignored if their initial response to the customer does not match a specific tone, style, or topic the customer is aiming to receive. The length of a conversation can reflect the effort and quality of the initial response made by a brand toward collaborating and helping consumers, even when the overall sentiment of the conversation might not be very positive. Thus, through this study, we aim to bridge this critical gap in the existing literature by analyzing language's content and stylistic aspects such as expressed empathy, psycho-linguistic features, dialogue tags, and metrics for quantifying personalization of the utterances that can influence the engagement of an interaction. This paper demonstrates that we can predict engagement using initial customer and brand posts.
null
null
10.18653/v1/2022.nlpcss-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,547
inproceedings
touileb-nozza-2022-measuring
Measuring Harmful Representations in {S}candinavian Language Models
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.13/
Touileb, Samia and Nozza, Debora
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
118--125
Scandinavian countries are perceived as role-models when it comes to gender equality. With the advent of pre-trained language models and their widespread usage, we investigate to what extent gender-based harmful and toxic content exists in selected Scandinavian language models. We examine nine models, covering Danish, Swedish, and Norwegian, by manually creating template-based sentences and probing the models for completion. We evaluate the completions using two methods for measuring harmful and toxic completions and provide a thorough analysis of the results. We show that Scandinavian pre-trained language models contain harmful and gender-based stereotypes with similar values across all languages. This finding goes against the general expectations related to gender equality in Scandinavian countries and shows the possible problematic outcomes of using such models in real-world settings. Warning: Some of the examples provided in this paper can be upsetting and offensive.
null
null
10.18653/v1/2022.nlpcss-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,548
inproceedings
breitwieser-2022-contextualizing
Can Contextualizing User Embeddings Improve Sarcasm and Hate Speech Detection?
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.14/
Breitwieser, Kim
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
126--139
While implicit embeddings so far have been mostly concerned with creating an overall representation of the user, we evaluate a different approach. By only considering content directed at a specific topic, we create sub-user embeddings, and measure their usefulness on the tasks of sarcasm and hate speech detection. In doing so, we show that task-related topics can have a noticeable effect on model performance, especially when dealing with intended expressions like sarcasm, but less so for hate speech, which is usually labelled as such on the receiving end.
null
null
10.18653/v1/2022.nlpcss-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,549
inproceedings
yang-etal-2022-professional
Professional Presentation and Projected Power: A Case Study of Implicit Gender Information in {E}nglish {CV}s
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.15/
Yang, Jinrui and Njoto, Sheilla and Cheong, Marc and Ruppanner, Leah and Frermann, Lea
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
140--150
Gender discrimination in hiring is a pertinent and persistent bias in society, and a common motivating example for exploring bias in NLP. However, the manifestation of gendered language in application materials has received limited attention. This paper investigates the framing of skills and background in CVs of self-identified men and women. We introduce a data set of 1.8K authentic, English-language, CVs from the US, covering 16 occupations, allowing us to partially control for the confound of occupation-specific gender base rates. We find that (1) women use more verbs evoking impressions of low power; and (2) classifiers capture gender signal even after data balancing and removal of pronouns and named entities, and this holds for both transformer-based and linear classifiers.
null
null
10.18653/v1/2022.nlpcss-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,550
inproceedings
varadarajan-etal-2022-detecting
Detecting Dissonant Stance in Social Media: The Role of Topic Exposure
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.16/
Varadarajan, Vasudha and Soni, Nikita and Wang, Weixi and Luhmann, Christian and Schwartz, H. Andrew and Inoue, Naoya
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
151--156
We address dissonant stance detection, classifying conflicting stance between two input statements. Computational models for traditional stance detection have typically been trained to indicate pro/con for a given target topic (e.g. gun control) and thus do not generalize well to new topics. In this paper, we systematically evaluate the generalizability of dissonant stance detection to situations where examples of the topic have not been seen at all or have only been seen a few times. We show that dissonant stance detection models trained on only 8 topics, none of which are the target topic, can perform as well as those trained only on a target topic. Further, adding non-target topics boosts performance further up to approximately 32 topics where accuracies start to plateau. Taken together, our experiments suggest dissonant stance detection models can generalize to new unanticipated topics, an important attribute for the social scientific study of social media where new topics emerge daily.
null
null
10.18653/v1/2022.nlpcss-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,551
inproceedings
wu-2022-analysis
An Analysis of Acknowledgments in {NLP} Conference Proceedings
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.17/
Wu, Winston
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
157--163
While acknowledgments are often overlooked and sometimes entirely missing from publications, this short section of a paper can provide insights on the state of a field. We characterize and perform a textual analysis of acknowledgments in NLP conference proceedings across the last 17 years, revealing broader trends in funding and research directions in NLP as well as interesting phenomena including career incentives and the influence of defaults.
null
null
10.18653/v1/2022.nlpcss-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,552
inproceedings
kantharaju-schmer-galunder-2022-extracting
Extracting Associations of Intersectional Identities with Discourse about Institution from {N}igeria
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.18/
Kantharaju, Pavan and Schmer-Galunder, Sonja
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
164--169
Word embedding models have been used in prior work to extract associations of intersectional identities within discourse concerning institutions of power, but that work restricted its focus to narratives of the nineteenth-century U.S. south. This paper leverages this prior work and introduces an initial study on the association of intersected identities with discourse concerning social institutions within social media from Nigeria. Specifically, we use word embedding models trained on tweets from Nigeria and extract associations of intersected social identities with institutions (e.g., domestic, culture, etc.) to provide insight into the alignment of identities with institutions. Our initial experiments indicate that identities at the intersection of gender and economic status groups have significant associations with discourse about the economic, political, and domestic institutions.
null
null
10.18653/v1/2022.nlpcss-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,553
inproceedings
shen-etal-2022-olala
{OLALA}: Object-Level Active Learning for Efficient Document Layout Annotation
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.19/
Shen, Zejiang and Li, Weining and Zhao, Jian and Yu, Yaoliang and Dell, Melissa
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
170--182
Layout detection is an essential step for accurately extracting structured contents from historical documents. The intricate and varied layouts present in these document images make it expensive to label the numerous layout regions that can be densely arranged on each page. Current active learning methods typically rank and label samples at the image level, where the annotation budget is not optimally spent due to the overexposure of common objects per image. Inspired by recent progress in semi-supervised learning and self-training, we propose OLALA, an Object-Level Active Learning framework for efficient document layout Annotation. OLALA aims to optimize the annotation process by selectively annotating only the most ambiguous regions within an image, while using automatically generated labels for the rest. Central to OLALA is a perturbation-based scoring function that determines which objects require manual annotation. Extensive experiments show that OLALA can significantly boost model performance and improve annotation efficiency, facilitating the extraction of masses of structured text for downstream NLP applications.
null
null
10.18653/v1/2022.nlpcss-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,554
inproceedings
roy-etal-2022-towards
Towards Few-Shot Identification of Morality Frames using In-Context Learning
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.20/
Roy, Shamik and Nakshatri, Nishanth Sridhar and Goldwasser, Dan
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
183--196
Data scarcity is a common problem in NLP, especially when the annotation pertains to nuanced socio-linguistic concepts that require specialized knowledge. As a result, few-shot identification of these concepts is desirable. Few-shot in-context learning using pre-trained Large Language Models (LLMs) has been recently applied successfully in many NLP tasks. In this paper, we study few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et al., 2021), using LLMs. Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text, identifying the relevant moral foundation (Haidt and Graham, 2007) and, at a finer level of granularity, the moral sentiment expressed towards the entities mentioned in the text. Previous studies relied on human annotation to identify morality frames in text, which is expensive. In this paper, we propose prompting-based approaches using pre-trained Large Language Models for identification of morality frames, relying only on few-shot exemplars. We compare our models' performance with few-shot RoBERTa and find promising results.
null
null
10.18653/v1/2022.nlpcss-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,555
inproceedings
painter-etal-2022-utilizing
Utilizing Weak Supervision to Create {S}3{D}: A Sarcasm Annotated Dataset
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.22/
Painter, Jordan and Treharne, Helen and Kanojia, Diptesh
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
197--206
Sarcasm is prevalent in all corners of social media, posing many challenges within Natural Language Processing (NLP), particularly for sentiment analysis. Sarcasm detection remains a largely unsolved problem in many NLP tasks due to its contradictory and typically derogatory nature as a figurative language construct. With recent strides in NLP, many pre-trained language models exist that have been trained on data from specific social media platforms, i.e., Twitter. In this paper, we evaluate the efficacy of multiple sarcasm detection datasets using machine and deep learning models. We create two new datasets - a manually annotated gold standard Sarcasm Annotated Dataset (SAD) and a Silver-Standard Sarcasm-annotated Dataset (S3D). Using a combination of existing sarcasm datasets with SAD, we train a sarcasm detection model over a social-media domain pre-trained language model, BERTweet, which yields an F1-score of 78.29{\%}. Using an Ensemble model with an underlying majority technique, we further label S3D to produce a weakly supervised dataset containing over 100,000 tweets. We publicly release all the code, our manually annotated and weakly supervised datasets, and fine-tuned models for further research.
null
null
10.18653/v1/2022.nlpcss-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,556
inproceedings
ungless-etal-2022-robust
A Robust Bias Mitigation Procedure Based on the Stereotype Content Model
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.23/
Ungless, Eddie and Rafferty, Amy and Nag, Hrichika and Ross, Bj{\"o}rn
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
207--217
The Stereotype Content Model (SCM) states that we tend to perceive minority groups as cold, incompetent or both. In this paper we adapt existing work to demonstrate that the Stereotype Content Model holds for contextualised word embeddings, then use these results to evaluate a fine-tuning process designed to drive a language model away from stereotyped portrayals of minority groups. We find the SCM terms are better able to capture bias than demographic-agnostic terms related to pleasantness. Further, we were able to reduce the presence of stereotypes in the model through a simple fine-tuning procedure that required minimal human and computer resources, without harming downstream performance. We present this work as a prototype of a debiasing procedure that aims to remove the need for a priori knowledge of the specifics of bias in the model.
null
null
10.18653/v1/2022.nlpcss-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,557
inproceedings
miotto-etal-2022-gpt
Who is {GPT}-3? An exploration of personality, values and demographics
Bamman, David and Hovy, Dirk and Jurgens, David and Keith, Katherine and O'Connor, Brendan and Volkova, Svitlana
nov
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.nlpcss-1.24/
Miotto, Maril{\`u} and Rossberg, Nicola and Kleinberg, Bennett
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)
218--227
Language models such as GPT-3 have caused a furore in the research community. Some studies found that GPT-3 has some creative abilities and makes mistakes that are on par with human behaviour. This paper answers a related question: Who is GPT-3? We administered two validated measurement tools to GPT-3 to assess its personality, the values it holds and its self-reported demographics. Our results show that GPT-3 scores similarly to human samples in terms of personality and - when provided with a model response memory - in terms of the values it holds. We provide the first evidence of psychological assessment of the GPT-3 model and thereby add to our understanding of this language model. We close with suggestions for future research that moves social science closer to language models and vice versa.
null
null
10.18653/v1/2022.nlpcss-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,558
inproceedings
chua-etal-2022-unified
A unified framework for cross-domain and cross-task learning of mental health conditions
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.1/
Chua, Huikai and Caines, Andrew and Yannakoudakis, Helen
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
1--14
The detection of mental health conditions based on an individual's use of language has received considerable attention in the NLP community. However, most work has focused on single-task and single-domain models, limiting the semantic space that they are able to cover and risking significant cross-domain loss. In this paper, we present two approaches towards a unified framework for cross-domain and cross-task learning for the detection of depression, post-traumatic stress disorder and suicide risk across different platforms that further utilizes inductive biases across tasks. Firstly, we develop a lightweight model using a general set of features that sets a new state of the art on several tasks while matching the performance of more complex task- and domain-specific systems on others. We also propose a multi-task approach and further extend our framework to explicitly capture the affective characteristics of someone's language, further consolidating transfer of inductive biases and of shared linguistic characteristics. Finally, we present a novel dynamically adaptive loss weighting approach that allows for more stable learning across imbalanced datasets and better neural generalization performance. Our results demonstrate the effectiveness of our unified framework for mental ill-health detection across a number of diverse English datasets.
null
null
10.18653/v1/2022.nlp4pi-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,560
inproceedings
rosenblatt-etal-2022-critical
Critical Perspectives: A Benchmark Revealing Pitfalls in {P}erspective{API}
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.2/
Rosenblatt, Lucas and Piedras, Lorena and Wilkins, Julia
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
15--24
Detecting {\textquotedblleft}toxic{\textquotedblright} language in internet content is a pressing social and technical challenge. In this work, we focus on Perspective API from Jigsaw, a state-of-the-art tool that promises to score the {\textquotedblleft}toxicity{\textquotedblright} of text, with a recent model update that claims impressive results (Lees et al., 2022). We seek to challenge certain normative claims about toxic language by proposing a new benchmark, Selected Adversarial SemanticS, or SASS. We evaluate Perspective on SASS, and compare to low-effort alternatives, like zero-shot and few-shot GPT-3 prompt models, in binary classification settings. We find that Perspective exhibits troubling shortcomings across a number of our toxicity categories. SASS provides a new tool for evaluating performance on previously undetected toxic language that avoids common normative pitfalls. Our work leads us to emphasize the importance of questioning assumptions made by tools already in deployment for toxicity detection in order to anticipate and prevent disparate harms.
null
null
10.18653/v1/2022.nlp4pi-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,561
inproceedings
addlesee-2022-securely
Securely Capturing People's Interactions with Voice Assistants at Home: A Bespoke Tool for Ethical Data Collection
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.3/
Addlesee, Angus
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
25--30
Speech production is nuanced and unique to every individual, but today's Spoken Dialogue Systems (SDSs) are trained to use general speech patterns to successfully improve performance on various evaluation metrics. However, these patterns do not apply to certain user groups - often the very people that can benefit the most from SDSs. For example, people with dementia produce more disfluent speech than the general population. In order to evaluate systems with specific user groups in mind, and to guide the design of such systems to deliver maximum benefit to these users, data must be collected securely. In this short paper we present CVR-SI, a bespoke tool for ethical data collection. Designed for the healthcare domain, we argue that it should also be used in more general settings. We detail how off-the-shelf solutions fail to ensure that sensitive data remains secure and private. We then describe the ethical design and security features of our device, with a full guide on how to build both the hardware and software components of CVR-SI. Our design ensures inclusivity to all researchers in this field, particularly those who are not hardware experts. This guarantees everyone can collect appropriate data for human evaluation ethically, securely, and in a timely manner.
null
null
10.18653/v1/2022.nlp4pi-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,562
inproceedings
lin-2022-leveraging
Leveraging World Knowledge in Implicit Hate Speech Detection
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.4/
Lin, Jessica
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
31--39
While much attention has been paid to identifying explicit hate speech, implicit hateful expressions that are disguised in coded or indirect language are pervasive and remain a major challenge for existing hate speech detection systems. This paper presents the first attempt to apply Entity Linking (EL) techniques to both explicit and implicit hate speech detection, where we show that such real world knowledge about entity mentions in a text does help models better detect hate speech, and the benefit of adding it into the model is more pronounced when explicit entity triggers (e.g., rally, KKK) are present. We also discuss cases where real world knowledge does not add value to hate speech detection, which provides more insights into understanding and modeling the subtleties of hate speech.
null
null
10.18653/v1/2022.nlp4pi-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,563
inproceedings
hansen-hershcovich-2022-dataset
A Dataset of Sustainable Diet Arguments on {T}witter
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.5/
Hansen, Marcus and Hershcovich, Daniel
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
40--58
Sustainable development requires a significant change in our dietary habits. Argument mining can help achieve this goal by both affecting and helping understand people's behavior. We design an annotation scheme for argument mining from online discourse around sustainable diets, including novel evidence types specific to this domain. Using Twitter as a source, we crowdsource a dataset of 597 tweets annotated in relation to 5 topics. We benchmark a variety of NLP models on this dataset, demonstrating strong performance in some sub-tasks, while highlighting remaining challenges.
null
null
10.18653/v1/2022.nlp4pi-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,564
inproceedings
kelbessa-etal-2022-impacts
Impacts of Low Socio-economic Status on Educational Outcomes: A Narrative Based Analysis
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.6/
Kelbessa, Motti and Jamil, Ilyas and Jahan, Labiba
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
59--69
Socioeconomic status (SES) is a metric used to compare a person's social standing based on their income, level of education, and occupation. Students from low SES backgrounds are those whose parents have low income and have limited access to the resources and opportunities they need to aid their success. Researchers have studied many issues and solutions for students with low SES, and there is a lot of research going on in many fields, especially in the social sciences. Computer science, however, has not yet as a field turned its considerable potential to addressing these inequalities. Utilizing Natural Language Processing (NLP) methods and technology, our work aims to address these disparities and ways to bridge the gap. We built a simple string matching algorithm including Latent Dirichlet Allocation (LDA) topic model and Open Information Extraction (open IE) to generate relational triples that are connected to the context of the students' challenges, and the strategies they follow to overcome them. We manually collected 16 narratives about the experiences of low SES students in higher education from a publicly accessible internet forum (Reddit) and tested our model on them. We demonstrate that our strategy is effective (from 37.50{\%} to 80{\%}) in gathering contextual data about low SES students, in particular, about their difficulties while in a higher educational institution and how they improve their situation. A detailed error analysis suggests that increase of data, improvement of the LDA model, and quality of triples can help get better results from our model. For the advantage of other researchers, we make our code available.
null
null
10.18653/v1/2022.nlp4pi-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,565
inproceedings
seeberger-riedhammer-2022-enhancing
Enhancing Crisis-Related Tweet Classification with Entity-Masked Language Modeling and Multi-Task Learning
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.9/
Seeberger, Philipp and Riedhammer, Korbinian
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
70--78
Social media has become an important information source for crisis management and provides quick access to ongoing developments and critical information. However, classification models suffer from event-related biases and highly imbalanced label distributions which still poses a challenging task. To address these challenges, we propose a combination of entity-masked language modeling and hierarchical multi-label classification as a multi-task learning problem. We evaluate our method on tweets from the TREC-IS dataset and show an absolute performance gain w.r.t. F1-score of up to 10{\%} for actionable information types. Moreover, we found that entity-masking reduces the effect of overfitting to in-domain events and enables improvements in cross-event generalization.
null
null
10.18653/v1/2022.nlp4pi-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,566
inproceedings
bohacek-2022-misinformation
Misinformation Detection in the Wild: News Source Classification as a Proxy for Non-article Texts
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.10/
Bohacek, Matyas
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
79--88
Creating classifiers of disinformation is time-consuming, expensive, and requires vast effort from experts spanning different fields. Even when these efforts succeed, their roll-out to publicly available applications stagnates. While these models struggle to find their consumer-accessible use, disinformation behavior online evolves at a pressing speed. The hoaxes get shared in various abbreviations on social networks, often in user-restricted areas, making external monitoring and intervention virtually impossible. To re-purpose existing NLP methods for the new paradigm of sharing misinformation, we propose leveraging information about given texts' originating news sources to proxy the respective text's trustworthiness. We first present a methodology for determining the sources' overall credibility. We demonstrate our pipeline construction in a specific language and introduce CNSC: a novel dataset for Czech articles' news source and source credibility classification. We constitute initial benchmarks on multiple architectures. Lastly, we create in-the-wild wrapper applications of the trained models: a chatbot, a browser extension, and a standalone web application.
null
null
10.18653/v1/2022.nlp4pi-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,567
inproceedings
pauli-etal-2022-modelling
Modelling Persuasion through Misuse of Rhetorical Appeals
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.11/
Pauli, Amalie and Derczynski, Leon and Assent, Ira
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
89--100
It is important to understand how people use words to persuade each other. This helps understand debate, and detect persuasive narratives in regard to e.g. misinformation. While computational modelling of some aspects of persuasion has received some attention, a way to unify and describe the overall phenomenon of when persuasion becomes undesired and problematic, is missing. In this paper, we attempt to address this by proposing a taxonomy of computational persuasion. Drawing upon existing research and resources, this paper shows how to re-frame and re-organise current work into a coherent framework targeting the misuse of rhetorical appeals. As a study to validate these re-framings, we then train and evaluate models of persuasion adapted to our taxonomy. Our results show an application of our taxonomy, and we are able to detect misuse of rhetorical appeals, finding that these are more often used in misinformative contexts than in true ones.
null
null
10.18653/v1/2022.nlp4pi-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,568
inproceedings
xiao-etal-2022-breaking
Breaking through Inequality of Information Acquisition among Social Classes: A Modest Effort on Measuring {\textquotedblleft}Fun{\textquotedblright}
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.12/
Xiao, Chenghao and Sun, Baicheng and Wang, Jindi and Liu, Mingyue and Feng, Jiayi
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
101--112
With the identification of the inequality encoded in information acquisition among social classes, we propose to leverage a powerful concept that has never been studied as a linguistic construct, {\textquotedblleft}fun{\textquotedblright}, to deconstruct the inequality. Inspired by theories in sociology, we draw connection between social class and information cocoon, through the lens of fun, and hypothesize the measurement of {\textquotedblleft}how fun one's dominating social cocoon is{\textquotedblright} to be an indicator of the social class of an individual. Following this, we propose an NLP framework to combat the issue by measuring how fun one's information cocoon is, and empower individuals to emancipate from their trapped cocoons. We position our work to be a domain-agnostic framework that can be deployed in a lot of downstream cases, and is one that aims to deconstruct, as opposed to reinforcing, the traditional social structure of beneficiaries.
null
null
10.18653/v1/2022.nlp4pi-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,569
inproceedings
chiruzzo-etal-2022-using
Using {NLP} to Support {E}nglish Teaching in Rural Schools
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.13/
Chiruzzo, Luis and Musto, Laura and Gongora, Santiago and Carpenter, Brian and Filevich, Juan and Rosa, Aiala
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
113--121
We present a web application for creating games and exercises for teaching English as a foreign language with the help of NLP tools. The application contains different kinds of games such as crosswords, word searches, a memory game, and a multiplayer game based on the classic battleship pen and paper game. This application was built with the aim of supporting teachers in rural schools that are teaching English lessons, so they can easily create interactive and engaging activities for their students. We present the context and history of the project, the current state of the web application, and some ideas on how we will expand it in the future.
null
null
10.18653/v1/2022.nlp4pi-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,570
inproceedings
verrap-etal-2022-answering
{\textquotedblleft}Am {I} Answering My Job Interview Questions Right?{\textquotedblright}: A {NLP} Approach to Predict Degree of Explanation in Job Interview Responses
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.14/
Verrap, Raghu and Nirjhar, Ehsanul and Nenkova, Ani and Chaspari, Theodora
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
122--129
Providing the right amount of explanation in an employment interview can help the interviewee effectively communicate their skills and experience to the interviewer and convince them that she/he is the right candidate for the job. This paper examines natural language processing (NLP) approaches, including word-based tokenization, lexicon-based representations, and pre-trained embeddings with deep learning models, for detecting the degree of explanation in a job interview response. These are exemplified in a study of 24 military veterans who are the focal group of this study, since they can experience unique challenges in job interviews due to the unique verbal communication style that is prevalent in the military. Military veterans conducted mock interviews with industry recruiters and data from these interviews were transcribed and analyzed. Results indicate the feasibility of automated NLP methods for detecting the degree of explanation in an interview response. Features based on tokenizer analysis are the most effective in detecting under-explained responses (i.e., 0.29 F1-score), while lexicon-based methods depict the higher performance in detecting over-explanation (i.e., 0.51 F1-score). Findings from this work lay the foundation for the design of intelligent assistive technologies that can provide personalized learning pathways to job candidates, especially those belonging to sensitive or under-represented populations, and helping them succeed in employment job interviews, ultimately contributing to an inclusive workforce.
null
null
10.18653/v1/2022.nlp4pi-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,571
inproceedings
perez-almendros-schockaert-2022-identifying
Identifying Condescending Language: A Tale of Two Distinct Phenomena?
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.15/
Perez Almendros, Carla and Schockaert, Steven
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
130--141
Patronizing and condescending language is characterized, among others, by its subtle nature. It thus seems reasonable to assume that detecting condescending language in text would be harder than detecting more explicitly harmful language, such as hate speech. However, the results of a SemEval-2022 Task devoted to this topic paint a different picture, with the top-performing systems achieving remarkably strong results. In this paper, we analyse the surprising effectiveness of standard text classification methods in more detail. In particular, we highlight the presence of two rather different types of condescending language in the dataset from the SemEval task. Some inputs are condescending because of the way they talk about a particular subject, i.e. condescending language in this case is a linguistic phenomenon, which can, in principle, be learned from training examples. However, other inputs are condescending because of the nature of what is said, rather than the way in which it is expressed, e.g. by emphasizing stereotypes about a given community. In such cases, our ability to detect condescending language, with current methods, largely depends on the presence of similar examples in the training data.
null
null
10.18653/v1/2022.nlp4pi-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,572
inproceedings
mahajan-2022-bela
{BELA}: Bot for {E}nglish Language Acquisition
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.17/
Mahajan, Muskan
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
142--148
In this paper, we introduce a conversational agent (chatbot) for Hindi-speaking youth called BELA{---}Bot for English Language Acquisition. Developed for young underprivileged students at an Indian non-profit, the agent supports both Hindi and Hinglish (code-switched Hindi and English, written primarily with English orthography) utterances. BELA has two interaction modes: a question-answering mode for classic English language learning tasks like word meanings, translations, reading passage comprehensions, etc., and an open-domain dialogue system mode to allow users to practice their language skills. We present a high-level overview of the design of BELA, including the implementation details and the preliminary results of our early prototype. We also report the challenges in creating an English-language learning chatbot for a largely Hindi-speaking population.
null
null
10.18653/v1/2022.nlp4pi-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,573
inproceedings
oh-etal-2022-applicability
Applicability of Pretrained Language Models: Automatic Screening for Children's Language Development Level
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.18/
Oh, Byoung-doo and Lee, Yoon-koung and Kim, Yu-seop
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
149--156
The various potential of children can be limited by language delay or language impairments. However, there are many instances where parents are unaware of the child's condition and do not obtain appropriate treatment as a result. Additionally, experts collecting children's utterance to establish norms of language tests and evaluating children's language development level takes a significant amount of time and work. To address these issues, dependable automated screening tools are required. In this paper, we used pretrained LM to assist experts in quickly and objectively screening the language development level of children. Here, evaluating the language development level is to ensure that the child has the appropriate language abilities for his or her age, which is the same as the child's age. To do this, we analyzed the utterances of children according to age. Based on these findings, we use the standard deviations of the pretrained LM's probability as a score for children to screen their language development level. The experiment results showed very strong correlations between our proposed method and the Korean language test REVT (REVT-R, REVT-E), with Pearson correlation coefficient of 0.9888 and 0.9892, respectively.
null
null
10.18653/v1/2022.nlp4pi-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,574
inproceedings
sandwidi-pallitharammal-mukkolakal-2022-transformers
Transformers-Based Approach for a Sustainability Term-Based Sentiment Analysis ({STBSA})
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.19/
Sandwidi, Blaise and Pallitharammal Mukkolakal, Suneer
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
157--170
Traditional sentiment analysis is a sentence level or document-level task. However, a sentence or paragraph may contain multiple target terms with different sentiments, making sentiment prediction more challenging. Although pre-trained language models like BERT have been successful, incorporating dynamic semantic changes into aspect-based sentiment models remains difficult, especially for domain-specific sentiment analysis. To this end, in this paper, we propose a Term-Based Sentiment Analysis (TBSA), a novel method designed to learn Environmental, Social, and Governance (ESG) contexts based on a sustainability taxonomy for ESG aspect-oriented sentiment analysis. Notably, we introduce a technique enhancing the ESG term's attention, inspired by the success of attention-based neural networks in machine translation (Bahdanau et al., 2015) and Computer Vision (Bello et al., 2019). It enables the proposed model to focus on a small region of the sentences at each step and to reweigh the crucial terms for a better understanding of the ESG aspect-aware sentiment. Beyond the novelty in the model design, we propose a new dataset of 125,000+ ESG analyst annotated data points for sustainability term based sentiment classification, which derives from historical sustainability corpus data and expertise acquired by development finance institutions. Our extensive experiments combining the new method and the new dataset demonstrate the effectiveness of the Sustainability TBSA model with an accuracy of 91.30{\%} (90{\%} F1-score). Both internal and external business applications of our model show an evident potential for a significant positive impact toward furthering sustainable development goals (SDGs).
null
null
10.18653/v1/2022.nlp4pi-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,575
inproceedings
kumar-nandakumar-2022-hate
Hate-{CLIP}per: Multimodal Hateful Meme Classification based on Cross-modal Interaction of {CLIP} Features
Biester, Laura and Demszky, Dorottya and Jin, Zhijing and Sachan, Mrinmaya and Tetreault, Joel and Wilson, Steven and Xiao, Lu and Zhao, Jieyu
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4pi-1.20/
Kumar, Gokul Karthik and Nandakumar, Karthik
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
171--183
Hateful memes are a growing menace on social media. While the image and its corresponding text in a meme are related, they do not necessarily convey the same meaning when viewed individually. Hence, detecting hateful memes requires careful consideration of both visual and textual information. Multimodal pre-training can be beneficial for this task because it effectively captures the relationship between the image and the text by representing them in a similar feature space. Furthermore, it is essential to model the interactions between the image and text features through intermediate fusion. Most existing methods either employ multimodal pre-training or intermediate fusion, but not both. In this work, we propose the Hate-CLIPper architecture, which explicitly models the cross-modal interactions between the image and text representations obtained using Contrastive Language-Image Pre-training (CLIP) encoders via a feature interaction matrix (FIM). A simple classifier based on the FIM representation is able to achieve state-of-the-art performance on the Hateful Memes Challenge (HMC) dataset with an AUROC of 85.8, which even surpasses the human performance of 82.65. Experiments on other meme datasets such as Propaganda Memes and TamilMemes also demonstrate the generalizability of the proposed approach. Finally, we analyze the interpretability of the FIM representation and show that cross-modal interactions can indeed facilitate the learning of meaningful concepts. The code for this work is available at \url{https://github.com/gokulkarthik/hateclipper}
null
null
10.18653/v1/2022.nlp4pi-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,576
inproceedings
kawasaki-2022-stylometric
A Stylometric Analysis of Amad{\'i}s de Gaula and Sergas de Esplandi{\'a}n
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.1/
Kawasaki, Yoshifumi
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
1--7
Amad{\'i}s de Gaula (AG) and its sequel Sergas de Esplandi{\'a}n (SE) are masterpieces of medieval Spanish chivalric romances. Much debate has been devoted to the role played by their purported author Garci Rodr{\'i}guez de Montalvo. According to the prologue of AG, which consists of four books, the author allegedly revised the first three books that were in circulation at that time and added the fourth book and SE. However, the extent to which Montalvo edited the materials at hand to compose the extant works has yet to be explored extensively. To address this question, we applied stylometric techniques for the first time. Specifically, we investigated the stylistic differences (if any) between the first three books of AG and his own extensions. Literary style is represented as usage of parts-of-speech n-grams. We performed principal component analysis and k-means to demonstrate that Montalvo's retouching on the first book was minimal, while revising the second and third books in such a way that they came to moderately resemble his authentic creation, that is, the fourth book and SE. Our findings empirically corroborate suppositions formulated from philological viewpoints.
null
null
10.18653/v1/2022.nlp4dh-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,578
inproceedings
ohman-rossi-2022-computational
Computational Exploration of the Origin of Mood in Literary Texts
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.2/
{\"O}hman, Emily and Rossi, Riikka H.
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
8--14
This paper is a methodological exploration of the origin of mood in early modern and modern Finnish literary texts using computational methods. We discuss the pre-processing steps as well as the various natural language processing tools used to try to pinpoint where mood can be best detected in text. We also share several tools and resources developed during this process. Our early attempts suggest that overall mood can be computationally detected in the first three paragraphs of a book.
null
null
10.18653/v1/2022.nlp4dh-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,579
inproceedings
mohapatra-mohapatra-2022-sentiment
Sentiment is all you need to win {US} Presidential elections
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.3/
Mohapatra, Sovesh and Mohapatra, Somesh
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
15--20
Election speeches play an integral role in communicating the vision and mission of the candidates. From lofty promises to mud-slinging, the electoral candidate accounts for all. However, there remains an open question about what exactly wins over the voters. In this work, we used state-of-the-art natural language processing methods to study the speeches and sentiments of the Republican candidates and Democratic candidates fighting for the 2020 US Presidential election. Comparing the racial dichotomy of the United States, we analyze what led to the victory and defeat of the different candidates. We believe this work will inform the election campaigning strategy and provide a basis for communicating to diverse crowds.
null
null
10.18653/v1/2022.nlp4dh-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,580
inproceedings
gonzalez-2022-interactive
Interactive Analysis and Visualisation of Annotated Collocations in {S}panish ({AVA}n{CES})
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.4/
Gonzalez, Simon
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
21--30
Phraseology studies have been enhanced by Corpus Linguistics, which has become an interdisciplinary field where current technologies play an important role in its development. Computational tools have been implemented in the last decades with positive results on the identification of phrases in different languages. One specific technology that has impacted these studies is social media. As researchers, we have turned our attention to collecting data from these platforms, which comes with great advantages and its own challenges. One of the challenges is the way we design and build corpora relevant to the questions emerging in this type of language expression. This has been approached from different angles, but one that has given invaluable outputs is the building of linguistic corpora with the use of online web applications. In this paper, we take a multidimensional approach to the collection, design, and deployment of a phraseology corpus for Latin American Spanish from Twitter data, extracting features using NLP techniques, and presenting it in an interactive online web application. We expect to contribute to the methodologies used for Corpus Linguistics in the current technological age. Finally, we make this tool publicly available to be used by any researcher interested in the data itself and also on the technological tools developed here.
null
null
10.18653/v1/2022.nlp4dh-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,581
inproceedings
bizzoni-etal-2022-fractality
Fractality of sentiment arcs for literary quality assessment: The case of Nobel laureates
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.5/
Bizzoni, Yuri and Nielbo, Kristoffer Laigaard and Thomsen, Mads Rosendahl
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
31--41
In the few works that have used NLP to study literary quality, sentiment and emotion analysis have often been considered valuable sources of information. At the same time, the idea that the nature and polarity of the sentiments expressed by a novel might have something to do with its perceived quality seems limited at best. In this paper, we argue that the fractality of narratives, specifically the long-term memory of their sentiment arcs, rather than their simple shape or average valence, might play an important role in the perception of literary quality by a human audience. In particular, we argue that such measure can help distinguish Nobel-winning writers from control groups in a recent corpus of English language novels. To test this hypothesis, we present the results from two studies: (i) a probability distribution test, where we compute the probability of seeing a title from a Nobel laureate at different levels of arc fractality; (ii) a classification test, where we use several machine learning algorithms to measure the predictive power of both sentiment arcs and their fractality measure. Our findings seem to indicate that despite the competitive and complex nature of the task, the populations of Nobel and non-Nobel laureates seem to behave differently and can to some extent be told apart by a classifier.
null
null
10.18653/v1/2022.nlp4dh-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,582
inproceedings
tannor-etal-2022-style
Style Classification of Rabbinic Literature for Detection of Lost Midrash Tanhuma Material
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.6/
Tannor, Solomon and Dershowitz, Nachum and Lavee, Moshe
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
42--46
Midrash collections are complex rabbinic works that consist of text in multiple languages, that evolved through long processes of unstable oral and written transmission. Determining the origin of a given passage in such a compilation is not always straightforward and is often a matter disputed by scholars, yet it is essential for scholars' understanding of the passage and its relationship to other texts in the rabbinic corpus. To help solve this problem, we propose a system for classification of rabbinic literature based on its style, leveraging recently released pretrained Transformer models for Hebrew. Additionally, we demonstrate how our method can be applied to uncover lost material from the Midrash Tanhuma.
null
null
10.18653/v1/2022.nlp4dh-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,583
inproceedings
indig-etal-2022-use
Use the Metadata, Luke! {--} An Experimental Joint Metadata Search and N-gram Trend Viewer for Personal Web Archives
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.7/
Indig, Bal{\'a}zs and S{\'a}rk{\"o}zi-Lindner, Zs{\'o}fia and Nagy, Mih{\'a}ly
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
47--52
Many digital humanists (philologists, historians, sociologists, librarians, the audience for web archives) design their research around metadata (publication date ranges, sources, authors, etc.). However, current major web archives are limited to technical metadata while lacking high quality, descriptive metadata allowing for faceted queries. As researchers often lack the technical skill necessary to enrich existing web archives with descriptive metadata, they increasingly turn to creating personal web archives that contain such metadata, tailored to their research requirements. Software that enable creating such archives without advanced technical skills have gained popularity, however, tools for examination and querying are currently the missing link. We showcase a solution designed to fill this gap.
null
null
10.18653/v1/2022.nlp4dh-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,584
inproceedings
gupta-2022-malm
{MALM}: Mixing Augmented Language Modeling for Zero-Shot Machine Translation
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.8/
Gupta, Kshitij
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
53--58
Large pre-trained language models have brought remarkable progress in NLP. Pre-training and fine-tuning have given state-of-the-art performance across tasks in text processing. Data augmentation techniques have also helped build state-of-the-art models on low or zero resource tasks. Many works in the past have attempted to learn a single massively multilingual machine translation model for zero-shot translation. Although those translation models produce correct translations, the main challenge is that those models produce the wrong languages for zero-shot translation. This work and its results indicate that prompt conditioned large models do not suffer from off-target language errors i.e. errors arising due to translation to wrong languages. We empirically demonstrate the effectiveness of self-supervised pre-training and data augmentation for zero-shot multi-lingual machine translation.
null
null
10.18653/v1/2022.nlp4dh-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,585
inproceedings
babaei-giglou-etal-2022-parssimpleqa
{P}ars{S}imple{QA}: The {P}ersian Simple Question Answering Dataset and System over Knowledge Graph
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.9/
Babaei Giglou, Hamed and Beyranvand, Niloufar and Moradi, Reza and Salehoof, Amir Mohammad and Bibak, Saeed
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
59--68
The simple question answering over the knowledge graph concerns answering single-relation questions by querying the facts in the knowledge graph. This task has drawn significant attention in recent years. However, there is a demand for a simple question dataset in the Persian language to study open-domain simple question answering. In this paper, we present the first Persian single-relation question answering dataset and a model that uses a knowledge graph as a source of knowledge to answer questions. We create the ParsSimpleQA dataset semi-automatically in two steps. First, we build single-relation question templates. Next, we automatically create simple questions and answers using templates, entities, and relations from Farsbase. To present the reliability of the presented dataset, we proposed a simple question-answering system that receives questions and uses deep learning and information retrieval techniques for answering questions. The experimental results presented in this paper show that the ParsSimpleQA dataset is very promising for the Persian simple question-answering task.
null
null
10.18653/v1/2022.nlp4dh-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,586
inproceedings
lin-peng-2022-enhancing
Enhancing Digital History {--} Event discovery via Topic Modeling and Change Detection
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.10/
Lin, King Ip and Peng, Sabrina
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
69--78
Digital history is the application of computer science techniques to historical data in order to uncover insights into events occurring during specific time periods from the past. This relatively new interdisciplinary field can help identify and record latent information about political, cultural, and economic trends that are not otherwise apparent from traditional historical analysis. This paper presents a method that uses topic modeling and breakpoint detection to observe how extracted topics come in and out of prominence over various time periods. We apply our techniques on British parliamentary speech data from the 19th century. Findings show that some of the events produced are cohesive in topic content (religion, transportation, economics, etc.) and time period (events are focused in the same year or month). Topic content identified should be further analyzed for specific events and undergo external validation to determine the quality and value of the findings to historians specializing in 19th century Britain.
null
null
10.18653/v1/2022.nlp4dh-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,587
inproceedings
zheng-etal-2022-parallel
A Parallel Corpus and Dictionary for {A}mis-{M}andarin Translation
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.11/
Zheng, Francis and Marrese-Taylor, Edison and Matsuo, Yutaka
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
79--84
Amis is an endangered language indigenous to Taiwan with limited data available for computational processing. We thus present an Amis-Mandarin dataset containing a parallel corpus of 5,751 Amis and Mandarin sentences and a dictionary of 7,800 Amis words and phrases with their definitions in Mandarin. Using our dataset, we also established a baseline for machine translation between Amis and Mandarin in both directions. Our dataset can be found at \url{https://github.com/francisdzheng/amis-mandarin}.
null
null
10.18653/v1/2022.nlp4dh-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,588
inproceedings
pedrazzini-mcgillivray-2022-machines
Machines in the media: semantic change in the lexicon of mechanization in 19th-century {B}ritish newspapers
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.12/
Pedrazzini, Nilo and McGillivray, Barbara
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
85--95
The industrialization process associated with the so-called Industrial Revolution in 19th-century Great Britain was a time of profound changes, including in the English lexicon. An important yet understudied phenomenon is the semantic shift in the lexicon of mechanisation. In this paper we present the first large-scale analysis of terms related to mechanization over the course of the 19th-century in English. We draw on a corpus of historical British newspapers comprising 4.6 billion tokens and train historical word embedding models. We test existing semantic change detection techniques and analyse the results in light of previous historical linguistic scholarship.
null
null
10.18653/v1/2022.nlp4dh-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,589
inproceedings
janicki-2022-optimizing
Optimizing the weighted sequence alignment algorithm for large-scale text similarity computation
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.13/
Janicki, Maciej
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
96--100
We present an optimized implementation of the weighted sequence alignment algorithm (a.k.a. weighted edit distance) in a scenario where the items to align are numeric vectors and the substitution weights are determined by their cosine similarity. The optimization relies on using vector and matrix operations provided by numeric computation libraries (including GPU acceleration) instead of loops. The resulting algorithm provides an efficient way of aligning large sets of texts represented as sequences of continuous-space numeric vectors (embeddings). The optimization made it possible to compute alignment-based similarity for all pairs of texts in a large corpus of Finnic oral folk poetry for the purpose of studying intertextuality in the oral tradition.
null
null
10.18653/v1/2022.nlp4dh-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,590
inproceedings
van-boven-bloem-2022-domain
Domain-specific Evaluation of Word Embeddings for Philosophical Text using Direct Intrinsic Evaluation
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.14/
van Boven, Goya and Bloem, Jelke
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
101--107
We perform a direct intrinsic evaluation of word embeddings trained on the works of a single philosopher. Six models are compared to human judgements elicited using two tasks: a synonym detection task and a coherence task. We apply a method that elicits judgements based on explicit knowledge from experts, as the linguistic intuition of non-expert participants might differ from that of the philosopher. We find that an in-domain SVD model has the best 1-nearest neighbours for target terms, while transfer learning-based Nonce2Vec performs better for low frequency target terms.
null
null
10.18653/v1/2022.nlp4dh-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,591
inproceedings
arcan-etal-2022-towards
Towards Bootstrapping a Chatbot on Industrial Heritage through Term and Relation Extraction
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.15/
Arcan, Mihael and O{'}Halloran, Rory and Robin, C{\'e}cile and Buitelaar, Paul
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
108--122
We describe initial work in developing a methodology for the automatic generation of a conversational agent or {\textquoteleft}chatbot' through term and relation extraction from a relevant corpus of language data. We develop our approach in the domain of industrial heritage in the 18th and 19th centuries, and more specifically on the industrial history of canals and mills in Ireland. We collected a corpus of relevant newspaper reports and Wikipedia articles, which we deemed representative of a layman`s understanding of this topic. We used the Saffron toolkit to extract relevant terms and relations between the terms from the corpus and leveraged the extracted knowledge to query the British Library Digital Collection and the Project Gutenberg library. We leveraged the extracted terms and relations in identifying possible answers for a constructed set of questions based on the extracted terms, by matching them with sentences in the British Library Digital Collection and the Project Gutenberg library. In a final step, we then took this data set of question-answer pairs to train a chatbot. We evaluate our approach by manually assessing the appropriateness of the generated answers for a random sample, each of which is judged by four annotators.
null
null
10.18653/v1/2022.nlp4dh-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,592
inproceedings
manjavacas-arevalo-fonteyn-2022-non
Non-Parametric Word Sense Disambiguation for Historical Languages
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.16/
Manjavacas Arevalo, Enrique and Fonteyn, Lauren
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
123--134
Recent approaches to Word Sense Disambiguation (WSD) have profited from the enhanced contextualized word representations coming from contemporary Large Language Models (LLMs). This advancement is accompanied by a renewed interest in WSD applications in Humanities research, where the lack of suitable, specific WSD-annotated resources is a hurdle in developing ad-hoc WSD systems. Because they can exploit sentential context, LLMs are particularly suited for disambiguation tasks. Still, the application of LLMs is often limited to linear classifiers trained on top of the LLM architecture. In this paper, we follow recent developments in non-parametric learning and show how LLMs can be efficiently fine-tuned to achieve strong few-shot performance on WSD for historical languages (English and Dutch, date range: 1450-1950). We test our hypothesis using (i) a large, general evaluation set taken from large lexical databases, and (ii) a small real-world scenario involving an ad-hoc WSD task. Moreover, this paper marks the release of GysBERT, a LLM for historical Dutch.
null
null
10.18653/v1/2022.nlp4dh-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,593
inproceedings
liu-etal-2022-introducing
Introducing a Large Corpus of Tokenized Classical {C}hinese Poems of Tang and Song Dynasties
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.17/
Liu, Chao-Lin and Zheng, Ti-Yong and Chen, Kuan-Chun and Chung, Meng-Han
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
135--144
Classical Chinese poems of Tang and Song dynasties are an important part for the studies of Chinese literature. To thoroughly understand the poems, properly segmenting the verses is an important step for human readers and software agents. Yet, due to the availability of data and the costs of annotation, there are still no known large and useful sources that offer classical Chinese poems with annotated word boundaries. In this project, annotators with Chinese literature background labeled 32399 poems. We analyzed the annotated patterns and conducted inter-rater agreement studies about the annotations. The distributions of the annotated patterns for poem lines are very close to some well-known professional heuristics, i.e., that the 2-2-1, 2-1-2, 2-2-1-2, and 2-2-2-1 patterns are very frequent. The annotators agreed well at the line level, but agreed on the segmentations of a whole poem only 43{\%} of the time. We applied a traditional machine-learning approach to segment the poems, and achieved promising results at the line level as well. Using the annotated data as the ground truth, these methods could segment only about 18{\%} of the poems completely right under favorable conditions. Switching to deep-learning methods helped us achieved better than 30{\%}.
null
null
10.18653/v1/2022.nlp4dh-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,594
inproceedings
russo-2022-creative
Creative Text-to-Image Generation: Suggestions for a Benchmark
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.18/
Russo, Irene
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
145--154
Language models for text-to-image generation can output good quality images when referential aspects of pictures are evaluated. The generation of creative images is not under scrutiny at the moment, but it poses interesting challenges: should we expect more creative images using more creative prompts? What is the relationship between prompts and images in the global process of human evaluation? In this paper, we want to highlight several criteria that should be taken into account for building a creative text-to-image generation benchmark, collecting insights from multiple disciplines (e.g., linguistics, cognitive psychology, philosophy, psychology of art).
null
null
10.18653/v1/2022.nlp4dh-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,595
inproceedings
piper-erlin-2022-predictability
The predictability of literary translation
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.19/
Piper, Andrew and Erlin, Matt
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
155--160
Research has shown that the practice of translation exhibits predictable linguistic cues that make translated texts detectable from original-language texts (a phenomenon known as {\textquotedblleft}translationese{\textquotedblright}). In this paper, we test the extent to which literary translations are subject to the same effects and whether they also exhibit meaningful differences at the level of content. Research into the function of translations within national literary markets using smaller case studies has suggested that translations play a cultural role that is distinct from that of original-language literature, i.e. their differences reside not only at the level of translationese but at the level of content. Using a dataset consisting of original-language fiction in English and translations into English from 120 languages (N=21,302), we find that one of the principal functions of literary translation is to convey predictable geographic identities to local readers that nevertheless extend well beyond the foreignness of persons and places.
null
null
10.18653/v1/2022.nlp4dh-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,596
inproceedings
alnajjar-hamalainen-2022-emotion
Emotion Conditioned Creative Dialog Generation
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.20/
Alnajjar, Khalid and H{\"a}m{\"a}l{\"a}inen, Mika
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
161--166
We present a DialGPT based model for generating creative dialog responses that are conditioned on one of the following emotions: anger, disgust, fear, happiness, pain, sadness and surprise. Our model is capable of producing a contextually apt response given an input sentence and a desired emotion label. Our model is capable of expressing the desired emotion with an accuracy of 0.6. The best performing emotions are neutral, fear and disgust. When measuring the strength of the expressed emotion, we find that anger, fear and disgust are expressed most strongly by the model.
null
null
10.18653/v1/2022.nlp4dh-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,597
inproceedings
ge-2022-integration
Integration of Named Entity Recognition and Sentence Segmentation on {A}ncient {C}hinese based on Siku-{BERT}
H{\"a}m{\"a}l{\"a}inen, Mika and Alnajjar, Khalid and Partanen, Niko and Rueter, Jack
nov
2022
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4dh-1.21/
Ge, Sijia
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
167--173
Sentence segmentation and named entity recognition are two significant tasks in ancient Chinese processing since punctuation and named entity information are important for further research on ancient classics. These two are sequence labeling tasks in essence, so we can tag the labels of these two tasks for each token simultaneously. Our work is to evaluate whether such a unified way would be better than tagging the label of each task separately with a BERT-based model. The paper adopts a BERT-based model that was pre-trained on ancient Chinese text to conduct experiments on Zuozhuan text. The results show there is no difference between these two tagging approaches without concerning the type of entities and punctuation. The ablation experiments show that the punctuation token in the text is useful for NER tasks, and finer tagging sets such as differentiating the tokens that locate at the end of an entity from those that are in the middle of an entity could offer a useful feature for NER while negatively impacting sentence segmentation with unified tagging.
null
null
10.18653/v1/2022.nlp4dh-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,598
inproceedings
lee-etal-2022-randomized
A Randomized Link Transformer for Diverse Open-Domain Dialogue Generation
Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4convai-1.1/
Lee, Jing Yang and Lee, Kong Aik and Gan, Woon Seng
Proceedings of the 4th Workshop on NLP for Conversational AI
1--11
A major issue in open-domain dialogue generation is the agent`s tendency to generate repetitive and generic responses. The lack of response diversity has been addressed in recent years via the use of latent variable models, such as the Conditional Variational Auto-Encoder (CVAE), which typically involve learning a latent Gaussian distribution over potential response intents. However, due to latent variable collapse, training latent variable dialogue models is notoriously complex, requiring substantial modification to the standard training process and loss function. Other approaches proposed to improve response diversity also largely entail a significant increase in training complexity. Hence, this paper proposes a Randomized Link (RL) Transformer as an alternative to the latent variable models. The RL Transformer does not require any additional enhancements to the training process or loss function. Empirical results show that, when it comes to response diversity, the RL Transformer achieved performance comparable to that of latent variable models.
null
null
10.18653/v1/2022.nlp4convai-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,601
inproceedings
zhang-etal-2022-pre-trained
Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection
Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4convai-1.2/
Zhang, Jianguo and Hashimoto, Kazuma and Wan, Yao and Liu, Zhiwei and Liu, Ye and Xiong, Caiming and Yu, Philip
Proceedings of the 4th Workshop on NLP for Conversational AI
12--20
Pre-trained Transformer-based models were reported to be robust in intent classification. In this work, we first point out the importance of in-domain out-of-scope detection in few-shot intent recognition tasks and then illustrate the vulnerability of pre-trained Transformer-based models against samples that are in-domain but out-of-scope (ID-OOS). We construct two new datasets, and empirically show that pre-trained models do not perform well on either ID-OOS examples or general out-of-scope examples, especially on fine-grained few-shot intent detection tasks.
null
null
10.18653/v1/2022.nlp4convai-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,602
inproceedings
liao-etal-2022-conversational
Conversational {AI} for Positive-sum Retailing under Falsehood Control
Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4convai-1.3/
Liao, Yin-Hsiang and Dong, Ruo-Ping and Chang, Huan-Cheng and Ma, Wilson
Proceedings of the 4th Workshop on NLP for Conversational AI
21--33
Retailing combines complicated communication skills and strategies to reach an agreement between buyer and seller with identical or different goals. In each transaction, a good seller finds an optimal solution by considering his/her own profits while simultaneously considering whether the buyer`s needs have been met. In this paper, we manage the retailing problem by mixing cooperation and competition. We present a rich dataset of buyer-seller bargaining in a simulated marketplace in which each agent values goods and utility separately. Various attributes (preference, quality, and profit) are initially hidden from one agent with respect to its role; during the conversation, both sides may reveal, fake, or retain the information uncovered to come to a final decision through natural language. Using this dataset, we leverage transfer learning techniques on a pretrained, end-to-end model and enhance its decision-making ability toward the best choice in terms of utility by means of multi-agent reinforcement learning. An automatic evaluation shows that our approach results in more optimal transactions than humans do. We also show that our framework controls the falsehoods generated by seller agents.
null
null
10.18653/v1/2022.nlp4convai-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,603
inproceedings
albalak-etal-2022-rex
{D}-{REX}: Dialogue Relation Extraction with Explanations
Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4convai-1.4/
Albalak, Alon and Embar, Varun and Tuan, Yi-Lin and Getoor, Lise and Wang, William Yang
Proceedings of the 4th Workshop on NLP for Conversational AI
34--46
Existing research studies on cross-sentence relation extraction in long-form multi-party conversations aim to improve relation extraction without considering the explainability of such methods. This work addresses that gap by focusing on extracting explanations that indicate that a relation exists while using only partially labeled explanations. We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that optimizes for explanation quality and relation extraction simultaneously. We frame relation extraction as a re-ranking task and include relation- and entity-specific explanations as an intermediate step of the inference process. We find that human annotators are 4.2 times more likely to prefer D-REX`s explanations over a joint relation extraction and explanation model. Finally, our evaluations show that D-REX is simple yet effective and improves relation extraction performance of strong baseline models by 1.2-4.7{\%}.
null
null
10.18653/v1/2022.nlp4convai-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,604
inproceedings
sahu-etal-2022-data
Data Augmentation for Intent Classification with Off-the-shelf Large Language Models
Liu, Bing and Papangelis, Alexandros and Ultes, Stefan and Rastogi, Abhinav and Chen, Yun-Nung and Spithourakis, Georgios and Nouri, Elnaz and Shi, Weiyan
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.nlp4convai-1.5/
Sahu, Gaurav and Rodriguez, Pau and Laradji, Issam and Atighehchian, Parmida and Vazquez, David and Bahdanau, Dzmitry
Proceedings of the 4th Workshop on NLP for Conversational AI
47--57
Data augmentation is a widely employed technique to alleviate the problem of data scarcity. In this work, we propose a prompting-based approach to generate labelled training data for intent classification with off-the-shelf language models (LMs) such as GPT-3. An advantage of this method is that no task-specific LM fine-tuning for data generation is required; hence the method requires no hyperparameter tuning and is applicable even when the available training data is very scarce. We evaluate the proposed method in a few-shot setting on four diverse intent classification tasks. We find that GPT-generated data significantly boosts the performance of intent classifiers when intents in consideration are sufficiently distinct from each other. In tasks with semantically close intents, we observe that the generated data is less helpful. Our analysis shows that this is because GPT often generates utterances that belong to a closely-related intent instead of the desired one. We present preliminary evidence that a prompting-based GPT classifier could be helpful in filtering the generated data to enhance its quality.
null
null
10.18653/v1/2022.nlp4convai-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,605