Dataset schema: `entry_type` (4 classes), `citation_key`, `title`, `editor`, `month`, `year` (1963--2022), `address`, `publisher`, `url`, `author`, `booktitle`, `pages`, `abstract`, `journal`, `volume`, `doi`, `language`, `isbn`, `number`, `note`, plus an integer row index (`__index_level_0__`) and a set of sparsely populated or empty auxiliary columns (`n`, `wer`, `uas`, `recall`, `f1`, `r`, `p`, `sd`, etc.) left over from field parsing. Each row below is rendered as the BibTeX entry it encodes; null auxiliary fields are omitted.
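Each row of this dump maps directly onto a BibTeX entry. A minimal stdlib-only sketch of that mapping (the function name and the skipped-key set are my own assumptions, not part of the dataset):

```python
def row_to_bibtex(row):
    """Serialize one dataset row (a dict of BibTeX fields) into a BibTeX entry.

    Assumes the row carries at least 'entry_type' and 'citation_key';
    null fields and the dataset's auxiliary index column are skipped.
    """
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field, value in row.items():
        if field in skip or value is None:
            continue
        lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)
```

Fields with all-null columns (e.g. `uas`, `recall`) simply never appear in the output, which is why the rendered entries below carry only the populated bibliographic fields.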
@inproceedings{hoefels-etal-2022-coroseof,
    title = "{C}o{R}o{S}e{O}f - An Annotated Corpus of {R}omanian Sexist and Offensive Tweets",
    author = "Hoefels, Diana Constantina and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and M{\u{a}}droane, Irina Diana",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.243/",
    pages = "2269--2281",
    abstract = "This paper introduces CoRoSeOf, a large corpus of Romanian social media manually annotated for sexist and offensive language. We describe the annotation process of the corpus, provide initial analyses, and baseline classification results for sexism detection on this data set. The resulting corpus contains 39 245 tweets, annotated by multiple annotators (with an agreement rate of Fleiss' {\ensuremath{\kappa}} = 0.45), following the sexist label set of a recent study. The automatic sexism detection yields scores similar to some of the earlier studies (macro averaged F1 score of 83.07{\%} on binary classification task). We release the corpus with a permissive license."
}
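The agreement rate cited in the abstract above is Fleiss' kappa. A minimal sketch of the statistic for reference (a generic implementation under the standard definition, not the paper's code; it assumes a complete item-by-category count matrix with the same number of raters per item):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a ratings matrix.

    counts[i][j] = number of raters who assigned item i to category j;
    every item must be rated by the same number of raters.
    """
    N = len(counts)        # number of items
    n = sum(counts[0])     # raters per item
    # mean per-item observed agreement
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    P_e = sum((t / (N * n)) ** 2 for t in totals)
    return (P_bar - P_e) / (1 - P_e)
```

A value of 0.45, as reported for CoRoSeOf, sits in the range conventionally read as moderate agreement.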
@inproceedings{almanea-poesio-2022-armis,
    title = "{A}r{MIS} - The {A}rabic Misogyny and Sexism Corpus with Annotator Subjective Disagreements",
    author = "Almanea, Dina and Poesio, Massimo",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.244/",
    pages = "2282--2291",
    abstract = "The use of misogynistic and sexist language has increased in recent years in social media, and is increasing in the Arabic world in reaction to reforms attempting to remove restrictions on women lives. However, there are few benchmarks for Arabic misogyny and sexism detection, and in those the annotations are in aggregated form even though misogyny and sexism judgments are found to be highly subjective. In this paper we introduce an Arabic misogyny and sexism dataset (ArMIS) characterized by providing annotations from annotators with different degree of religious beliefs, and provide evidence that such differences do result in disagreements. To the best of our knowledge, this is the first dataset to study in detail the effect of beliefs on misogyny and sexism annotation. We also discuss proof-of-concept experiments showing that a dataset in which disagreements have not been reconciled can be used to train state-of-the-art models for misogyny and sexism detection; and consider different ways in which such models could be evaluated."
}
@inproceedings{yang-etal-2022-annotating,
    title = "Annotating Interruption in Dyadic Human Interaction",
    author = "Yang, Liu and Achard, Catherine and Pelachaud, Catherine",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.245/",
    pages = "2292--2297",
    abstract = "Integrating the existing interruption and turn switch classification methods, we propose a new annotation schema to annotate different types of interruptions through timeliness, switch accomplishment and speech content level. The proposed method is able to distinguish smooth turn exchange, backchannel and interruption (including interruption types) and to annotate dyadic conversation. We annotated the French part of NoXi corpus with the proposed structure and use these annotations to study the probability distribution and duration of each turn switch type."
}
@inproceedings{tan-etal-2022-causal,
    title = "The Causal News Corpus: Annotating Causal Relations in Event Sentences from News",
    author = "Tan, Fiona Anting and H{\"u}rriyeto{\u{g}}lu, Ali and Caselli, Tommaso and Oostdijk, Nelleke and Nomoto, Tadashi and Hettiarachchi, Hansi and Ameer, Iqra and Uca, Onur and Liza, Farhana Ferdousi and Hu, Tiancheng",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.246/",
    pages = "2298--2310",
    abstract = "Despite the importance of understanding causality, corpora addressing causal relations are limited. There is a discrepancy between existing annotation guidelines of event causality and conventional causality corpora that focus more on linguistics. Many guidelines restrict themselves to include only explicit relations or clause-based arguments. Therefore, we propose an annotation schema for event causality that addresses these concerns. We annotated 3,559 event sentences from protest event news with labels on whether it contains causal relations or not. Our corpus is known as the Causal News Corpus (CNC). A neural network built upon a state-of-the-art pre-trained language model performed well with 81.20{\%} F1 score on test set, and 83.46{\%} in 5-folds cross-validation. CNC is transferable across two external corpora: CausalTimeBank (CTB) and Penn Discourse Treebank (PDTB). Leveraging each of these external datasets for training, we achieved up to approximately 64{\%} F1 on the CNC test set without additional fine-tuning. CNC also served as an effective training and pre-training dataset for the two external corpora. Lastly, we demonstrate the difficulty of our task to the layman in a crowd-sourced annotation exercise. Our annotated corpus is publicly available, providing a valuable resource for causal text mining researchers."
}
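Several abstracts in this collection report macro-averaged F1 (e.g. 81.20{\%} for the Causal News Corpus above, 83.07{\%} for CoRoSeOf). A minimal sketch of that metric for reference (a generic implementation of the standard definition, not any paper's evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores, averaged with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Because every class contributes equally regardless of its frequency, macro F1 is the usual choice for the imbalanced label distributions these corpora exhibit.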
@inproceedings{hedstrom-etal-2022-samromur,
    title = "Samr{\'o}mur: Crowd-sourcing large amounts of data",
    author = "Hedstr{\"o}m, Staffan and Mollberg, David Erik and {\TH}{\'o}rhallsd{\'o}ttir, Ragnhei{\dh}ur and Gu{\dh}nason, J{\'o}n",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.247/",
    pages = "2311--2316",
    abstract = "This contribution describes the collection of a large and diverse corpus for speech recognition and similar tools using crowd-sourced donations. We have built a collection platform inspired by Mozilla Common Voice and specialized it to our needs. We discuss the importance of engaging the community and motivating it to contribute, in our case through competitions. Given the incentive and a platform to easily read in large amounts of utterances, we have observed four cases of speakers freely donating over 10 thousand utterances. We have also seen that women are keener to participate in these events throughout all age groups. Manually verifying a large corpus is a monumental task and we attempt to automatically verify parts of the data using tools like Marosijo and the Montreal Forced Aligner. The method proved helpful, especially for detecting invalid utterances and halving the work needed from crowd-sourced verification."
}
@inproceedings{roller-etal-2022-annotated,
    title = "An Annotated Corpus of Textual Explanations for Clinical Decision Support",
    author = "Roller, Roland and Burchardt, Aljoscha and Feldhus, Nils and Seiffe, Laura and Budde, Klemens and Ronicke, Simon and Osmanodja, Bilgin",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.248/",
    pages = "2317--2326",
    abstract = "In recent years, machine learning for clinical decision support has gained more and more attention. In order to introduce such applications into clinical practice, a good performance might be essential, however, the aspect of trust should not be underestimated. For the treating physician using such a system and being (legally) responsible for the decision made, it is particularly important to understand the system's recommendation. To provide insights into a model's decision, various techniques from the field of explainability (XAI) have been proposed whose output is often enough not targeted to the domain experts that want to use the model. To close this gap, in this work, we explore how explanations could possibly look like in future. To this end, this work presents a dataset of textual explanations in context of decision support. Within a reader study, human physicians estimated the likelihood of possible negative patient outcomes in the near future and justified each decision with a few sentences. Using those sentences, we created a novel corpus, annotated with different semantic layers. Moreover, we provide an analysis of how those explanations are constructed, and how they change depending on physician, on the estimated risk and also in comparison to an automatic clinical decision support system with feature importance."
}
@inproceedings{passali-etal-2022-lard,
    title = "{LARD}: Large-scale Artificial Disfluency Generation",
    author = "Passali, Tatiana and Mavropoulos, Thanassis and Tsoumakas, Grigorios and Meditskos, Georgios and Vrochidis, Stefanos",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.249/",
    pages = "2327--2336",
    abstract = "Disfluency detection is a critical task in real-time dialogue systems. However, despite its importance, it remains a relatively unexplored field, mainly due to the lack of appropriate datasets. At the same time, existing datasets suffer from various issues, including class imbalance issues, which can significantly affect the performance of the model on rare classes, as it is demonstrated in this paper. To this end, we propose LARD, a method for generating complex and realistic artificial disfluencies with little effort. The proposed method can handle three of the most common types of disfluencies: repetitions, replacements, and restarts. In addition, we release a new large-scale dataset with disfluencies that can be used on four different tasks: disfluency detection, classification, extraction, and correction. Experimental results on the LARD dataset demonstrate that the data produced by the proposed method can be effectively used for detecting and removing disfluencies, while also addressing limitations of existing datasets."
}
@inproceedings{jiang-etal-2022-crecil,
    title = "The {CRECIL} Corpus: a New Dataset for Extraction of Relations between Characters in {C}hinese Multi-party Dialogues",
    author = "Jiang, Yuru and Xu, Yang and Zhan, Yuhang and He, Weikai and Wang, Yilin and Xi, Zixuan and Wang, Meiyun and Li, Xinyu and Li, Yu and Yu, Yanchao",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.250/",
    pages = "2337--2344",
    abstract = "We describe a new freely available Chinese multi-party dialogue dataset for automatic extraction of dialogue-based character relationships. The data has been extracted from the original TV scripts of a Chinese sitcom called {\textquotedblleft}I Love My Home{\textquotedblright} with complex family-based human daily spoken conversations in Chinese. First, we introduced human annotation scheme for both global Character relationship map and character reference relationship. And then we generated the dialogue-based character relationship triples. The corpus annotates relationships between 140 entities in total. We also carried out a data exploration experiment by deploying a BERT-based model to extract character relationships on the CRECIL corpus and another existing relation extraction corpus (DialogRE (CITATION)). The results demonstrate that extracting character relationships is more challenging in CRECIL than in DialogRE."
}
@inproceedings{abdulrahim-etal-2022-bahrain,
    title = "The {B}ahrain Corpus: A Multi-genre Corpus of Bahraini {A}rabic",
    author = "Abdulrahim, Dana and Inoue, Go and Shamsan, Latifa and Khalifa, Salam and Habash, Nizar",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.251/",
    pages = "2345--2352",
    abstract = "In recent years, the focus on developing natural language processing (NLP) tools for Arabic has shifted from Modern Standard Arabic to various Arabic dialects. Various corpora of various sizes and representing different genres, have been created for a number of Arabic dialects. As far as Gulf Arabic is concerned, Gumar Corpus (Khalifa et al., 2016) is the largest corpus, to date, that includes data representing the dialectal Arabic of the six Gulf Cooperation Council countries (Bahrain, Kuwait, Saudi Arabia, Qatar, United Arab Emirates, and Oman), particularly in the genre of {\textquotedblleft}online forum novels{\textquotedblright}. In this paper, we present the Bahrain Corpus. Our objective is to create a specialized corpus of the Bahraini Arabic dialect, which includes written texts as well as transcripts of audio files, belonging to a different genre (folktales, comedy shows, plays, cooking shows, etc.). The corpus comprises 620K words, carefully curated. We provide automatic morphological annotations of the full corpus using state-of-the-art morphosyntactic disambiguation for Gulf Arabic. We validate the quality of the annotations on a 7.6K word sample. We plan to make the annotated sample as well as the full corpus publicly available to support researchers interested in Arabic NLP."
}
@inproceedings{swanson-tyers-2022-universal,
    title = "A {U}niversal {D}ependencies Treebank of {A}ncient {H}ebrew",
    author = "Swanson, Daniel and Tyers, Francis",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.252/",
    pages = "2353--2361",
    abstract = "In this paper we present the initial construction of a Universal Dependencies treebank with morphological annotations of Ancient Hebrew containing portions of the Hebrew Scriptures (1579 sentences, 27K tokens) for use in comparative study with ancient translations and for analysis of the development of Hebrew syntax. We construct this treebank by applying a rule-based parser (300 rules) to an existing morphologically-annotated corpus with minimal constituency structure and manually verifying the output and present the results of this semi-automated annotation process and some of the annotation decisions made in the process of applying the UD guidelines to a new language."
}
@inproceedings{carvalho-etal-2022-hate,
    title = "Hate Speech Dynamics Against {A}frican descent, {R}oma and {LGBTQI} Communities in {P}ortugal",
    author = "Carvalho, Paula and Cunha, Bernardo and Santos, Raquel and Batista, Fernando and Ribeiro, Ricardo",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.253/",
    pages = "2362--2370",
    abstract = "This paper introduces FIGHT, a dataset containing 63,450 tweets, posted before and after the official declaration of Covid-19 as a pandemic by online users in Portugal. This resource aims at contributing to the analysis of online hate speech targeting the most representative minorities in Portugal, namely the African descent and the Roma communities, and the LGBTQI community, the most commonly reported target of hate speech in social media at the European context. We present the methods for collecting the data, and provide insightful statistics on the distribution of tweets included in FIGHT, considering both the temporal and spatial dimensions. We also analyze the availability over time of tweets targeting the above-mentioned communities, distinguishing public, private and deleted tweets. We believe this study will contribute to better understand the dynamics of online hate speech in Portugal, particularly in adverse contexts, such as a pandemic outbreak, allowing the development of more informed and accurate hate speech resources for Portuguese."
}
@inproceedings{barkarson-etal-2022-evolving,
    title = "Evolving Large Text Corpora: Four Versions of the {I}celandic {G}igaword Corpus",
    author = "Barkarson, Starka{\dh}ur and Steingr{\'i}msson, Stein{\th}{\'o}r and Hafsteinsd{\'o}ttir, Hildur",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.254/",
    pages = "2371--2381",
    abstract = "The Icelandic Gigaword Corpus was first published in 2018. Since then new versions have been published annually, containing new texts from additional sources as well as from previous sources. This paper describes the evolution of the corpus in its first four years. All versions are made available under permissive licenses and with each new version the texts are annotated with the latest and most accurate tools. We show how the corpus has grown almost 50{\%} in size from the first version to the fourth and how it was restructured in order to better accommodate different meta-data for different subcorpora. Furthermore, other services have been set up to facilitate usage of the corpus for different use cases. These include a keyword-in-context concordance tool, an n-gram viewer, a word frequency database and pre-trained word embeddings."
}
@inproceedings{sileo-etal-2022-pragmatics,
    title = "A Pragmatics-Centered Evaluation Framework for Natural Language Understanding",
    author = "Sileo, Damien and Muller, Philippe and Van de Cruys, Tim and Pradel, Camille",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.255/",
    pages = "2382--2394",
    abstract = "New models for natural language understanding have recently made an unparalleled amount of progress, which has led some researchers to suggest that the models induce universal text representations. However, current benchmarks are predominantly targeting semantic phenomena; we make the case that pragmatics needs to take center stage in the evaluation of natural language understanding. We introduce PragmEval, a new benchmark for the evaluation of natural language understanding, that unites 11 pragmatics-focused evaluation datasets for English. PragmEval can be used as supplementary training data in a multi-task learning setup, and is publicly available, alongside the code for gathering and preprocessing the datasets. Using our evaluation suite, we show that natural language inference, a widely used pretraining task, does not result in genuinely universal representations, which presents a new challenge for multi-task learning."
}
@inproceedings{bothe-wermter-2022-conversational,
    title = "Conversational Analysis of Daily Dialog Data using Polite Emotional Dialogue Acts",
    author = "Bothe, Chandrakant and Wermter, Stefan",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.256/",
    pages = "2395--2400",
    abstract = "Many socio-linguistic cues are used in conversational analysis, such as emotion, sentiment, and dialogue acts. One of the fundamental social cues is politeness, which linguistically possesses properties such as social manners useful in conversational analysis. This article presents findings of polite emotional dialogue act associations, where we can correlate the relationships between the socio-linguistic cues. We confirm our hypothesis that the utterances with the emotion classes Anger and Disgust are more likely to be impolite. At the same time, Happiness and Sadness are more likely to be polite. A less expectable phenomenon occurs with dialogue acts Inform and Commissive which contain more polite utterances than Question and Directive. Finally, we conclude on the future work of these findings to extend the learning of social behaviours using politeness."
}
@inproceedings{chiarcos-2022-inducing,
    title = "Inducing Discourse Marker Inventories from Lexical Knowledge Graphs",
    author = "Chiarcos, Christian",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.257/",
    pages = "2401--2412",
    abstract = "Discourse marker inventories are important tools for the development of both discourse parsers and corpora with discourse annotations. In this paper we explore the potential of massively multilingual lexical knowledge graphs to induce multilingual discourse marker lexicons using concept propagation methods as previously developed in the context of translation inference across dictionaries. Given one or multiple source languages with discourse marker inventories that provide discourse relations as senses of potential discourse markers, as well as a large number of bilingual dictionaries that link them {--} directly or indirectly {--} with the target language, we specifically study to what extent discourse marker induction can benefit from the integration of information from different sources, the impact of sense granularity and what limiting factors may need to be considered. Our study uses discourse marker inventories from nine European languages normalized against the discourse relation inventory of the Penn Discourse Treebank (PDTB), as well as three collections of machine-readable dictionaries with different characteristics, so that the interplay of a large number of factors can be studied."
}
@inproceedings{haghighatkhah-etal-2022-story,
    title = "Story Trees: Representing Documents using Topological Persistence",
    author = "Haghighatkhah, Pantea and Fokkens, Antske and Sommerauer, Pia and Speckmann, Bettina and Verbeek, Kevin",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.258/",
    pages = "2413--2429",
    abstract = "Topological Data Analysis (TDA) focuses on the inherent shape of (spatial) data. As such, it may provide useful methods to explore spatial representations of linguistic data (embeddings) which have become central in NLP. In this paper we aim to introduce TDA to researchers in language technology. We use TDA to represent document structure as so-called story trees. Story trees are hierarchical representations created from semantic vector representations of sentences via persistent homology. They can be used to identify and clearly visualize prominent components of a story line. We showcase their potential by using story trees to create extractive summaries for news stories."
}
@inproceedings{zwitter-vitez-etal-2022-extracting,
    title = "Extracting and Analysing Metaphors in Migration Media Discourse: towards a Metaphor Annotation Scheme",
    author = "Zwitter Vitez, Ana and Brglez, Mojca and Robnik {\v{S}}ikonja, Marko and {\v{S}}kvorc, Tadej and Vezovnik, Andreja and Pollak, Senja",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.259/",
    pages = "2430--2439",
    abstract = "The study of metaphors in media discourse is an increasingly researched topic as media are an important shaper of social reality and metaphors are an indicator of how we think about certain issues through references to other things. We present a neural transfer learning method for detecting metaphorical sentences in Slovene and evaluate its performance on a gold standard corpus of metaphors (classification accuracy of 0.725), as well as on a sample of a domain specific corpus of migrations (precision of 0.40 for extracting domain metaphors and 0.74 if evaluated only on a set of migration related sentences). Based on empirical results and findings of our analysis, we propose a novel metaphor annotation scheme containing linguistic level, conceptual level, and stance information. The new scheme can be used for future metaphor annotations of other socially relevant topics."
}
inproceedings
|
flansmose-mikkelsen-etal-2022-ddisco
|
{DD}is{C}o: A Discourse Coherence Dataset for {D}anish
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.260/
|
Flansmose Mikkelsen, Linea and Kinch, Oliver and Jess Pedersen, Anders and Lacroix, Oph{\'e}lie
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2440--2445
|
To date, there has been no resource for studying discourse coherence on real-world Danish texts. Discourse coherence has mostly been approached with the assumption that incoherent texts can be represented by coherent texts in which sentences have been shuffled. However, incoherent real-world texts rarely resemble that. We thus present DDisCo, a dataset including text from the Danish Wikipedia and Reddit annotated for discourse coherence. We choose to annotate real-world texts instead of relying on artificially incoherent text for training and testing models. Then, we evaluate the performance of several methods, including neural networks, on the dataset.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,667 |
inproceedings
|
mim-etal-2022-lpattack
|
{LPA}ttack: A Feasible Annotation Scheme for Capturing Logic Pattern of Attacks in Arguments
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.261/
|
Mim, Farjana Sultana and Inoue, Naoya and Naito, Shoichi and Singh, Keshav and Inui, Kentaro
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2446--2459
|
In argumentative discourse, persuasion is often achieved by refuting or attacking others' arguments. Attacking an argument is not always straightforward and often consists of complex rhetorical moves in which arguers may agree with a logic of an argument while attacking another logic. Furthermore, an arguer may neither deny nor agree with any logics of an argument, instead ignore them and attack the main stance of the argument by providing new logics and presupposing that the new logics have more value or importance than the logics presented in the attacked argument. However, there are no studies in computational argumentation that capture such complex rhetorical moves in attacks or the presuppositions or value judgments in them. To address this gap, we introduce LPAttack, a novel annotation scheme that captures the common modes and complex rhetorical moves in attacks along with the implicit presuppositions and value judgments. Our annotation study shows moderate inter-annotator agreement, indicating that human annotation for the proposed scheme is feasible. We publicly release our annotated corpus and the annotation guidelines.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,668 |
inproceedings
|
tracey-etal-2022-best
|
{B}e{S}t: The Belief and Sentiment Corpus
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.262/
|
Tracey, Jennifer and Rambow, Owen and Cardie, Claire and Dalton, Adam and Dang, Hoa Trang and Diab, Mona and Dorr, Bonnie and Guthrie, Louise and Markowska, Magdalena and Muresan, Smaranda and Prabhakaran, Vinodkumar and Shaikh, Samira and Strzalkowski, Tomek
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2460--2467
|
We present the BeSt corpus, which records cognitive state: who believes what (i.e., factuality), and who has what sentiment towards what. This corpus is inspired by similar source-and-target corpora, specifically MPQA and FactBank. The corpus comprises two genres, newswire and discussion forums, in three languages, Chinese (Mandarin), English, and Spanish. The corpus is distributed through the LDC.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,669 |
inproceedings
|
wang-etal-2022-motif
|
{MOTIF}: Contextualized Images for Complex Words to Improve Human Reading
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.263/
|
Wang, Xintong and Schneider, Florian and Alacam, {\"O}zge and Chaudhury, Prateek and Biemann, Chris
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2468--2477
|
MOTIF (MultimOdal ConTextualized Images For Language Learners) is a multimodal dataset that consists of 1125 comprehension texts retrieved from Wikipedia Simple Corpus. Allowing multimodal processing or enriching the context with multimodal information has proven imperative for many learning tasks, specifically for second language (L2) learning. In this respect, several traditional NLP approaches can assist L2 readers in text comprehension processes, such as simplifying text or giving dictionary descriptions for complex words. As nicely stated in the well-known proverb, sometimes {\textquotedblleft}a picture is worth a thousand words{\textquotedblright} and an image can successfully complement the verbal message by enriching the representation, like in Pictionary books. This multimodal support can also assist on-the-fly text reading experience by providing a multimodal tool that chooses and displays the most relevant images for the difficult words, given the text context. This study mainly focuses on one of the key components to achieving this goal; collecting a multimodal dataset enriched with complex word annotation and validated image match.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,670 |
inproceedings
|
de-sisto-etal-2022-challenges
|
Challenges with Sign Language Datasets for Sign Language Recognition and Translation
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.264/
|
De Sisto, Mirella and Vandeghinste, Vincent and Egea G{\'o}mez, Santiago and De Coster, Mathieu and Shterionov, Dimitar and Saggion, Horacio
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2478--2487
|
Sign Languages (SLs) are the primary means of communication for at least half a million people in Europe alone. However, the development of SL recognition and translation tools is slowed down by a series of obstacles concerning resource scarcity and standardization issues in the available data. The former challenge relates to the volume of data available for machine learning as well as the time required to collect and process new data. The latter obstacle is linked to the variety of the data, i.e., annotation formats are not unified and vary amongst different resources. The available data formats are often not suitable for machine learning, obstructing the provision of automatic tools based on neural models. In the present paper, we give an overview of these challenges by comparing various SL corpora and SL machine learning datasets. Furthermore, we propose a framework to address the lack of standardization at format level, unify the available resources and facilitate SL research for different languages. Our framework takes ELAN files as inputs and returns textual and visual data ready to train SL recognition and translation models. We present a proof of concept, training neural translation models on the data produced by the proposed framework.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,671 |
inproceedings
|
mertz-etal-2022-low
|
A Low-Cost Motion Capture Corpus in {F}rench {S}ign {L}anguage for Interpreting Iconicity and Spatial Referencing Mechanisms
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.265/
|
Mertz, Cl{\'e}mence and Barreaud, Vincent and Le Naour, Thibaut and Lolive, Damien and Gibet, Sylvie
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2488--2497
|
The automatic translation of sign language videos into transcribed texts is rarely approached in its whole, as it implies to finely model the grammatical mechanisms that govern these languages. The presented work is a first step towards the interpretation of French sign language (LSF) by specifically targeting iconicity and spatial referencing. This paper describes the LSF-SHELVES corpus as well as the original technology that was designed and implemented to collect it. Our goal is to use deep learning methods to circumvent the use of models in spatial referencing recognition. In order to obtain training material with sufficient variability, we designed a light-weight (and low-cost) capture protocol that enabled us to collect data from a large panel of LSF signers. This protocol involves the use of a portable device providing a 3D skeleton, and of a software developed specifically for this application to facilitate the post-processing of handshapes. The LSF-SHELVES includes simple and compound iconic and spatial dynamics, organized in 6 complexity levels, representing a total of 60 sequences signed by 15 LSF signers.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,672 |
inproceedings
|
verhagen-etal-2022-clams
|
The {CLAMS} Platform at Work: Processing Audiovisual Data from the {A}merican Archive of Public Broadcasting
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.266/
|
Verhagen, Marc and Lynch, Kelley and Rim, Kyeongmin and Pustejovsky, James
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2498--2506
|
The Computational Linguistics Applications for Multimedia Services (CLAMS) platform provides access to computational content analysis tools for multimedia material. The version we present here is a robust update of an initial prototype implementation from 2019. The platform now sports a variety of image, video, audio and text processing tools that interact via a common multi-modal representation language named MMIF (Multi-Media Interchange Format). We describe the overall architecture, the MMIF format, some of the tools included in the platform, the process to set up and run a workflow, visualizations included in CLAMS, and evaluate aspects of the platform on data from the American Archive of Public Broadcasting, showing how CLAMS can add metadata to mass-digitized multimedia collections, metadata that are typically only available implicitly in now largely unsearchable digitized media in archives and libraries.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,673 |
inproceedings
|
reardon-etal-2022-bu
|
{BU}-{NE}mo: an Affective Dataset of Gun Violence News
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.267/
|
Reardon, Carley and Paik, Sejin and Gao, Ge and Parekh, Meet and Zhao, Yanling and Guo, Lei and Betke, Margrit and Wijaya, Derry Tanti
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2507--2516
|
Given our society's increased exposure to multimedia formats on social media platforms, efforts to understand how digital content impacts people's emotions are burgeoning. As such, we introduce a U.S. gun violence news dataset that contains news headline and image pairings from 840 news articles with 15K high-quality, crowdsourced annotations on emotional responses to the news pairings. We created three experimental conditions for the annotation process: two with a single modality (headline or image only), and one multimodal (headline and image together). In contrast to prior works on affectively-annotated data, our dataset includes annotations on the dominant emotion experienced with the content, the intensity of the selected emotion and an open-ended, written component. By collecting annotations on different modalities of the same news content pairings, we explore the relationship between image and text influence on human emotional response. We offer initial analysis on our dataset, showing the nuanced affective differences that appear due to modality and individual factors such as political leaning and media consumption habits. Our dataset is made publicly available to facilitate future research in affective computing.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,674 |
inproceedings
|
reverdy-etal-2022-roomreader
|
{R}oom{R}eader: A Multimodal Corpus of Online Multiparty Conversational Interactions
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.268/
|
Reverdy, Justine and O{'}Connor Russell, Sam and Duquenne, Louise and Garaialde, Diego and Cowan, Benjamin R. and Harte, Naomi
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2517--2527
|
We present RoomReader, a corpus of multimodal, multiparty conversational interactions in which participants followed a collaborative student-tutor scenario designed to elicit spontaneous speech. The corpus was developed within the wider RoomReader Project to explore multimodal cues of conversational engagement and behavioural aspects of collaborative interaction in online environments. However, the corpus can be used to study a wide range of phenomena in online multimodal interaction. The publicly-shared corpus consists of over 8 hours of video and audio recordings from 118 participants in 30 gender-balanced sessions, in the {\textquotedblleft}in-the-wild{\textquotedblright} online environment of Zoom. The recordings have been edited, synchronised, and fully transcribed. Student participants have been continuously annotated for engagement with a novel continuous scale. We provide questionnaires measuring engagement and group cohesion collected from the annotators, tutors and participants themselves. We also make a range of accompanying data available such as personality tests and behavioural assessments. The dataset and accompanying psychometrics present a rich resource enabling the exploration of a range of downstream tasks across diverse fields including linguistics and artificial intelligence. This could include the automatic detection of student engagement, analysis of group interaction and collaboration in online conversation, and the analysis of conversational behaviours in an online setting.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,675 |
inproceedings
|
sevilla-etal-2022-quevedo
|
Quevedo: Annotation and Processing of Graphical Languages
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.269/
|
Sevilla, Antonio F. G. and D{\'i}az Esteban, Alberto and Lahoz-Bengoechea, Jos{\'e} Mar{\'i}a
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2528--2535
|
In this article, we present Quevedo, a software tool we have developed for the task of automatic processing of graphical languages. These are languages which use images to convey meaning, relying not only on the shape of symbols but also on their spatial arrangement in the page, and relative to each other. When presented in image form, these languages require specialized computational processing which is not the same as usually done either for natural language processing or for artificial vision. Quevedo enables this specialized processing, focusing on a data-based approach. As a command line application and library, it provides features for the collection and management of image datasets, and their machine learning recognition using neural networks and recognizer pipelines. This processing requires careful annotation of the source data, for which Quevedo offers an extensive and visual web-based annotation interface. In this article, we also briefly present a case study centered on the task of SignWriting recognition, the original motivation for writing the software. Quevedo is written in Python, and distributed freely under the Open Software License version 3.0.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,676 |
inproceedings
|
saha-etal-2022-merkel
|
Merkel Podcast Corpus: A Multimodal Dataset Compiled from 16 Years of Angela Merkel's Weekly Video Podcasts
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.270/
|
Saha, Debjoy and Nayak, Shravan and Baumann, Timo
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2536--2540
|
We introduce the Merkel Podcast Corpus, an audio-visual-text corpus in German collected from 16 years of (almost) weekly Internet podcasts of former German chancellor Angela Merkel. To the best of our knowledge, this is the first single speaker corpus in the German language consisting of audio, visual and text modalities of comparable size and temporal extent. We describe the methods used with which we have collected and edited the data which involves downloading the videos, transcripts and other metadata, forced alignment, performing active speaker recognition and face detection to finally curate the single speaker dataset consisting of utterances spoken by Angela Merkel. The proposed pipeline is general and can be used to curate other datasets of similar nature, such as talk show contents. Through various statistical analyses and applications of the dataset in talking face generation and TTS, we show the utility of the dataset. We argue that it is a valuable contribution to the research community, in particular, due to its realistic and challenging material at the boundary between prepared and spontaneous speech.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,677 |
inproceedings
|
mukushev-etal-2022-crowdsourcing
|
Crowdsourcing {K}azakh-{R}ussian {S}ign {L}anguage: {F}luent{S}igners-50
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.271/
|
Mukushev, Medet and Kydyrbekova, Aigerim and Imashev, Alfarabi and Kimmelman, Vadim and Sandygulova, Anara
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2541--2547
|
This paper presents the methodology we used to crowdsource a data collection of a new large-scale signer independent dataset for Kazakh-Russian Sign Language (KRSL) created for Sign Language Processing. By involving the Deaf community throughout the research process, we firstly designed a research protocol and then performed an efficient crowdsourcing campaign that resulted in a new FluentSigners-50 dataset. The FluentSigners-50 dataset consists of 173 sentences performed by 50 KRSL signers for 43,250 video samples. Dataset contributors recorded videos in real-life settings on various backgrounds using various devices such as smartphones and web cameras. Therefore, each dataset contribution has a varying distance to the camera, camera angles and aspect ratio, video quality, and frame rates. Additionally, the proposed dataset contains a high degree of linguistic and inter-signer variability and thus is a better training set for recognizing a real-life signed speech. FluentSigners-50 is publicly available at \url{https://krslproject.github.io/fluentsigners-50/}
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,678 |
inproceedings
|
nugues-2022-connecting
|
Connecting a {F}rench Dictionary from the Beginning of the 20th Century to {W}ikidata
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.272/
|
Nugues, Pierre
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2548--2555
|
The Petit Larousse illustr{\'e} is a French dictionary first published in 1905. Its division in two main parts on language and on history and geography corresponds to a major milestone in French lexicography as well as a repository of general knowledge from this period. Although the value of many entries from 1905 remains intact, some descriptions now have a dimension that is more historical than contemporary. They are nonetheless significant to analyze and understand cultural representations from this time. A comparison with more recent information or a verification of these entries would require a tedious manual work. In this paper, we describe a new lexical resource, where we connected all the dictionary entries of the history and geography part to current data sources. For this, we linked each of these entries to a wikidata identifier. Using the wikidata links, we can automate more easily the identification, comparison, and verification of historically-situated representations. We give a few examples on how to process wikidata identifiers and we carried out a small analysis of the entities described in the dictionary to outline possible applications. The resource, i.e. the annotation of 20,245 dictionary entries with wikidata links, is available from GitHub (\url{https://github.com/pnugues/petit_larousse_1905/})
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,679 |
inproceedings
|
egg-kordoni-2022-metaphor
|
Metaphor annotation for {G}erman
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.273/
|
Egg, Markus and Kordoni, Valia
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2556--2562
|
The paper presents current work on a German corpus annotated for metaphor. Metaphors denote entities or situations that are in some sense similar to the literal referent, e.g., when {\textquotedblleft}Handschrift{\textquotedblright} {\textquoteleft}signature' is used in the sense of {\textquoteleft}distinguishing mark' or the suppression of hopes is introduced by the verb {\textquotedblleft}versch{\"u}tten{\textquotedblright} {\textquoteleft}bury'. The corpus is part of a project on register, hence, includes material from different registers that represent register variation along a number of important dimensions, but we believe that it is of interest to research on metaphor in general. The corpus extends previous annotation initiatives in that it not only annotates the metaphoric expressions themselves but also their respective relevant contexts that trigger a metaphorical interpretation of the expressions. For the corpus, we developed extended annotation guidelines, which specifically focus not only on the identification of these metaphoric contexts but also analyse in detail specific linguistic challenges for metaphor annotation that emerge due to the grammar of German.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,680 |
inproceedings
|
kutuzov-etal-2022-nordiachange
|
{N}or{D}ia{C}hange: Diachronic Semantic Change Dataset for {N}orwegian
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.274/
|
Kutuzov, Andrey and Touileb, Samia and M{\ae}hlum, Petter and Enstad, Tita and Wittemann, Alexandra
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2563--2572
|
We describe NorDiaChange: the first diachronic semantic change dataset for Norwegian. NorDiaChange comprises two novel subsets, covering about 80 Norwegian nouns manually annotated with graded semantic change over time. Both datasets follow the same annotation procedure and can be used interchangeably as train and test splits for each other. NorDiaChange covers the time periods related to pre- and post-war events, oil and gas discovery in Norway, and technological developments. The annotation was done using the DURel framework and two large historical Norwegian corpora. NorDiaChange is published in full under a permissive licence, complete with raw annotation data and inferred diachronic word usage graphs (DWUGs).
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,681 |
inproceedings
|
goncalo-oliveira-2022-exploring
|
Exploring Transformers for Ranking {P}ortuguese Semantic Relations
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.275/
|
Gon{\c{c}}alo Oliveira, Hugo
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2573--2582
|
We explored transformer-based language models for ranking instances of Portuguese lexico-semantic relations. Weights were based on the likelihood of natural language sequences that transmitted the relation instances, and expectations were that they would be useful for filtering out noisier instances. However, after analysing the weights, no strong conclusions were taken. They are not correlated with redundancy, but are lower for instances with longer and more specific arguments, which may nevertheless be a consequence of their sensitivity to the frequency of such arguments. They did also not reveal to be useful when computing word similarity with network embeddings. Despite the negative results, we see the reported experiments and insights as another contribution for better understanding transformer language models like BERT and GPT, and we make the weighted instances publicly available for further research.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,682 |
inproceedings
|
ferret-2022-building
|
Building Static Embeddings from Contextual Ones: Is It Useful for Building Distributional Thesauri?
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.276/
|
Ferret, Olivier
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2583--2590
|
While contextual language models are now dominant in the field of Natural Language Processing, the representations they build at the token level are not always suitable for all uses. In this article, we propose a new method for building word or type-level embeddings from contextual models. This method combines the generalization and the aggregation of token representations. We evaluate it for a large set of English nouns from the perspective of the building of distributional thesauri for extracting semantic similarity relations. Moreover, we analyze the differences between static embeddings and type-level embeddings according to features such as the frequency of words or the type of semantic relations these embeddings account for, showing that the properties of these two types of embeddings can be complementary and exploited for further improving distributional thesauri.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,683 |
inproceedings
|
wang-etal-2022-sentence
|
Sentence Selection Strategies for Distilling Word Embeddings from {BERT}
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.277/
|
Wang, Yixiao and Bouraoui, Zied and Espinosa Anke, Luis and Schockaert, Steven
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2591--2600
|
Many applications crucially rely on the availability of high-quality word vectors. To learn such representations, several strategies based on language models have been proposed in recent years. While effective, these methods typically rely on a large number of contextualised vectors for each word, which makes them impractical. In this paper, we investigate whether similar results can be obtained when only a few contextualised representations of each word can be used. To this end, we analyse a range of strategies for selecting the most informative sentences. Our results show that with a careful selection strategy, high-quality word vectors can be learned from as few as 5 to 10 sentences.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,684 |
inproceedings
|
baldissin-etal-2022-diawug
|
{D}ia{WUG}: A Dataset for Diatopic Lexical Semantic Variation in {S}panish
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.278/
|
Baldissin, Gioia and Schlechtweg, Dominik and Schulte im Walde, Sabine
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2601--2609
|
We provide a novel dataset {--} DiaWUG {--} with judgements on diatopic lexical semantic variation for six Spanish variants in Europe and Latin America. In contrast to most previous meaning-based resources and studies on semantic diatopic variation, we collect annotations on semantic relatedness for Spanish target words in their contexts from both a semasiological perspective (i.e., exploring the meanings of a word given its form, thus including polysemy) and an onomasiological perspective (i.e., exploring identical meanings of words with different forms, thus including synonymy). In addition, our novel dataset exploits and extends the existing framework DURel for annotating word senses in context (Erk et al., 2013; Schlechtweg et al., 2018) and the framework-embedded Word Usage Graphs (WUGs) {--} which up to now have mainly been used for semasiological tasks and resources {--} in order to distinguish, visualize and interpret lexical semantic variation of contextualized words in Spanish from these two perspectives, i.e., semasiological and onomasiological language variation.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,685 |
inproceedings
|
chen-hulden-2022-case
|
My Case, For an Adposition: Lexical Polysemy of Adpositions and Case Markers in {F}innish and {L}atin
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.279/
|
Chen, Daniel and Hulden, Mans
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2610--2616
|
Adpositions and case markers contain a high degree of polysemy and participate in unique semantic role configurations. We present a novel application of the SNACS supersense hierarchy to Finnish and Latin data by manually annotating adposition and case marker tokens in Finnish and Latin translations of Chapters IV-V of Le Petit Prince (The Little Prince). We evaluate the computational validity of the semantic role annotation categories by grouping raw, contextualized Multilingual BERT embeddings using k-means clustering.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,686 |
inproceedings
|
breit-etal-2022-wic
|
{W}i{C}-{TSV}-de: {G}erman Word-in-Context Target-Sense-Verification Dataset and Cross-Lingual Transfer Analysis
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.280/
|
Breit, Anna and Revenko, Artem and Blaschke, Narayani
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2617--2625
|
Target Sense Verification (TSV) describes the binary disambiguation task of deciding whether the intended sense of a target word in a context corresponds to a given target sense. In this paper, we introduce WiC-TSV-de, a multi-domain dataset for German Target Sense Verification. While the training and development sets consist of domain-independent instances only, the test set contains domain-bound subsets, originating from four different domains, being Gastronomy, Medicine, Hunting, and Zoology. The domain-bound subsets incorporate adversarial examples such as in-domain ambiguous target senses and context-mixing (i.e., using the target sense in an out-of-domain context) which contribute to the challenging nature of the presented dataset. WiC-TSV-de allows for the development of sense-inventory-independent disambiguation models that can generalise their knowledge for different domain settings. By combining it with the original English WiC-TSV benchmark, we performed monolingual and cross-lingual analysis, where the evaluated baseline models were not able to solve the dataset to a satisfying degree, leaving a big gap to human performance.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,687 |
inproceedings
|
el-boukkouri-etal-2022-train
|
Re-train or Train from Scratch? Comparing Pre-training Strategies of {BERT} in the Medical Domain
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.281/
|
El Boukkouri, Hicham and Ferret, Olivier and Lavergne, Thomas and Zweigenbaum, Pierre
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2626--2633
|
BERT models used in specialized domains all seem to be the result of a simple strategy: initializing with the original BERT and then resuming pre-training on a specialized corpus. This method yields rather good performance (e.g. BioBERT (Lee et al., 2020), SciBERT (Beltagy et al., 2019), BlueBERT (Peng et al., 2019)). However, it seems reasonable to think that training directly on a specialized corpus, using a specialized vocabulary, could result in more tailored embeddings and thus help performance. To test this hypothesis, we train BERT models from scratch using many configurations involving general and medical corpora. Based on evaluations using four different tasks, we find that the initial corpus only has a weak influence on the performance of BERT models when these are further pre-trained on a medical corpus.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,688 |
inproceedings
|
orlando-etal-2022-universal
|
Universal Semantic Annotator: the First Unified {API} for {WSD}, {SRL} and Semantic Parsing
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.282/
|
Orlando, Riccardo and Conia, Simone and Faralli, Stefano and Navigli, Roberto
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2634--2641
|
In this paper, we present the Universal Semantic Annotator (USeA), which offers the first unified API for high-quality automatic annotations of texts in 100 languages through state-of-the-art systems for Word Sense Disambiguation, Semantic Role Labeling and Semantic Parsing. Together, such annotations can be used to provide users with rich and diverse semantic information, help second-language learners, and allow researchers to integrate explicit semantic knowledge into downstream tasks and real-world applications.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,689 |
inproceedings
|
wahle-etal-2022-d3
|
D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.283/
|
Wahle, Jan Philip and Ruas, Terry and Mohammad, Saif and Gipp, Bela
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2642--2651
|
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15{\%} annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers' abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,690 |
inproceedings
|
roussis-etal-2022-scipar
|
{S}ci{P}ar: A Collection of Parallel Corpora from Scientific Abstracts
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.284/
|
Roussis, Dimitrios and Papavassiliou, Vassilis and Prokopidis, Prokopis and Piperidis, Stelios and Katsouros, Vassilis
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2652--2657
|
This paper presents SciPar, a new collection of parallel corpora created from openly available metadata of bachelor theses, master theses and doctoral dissertations hosted in institutional repositories, digital libraries of universities and national archives. We describe first how we harvested and processed metadata from 86, mainly European, repositories to extract bilingual titles and abstracts, and then how we mined high quality sentence pairs in a wide range of scientific areas and sub-disciplines. In total, the resource includes 9.17 million segment alignments in 31 language pairs and is publicly available via the ELRC-SHARE repository. The bilingual corpora in this collection could prove valuable in various applications, such as cross-lingual plagiarism detection or adapting Machine Translation systems for the translation of scientific texts and academic writing in general, especially for language pairs which include English.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,691 |
inproceedings
|
gavidia-etal-2022-cats
|
{CAT}s are Fuzzy {PET}s: A Corpus and Analysis of Potentially Euphemistic Terms
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.285/
|
Gavidia, Martha and Lee, Patrick and Feldman, Anna and Peng, Jing
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2658--2671
|
Euphemisms have not received much attention in natural language processing, despite being an important element of polite and figurative language. Euphemisms prove to be a difficult topic, not only because they are subject to language change, but also because humans may not agree on what is a euphemism and what is not. Nonetheless, the first step to tackling the issue is to collect and analyze examples of euphemisms. We present a corpus of potentially euphemistic terms (PETs) along with example texts from the GloWbE corpus. Additionally, we present a subcorpus of texts where these PETs are not being used euphemistically, which may be useful for future applications. We also discuss the results of multiple analyses run on the corpus. Firstly, we find that sentiment analysis on the euphemistic texts supports that PETs generally decrease negative and offensive sentiment. Secondly, we observe cases of disagreement in an annotation task, where humans are asked to label PETs as euphemistic or not in a subset of our corpus text examples. We attribute the disagreement to a variety of potential reasons, including if the PET was a commonly accepted term (CAT).
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,692 |
inproceedings
|
habash-etal-2022-camel
|
Camel Treebank: An Open Multi-genre {A}rabic Dependency Treebank
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.286/
|
Habash, Nizar and AbuOdeh, Muhammed and Taji, Dima and Faraj, Reem and El Gizuli, Jamila and Kallas, Omar
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2672--2681
|
We present the Camel Treebank (CAMELTB), a 188K word open-source dependency treebank of Modern Standard and Classical Arabic. CAMELTB 1.0 includes 13 sub-corpora comprising selections of texts from pre-Islamic poetry to social media online commentaries, and covering a range of genres from religious and philosophical texts to news, novels, and student essays. The texts are all publicly available (out of copyright, creative commons, or under open licenses). The texts were morphologically tokenized and syntactically parsed automatically, and then manually corrected by a team of trained annotators. The annotations follow the guidelines of the Columbia Arabic Treebank (CATiB) dependency representation. We discuss our annotation process and guideline extensions, and we present some initial observations on lexical and syntactic differences among the annotated sub-corpora. This corpus will be publicly available to support and encourage research on Arabic NLP in general and on new, previously unexplored genres that are of interest to a wider spectrum of researchers, from historical linguistics and digital humanities to computer-assisted language pedagogy.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,693 |
inproceedings
|
sotudeh-etal-2022-mentsum
|
{M}ent{S}um: A Resource for Exploring Summarization of Mental Health Online Posts
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.287/
|
Sotudeh, Sajad and Goharian, Nazli and Young, Zachary
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2682--2692
|
Mental health remains a significant challenge of public health worldwide. With increasing popularity of online platforms, many use the platforms to share their mental health conditions, express their feelings, and seek help from the community and counselors. Some of these platforms, such as Reachout, are dedicated forums where the users register to seek help. Others such as Reddit provide subreddits where the users publicly but anonymously post their mental health distress. Although posts are of varying length, it is beneficial to provide a short, but informative summary for fast processing by the counselors. To facilitate research in summarization of mental health online posts, we introduce Mental Health Summarization dataset, MentSum, containing over 24k carefully selected user posts from Reddit, along with their short user-written summary (called TLDR) in English from 43 mental health subreddits. This domain-specific dataset could be of interest not only for generating short summaries on Reddit, but also for generating summaries of posts on the dedicated mental health forums such as Reachout. We further evaluate both extractive and abstractive state-of-the-art summarization baselines in terms of Rouge scores, and finally conduct an in-depth human evaluation study of both user-written and system-generated summaries, highlighting challenges in this research.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,694 |
inproceedings
|
aumiller-gertz-2022-klexikon
|
Klexikon: A {G}erman Dataset for Joint Summarization and Simplification
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.288/
|
Aumiller, Dennis and Gertz, Michael
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2693--2701
|
Traditionally, Text Simplification is treated as a monolingual translation task where sentences between source texts and their simplified counterparts are aligned for training. However, especially for longer input documents, summarizing the text (or dropping less relevant content altogether) plays an important role in the simplification process, which is currently not reflected in existing datasets. Simultaneously, resources for non-English languages are scarce in general and prohibitive for training new solutions. To tackle this problem, we pose core requirements for a system that can jointly summarize and simplify long source documents. We further describe the creation of a new dataset for joint Text Simplification and Summarization based on German Wikipedia and the German children`s encyclopedia {\textquotedblleft}Klexikon{\textquotedblright}, consisting of almost 2,900 documents. We release a document-aligned version that particularly highlights the summarization aspect, and provide statistical evidence that this resource is well suited to simplification as well. Code and data are available on Github: \url{https://github.com/dennlinger/klexikon}
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,695 |
inproceedings
|
hartl-kruschwitz-2022-applying
|
Applying Automatic Text Summarization for Fake News Detection
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.289/
|
Hartl, Philipp and Kruschwitz, Udo
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2702--2713
|
The distribution of fake news is not a new but a rapidly growing problem. The shift to news consumption via social media has been one of the drivers for the spread of misleading and deliberately wrong information, as in addition to its ease of use there is rarely any veracity monitoring. Due to the harmful effects of such fake news on society, the detection of these has become increasingly important. We present an approach to the problem that combines the power of transformer-based language models while simultaneously addressing one of their inherent problems. Our framework, CMTR-BERT, combines multiple text representations, with the goal of circumventing sequential limits and related loss of information the underlying transformer architecture typically suffers from. Additionally, it enables the incorporation of contextual information. Extensive experiments on two very different, publicly available datasets demonstrate that our approach is able to set new state-of-the-art performance benchmarks. Apart from the benefit of using automatic text summarization techniques we also find that the incorporation of contextual information contributes to performance gains.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,696 |
inproceedings
|
meisinger-etal-2022-increasing
|
Increasing {CMDI}`s Semantic Interoperability with schema.org
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.290/
|
Meisinger, Nino and Trippel, Thorsten and Zinn, Claus
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2714--2720
|
The CLARIN Concept Registry (CCR) is the common semantic ground for most CMDI-based profiles to describe language-related resources in the CLARIN universe. While the CCR supports semantic interoperability within this universe, it does not extend beyond it. The flexibility of CMDI, however, allows users to use other term or concept registries when defining their metadata components. In this paper, we describe our use of schema.org, a light ontology used by many parties across disciplines.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,697 |
inproceedings
|
lange-aznar-2022-refco
|
{R}ef{C}o and its Checker: Improving Language Documentation Corpora`s Reusability Through a Semi-Automatic Review Process
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.291/
|
Lange, Herbert and Aznar, Jocelyn
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2721--2729
|
The QUEST (QUality ESTablished) project aims at ensuring the reusability of audio-visual datasets (Wamprechtshammer et al., 2022) by devising quality criteria and curating processes. RefCo (Reference Corpora) is an initiative within QUEST in collaboration with DoReCo (Documentation Reference Corpus, Paschen et al. (2020)) focusing on language documentation projects. Previously, Aznar and Seifart (2020) introduced a set of quality criteria dedicated to documenting fieldwork corpora. Based on these criteria, we establish a semi-automatic review process for existing and work-in-progress corpora, in particular for language documentation. The goal is to improve the quality of a corpus by increasing its reusability. A central part of this process is a template for machine-readable corpus documentation and automatic data verification based on this documentation. In addition to the documentation and automatic verification, the process involves a human review and potentially results in a RefCo certification of the corpus. For each of these steps, we provide guidelines and manuals. We describe the evaluation process in detail, highlight the current limits for automatic evaluation and how the manual review is organized accordingly.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,698 |
inproceedings
|
simon-2022-identification
|
Identification and Analysis of Personification in {H}ungarian: The {P}er{SEC}orp project
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.292/
|
Simon, G{\'a}bor
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2730--2738
|
Despite the recent findings on the conceptual and linguistic organization of personification, we have relatively little knowledge about its lexical patterns and grammatical templates. It is especially true in the case of Hungarian which has remained an understudied language regarding the constructions of figurative meaning generation. The present paper aims to provide a corpus-driven approach to personification analysis in the framework of cognitive linguistics. This approach is based on the building of a semi-automatically processed research corpus (the PerSE corpus) in which personifying linguistic structures are annotated manually. The present test version of the corpus consists of online car reviews written in Hungarian (10468 words altogether): the texts were tokenized, lemmatized, morphologically analyzed, syntactically parsed, and PoS-tagged with the e-magyar NLP tool. For the identification of personifications, the adaptation of the MIPVU protocol was used and combined with additional analysis of semantic relations within personifying multi-word expressions. The paper demonstrates the structure of the corpus as well as the levels of the annotation. Furthermore, it gives an overview of possible data types emerging from the analysis: lexical pattern, grammatical characteristics, and the construction-like behavior of personifications in Hungarian.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,699 |
inproceedings
|
silvano-etal-2022-iso
|
{ISO}-based Annotated Multilingual Parallel Corpus for Discourse Markers
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.293/
|
Silvano, Purifica{\c{c}}{\~a}o and Damova, Mariana and Ole{\v{s}}kevi{\v{c}}ien{\.{e}}, Giedr{\.{e}} Val{\={u}}nait{\.{e}} and Liebeskind, Chaya and Chiarcos, Christian and Trajanov, Dimitar and Truic{\u{a}}, Ciprian-Octavian and Apostol, Elena-Simona and Baczkowska, Anna
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2739--2749
|
Discourse markers carry information about the discourse structure and organization, and also signal local dependencies or the epistemological stance of the speaker. They provide instructions on how to interpret the discourse, and their study is paramount to understanding the mechanisms underlying discourse organization. This paper presents a new language resource, an ISO-based annotated multilingual parallel corpus for discourse markers. The corpus comprises nine languages: Bulgarian, Lithuanian, German, European Portuguese, Hebrew, Romanian, Polish, and Macedonian, with English as a pivot language. In order to represent the meaning of the discourse markers, we propose an annotation scheme of discourse relations from ISO 24617-8 with a plug-in to ISO 24617-2 for communicative functions. We describe an experiment in which we applied the annotation scheme to assess its validity. The results reveal that, although some extensions are required to cover all the multilingual data, it provides a proper representation of the discourse markers' value. Additionally, we report some relevant contrastive phenomena concerning the interpretation of discourse markers and their role in discourse. This first step will allow us to develop deep learning methods to identify and extract discourse relations and communicative functions, and to represent that information as Linguistic Linked Open Data (LLOD).
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,700 |
inproceedings
|
gimeno-gomez-martinez-hinarejos-2022-lip
|
{LIP}-{RTVE}: An Audiovisual Database for Continuous {S}panish in the Wild
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.294/
|
Gimeno-G{\'o}mez, David and Mart{\'i}nez-Hinarejos, Carlos-D.
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2750--2758
|
Speech is considered a multi-modal process in which hearing and vision are two fundamental pillars. In fact, several studies have demonstrated that the robustness of Automatic Speech Recognition systems can be improved when audio and visual cues are combined to represent the nature of speech. In addition, Visual Speech Recognition, an open research problem whose purpose is to interpret speech by reading the lips of the speaker, has been a focus of interest in the last decades. Nevertheless, in order to estimate these systems in the current Deep Learning era, large-scale databases are required. On the other hand, while most of these databases are dedicated to English, other languages lack sufficient resources. Thus, this paper presents a semi-automatically annotated audiovisual database to deal with unconstrained natural Spanish, providing 13 hours of data extracted from Spanish television. Furthermore, baseline results for both speaker-dependent and speaker-independent scenarios are reported using Hidden Markov Models, a traditional paradigm that has been widely used in the field of Speech Technologies.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,701 |
inproceedings
|
yun-etal-2022-modality
|
Modality Alignment between Deep Representations for Effective Video-and-Language Learning
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.295/
|
Yun, Hyeongu and Kim, Yongil and Jung, Kyomin
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2759--2770
|
Video-and-Language learning, such as video question answering or video captioning, is the next challenge for the deep learning community, as it pursues the way human intelligence perceives everyday life. These tasks require the ability of multi-modal reasoning, which is to handle both visual information and text information simultaneously across time. From this point of view, a cross-modality attention module that fuses video representation and text representation takes a critical role in most recent approaches. However, existing Video-and-Language models merely compute the attention weights without considering the different characteristics of video modality and text modality. Such a na{\"i}ve attention module hinders current models from fully enjoying the strength of cross-modality. In this paper, we propose a novel Modality Alignment method that benefits the cross-modality attention module by guiding it to easily amalgamate multiple modalities. Specifically, we exploit Centered Kernel Alignment (CKA), which was originally proposed to measure the similarity between two deep representations. Our method directly optimizes CKA to make an alignment between video and text embedding representations, hence it aids the cross-modality attention module in combining information over different modalities. Experiments on real-world Video QA tasks demonstrate that our method outperforms conventional multi-modal methods significantly, with a +3.57{\%} accuracy increment over the baseline on a popular benchmark dataset. Additionally, in a synthetic data environment, we show that learning the alignment with our method boosts the performance of the cross-modality attention.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,702 |
inproceedings
|
murat-etal-2022-mutual
|
Mutual Gaze and Linguistic Repetition in a Multimodal Corpus
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.296/
|
Murat, Anais and Koutsombogera, Maria and Vogel, Carl
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2771--2780
|
This paper investigates the correlation between mutual gaze and linguistic repetition, a form of alignment, which we take as evidence of mutual understanding. We focus on a multimodal corpus made of three-party conversations and explore the question of whether mutual gaze events correspond to moments of repetition or non-repetition. Our results, although mainly significant on word unigrams and bigrams, suggest positive correlations between the presence of mutual gaze and the repetitions of tokens, lemmas, or parts-of-speech, but negative correlations when it comes to paired levels of representation (tokens or lemmas associated with their part-of-speech). No compelling correlation is found with duration of mutual gaze. Results are strongest when ignoring punctuation as representations of pauses, intonation, etc. in counting aligned tokens.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,703 |
inproceedings
|
parisse-etal-2022-multidimensional
|
Multidimensional Coding of Multimodal Languaging in Multi-Party Settings
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.297/
|
Parisse, Christophe and Blondel, Marion and Ca{\"{e}}t, St{\'e}phanie and Danet, Claire and Vincent, Coralie and Morgenstern, Aliyah
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2781--2787
|
In natural language settings, many interactions include more than two speakers, and real-life interpretation is based on all types of information available in all modalities. This constitutes a challenge for corpus-based analyses because the information in the audio and visual channels must be included in the coding. The goal of the DINLANG project is to tackle that challenge and analyze spontaneous interactions in family dinner settings (two adults and two to three children). The families use either French or LSF (French sign language). Our aim is to compare how participants share language across the range of modalities found in vocal and visual languaging in coordination with dining. In order to pinpoint similarities and differences, we had to find a common coding tool for all situations (variations from one family to another) and modalities. Our coding procedure incorporates the use of the ELAN software. We created a template organized around participants, situations, and modalities, rather than around language forms. Spoken language transcription can be integrated, when it exists, but it is not mandatory. Data created with other software can be injected into ELAN files if it is linked using time stamps. Analyses performed with the coded files rely on ELAN's structured search functionalities, which allow fine-grained temporal analyses and which can be complemented by using spreadsheets or the R language.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,704 |
inproceedings
|
kyjanek-etal-2022-constructing
|
Constructing a Lexical Resource of {R}ussian Derivational Morphology
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.298/
|
Kyj{\'a}nek, Luk{\'a}{\v{s}} and Lyashevskaya, Olga and Nedoluzhko, Anna and Vodolazsky, Daniil and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2788--2797
|
Words of any language are to some extent related through the ways they are formed. For instance, the verb {\textquoteleft}exempl-ify' and the noun {\textquoteleft}example-s' are both based on the word {\textquoteleft}example', but the verb is derived from it, while the noun is inflected. In Natural Language Processing of Russian, inflection is handled satisfactorily; however, there are only a few machine-trackable resources that capture derivations, even though both of these morphological processes are very rich in Russian. Therefore, we devote this paper to improving one of the methods of constructing such resources and to the application of the method to a Russian lexicon, which results in the creation of the largest lexical resource of Russian derivational relations. The resulting database, dubbed DeriNet.RU, includes more than 300 thousand lexemes connected by more than 164 thousand binary derivational relations. To create such data, we combined existing machine-learning methods, which we improved to achieve this goal. The whole approach is evaluated on our newly created data set of manual, parallel annotation. The resulting DeriNet.RU is freely available under an open license agreement.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,705 |
inproceedings
|
khishigsuren-etal-2022-using
|
Using Linguistic Typology to Enrich Multilingual Lexicons: the Case of Lexical Gaps in Kinship
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.299/
|
Khishigsuren, Temuulen and Bella, G{\'a}bor and Batsuren, Khuyagbaatar and Freihat, Abed Alhakim and Chandran Nair, Nandu and Ganbold, Amarsanaa and Khalilia, Hadi and Chandrashekar, Yamini and Giunchiglia, Fausto
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2798--2807
|
This paper describes a method to enrich lexical resources with content relating to linguistic diversity, based on knowledge from the field of lexical typology. We capture the phenomenon of diversity through the notion of lexical gap and use a systematic method to infer gaps semi-automatically on a large scale, which we demonstrate on the kinship domain. The resulting free diversity-aware terminological resource consists of 198 concepts, 1,911 words, and 37,370 gaps in 699 languages. We see great potential in the use of resources such as ours for the improvement of a variety of cross-lingual NLP tasks, which we illustrate through an application in the evaluation of machine translation systems.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,706 |
inproceedings
|
paikens-etal-2022-towards
|
Towards {L}atvian {W}ord{N}et
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.300/
|
Paikens, Peteris and Grasmanis, Mikus and Klints, Agute and Lokmane, Ilze and Pretkalni{\c{n}}a, Lauma and Rituma, Laura and St{\={a}}de, Madara and Strankale, Laine
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2808--2815
|
In this paper we describe our current work on creating a WordNet for Latvian based on the principles of the Princeton WordNet. The chosen methodology for word sense definition and sense linking is based on corpus evidence and the existing Tezaurs.lv online dictionary, ensuring a foundation that fits Latvian language usage and the existing linguistic tradition. We cover a wide set of semantic relations, including gradation sets. Currently the dataset consists of 6432 words linked in 5528 synsets, out of which 2717 synsets are considered fully completed, as they have all the outgoing semantic links annotated, corpus examples for each sense, and links to the English Princeton WordNet.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,707 |
inproceedings
|
liu-etal-2022-building
|
Building Sentiment Lexicons for {M}ainland {S}candinavian Languages Using Machine Translation and Sentence Embeddings
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.301/
|
Liu, Peng and Marco, Cristina and Gulla, Jon Atle
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2816--2825
|
This paper presents a simple but effective method to build sentiment lexicons for the three Mainland Scandinavian languages: Danish, Norwegian and Swedish. This method benefits from the English Sentiwordnet and a thesaurus in one of the target languages. Sentiment information from the English resource is mapped to the target languages by using machine translation and similarity measures based on sentence embeddings. A number of experiments with Scandinavian languages are performed in order to determine the best working sentence embedding algorithm for this task. A careful extrinsic evaluation on several datasets yields state-of-the-art results using a simple rule-based sentiment analysis algorithm. The resources are made freely available under an MIT License.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,708 |
inproceedings
|
nimb-etal-2022-thesaurus
|
A Thesaurus-based Sentiment Lexicon for {D}anish: The {D}anish Sentiment Lexicon
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.302/
|
Nimb, Sanni and Olsen, Sussi and Pedersen, Bolette and Troelsg{\r{a}}rd, Thomas
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2826--2832
|
This paper describes how a newly published Danish sentiment lexicon with a high lexical coverage was compiled by use of lexicographic methods, based on the links between groups of words listed in semantic order in a thesaurus and the corresponding word sense descriptions in a comprehensive monolingual dictionary. The overall idea was to identify negative and positive sections in a thesaurus, extract the words from these sections, and combine them with the dictionary information via the links. The annotation task of the dataset included several steps, and was based on the comparison of synonyms and near synonyms within a semantic field. In the cases where one of the words was included in the smaller Danish sentiment lexicon AFINN, its value there was used as inspiration and expanded to the synonyms when appropriate. In order to obtain a more practical lexicon with overall polarity values at lemma level, all the senses of the lemma were afterwards compared, taking into consideration dictionary information such as usage, style and frequency. The final lexicon contains 13,859 Danish polarity lemmas and includes morphological information. It is freely available at \url{https://github.com/dsldk/danish-sentiment-lexicon} (licence CC-BY-SA 4.0 International).
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,709 |
inproceedings
|
chandran-nair-etal-2022-indoukc
|
{I}ndo{UKC}: A Concept-Centered {I}ndian Multilingual Lexical Resource
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.303/
|
Chandran Nair, Nandu and Velayuthan, Rajendran S. and Chandrashekar, Yamini and Bella, G{\'a}bor and Giunchiglia, Fausto
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2833--2840
|
We introduce the IndoUKC, a new multilingual lexical database comprised of eighteen Indian languages, with a focus on formally capturing words and word meanings specific to Indian languages and cultures. The IndoUKC reuses content from the existing IndoWordNet resource while providing a new model for the cross-lingual mapping of lexical meanings that allows for a richer, diversity-aware representation. Accordingly, beyond a thorough syntactic and semantic cleaning, the IndoWordNet lexical content has been thoroughly remodeled in order to allow a more precise expression of language-specific meaning. The resulting database is made available both for browsing through a graphical web interface and for download through the LiveLanguage data catalogue.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,710 |
inproceedings
|
kim-etal-2022-korean
|
{K}orean Language Modeling via Syntactic Guide
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.304/
|
Kim, Hyeondey and Kim, Seonhoon and Kang, Inho and Kwak, Nojun and Fung, Pascale
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2841--2849
|
While pre-trained language models play a vital role in modern language processing tasks, not every language can benefit from them. Most existing research on pre-trained language models focuses primarily on widely used languages such as English, Chinese, and Indo-European languages. Additionally, such schemes usually require extensive computational resources alongside a large amount of data, which is infeasible for less widely used languages. We aim to address this research niche by building a language model that understands the linguistic phenomena of the target language and that can be trained with low resources. In this paper, we discuss Korean language modeling, specifically methods for language representation and pre-training. With our Korean-specific language representation, we are able to build more powerful language models for Korean understanding, even with fewer resources. The paper proposes chunk-wise reconstruction of the Korean language based on a widely used transformer architecture and bidirectional language representation. We also introduce morphological features such as Part-of-Speech (PoS) into language understanding by leveraging such information during pre-training. Our experimental results prove that the proposed methods improve model performance on the investigated Korean language understanding tasks.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,711 |
inproceedings
|
zirikly-etal-2022-whole
|
A Whole-Person Function Dictionary for the Mobility, Self-Care and Domestic Life Domains: a Seedset Expansion Approach
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.305/
|
Zirikly, Ayah and Desmet, Bart and Porcino, Julia and Camacho Maldonado, Jonathan and Ho, Pei-Shu and Jimenez Silva, Rafael and Sacco, Maryanne
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2850--2855
|
Whole-person functional limitations in the areas of mobility, self-care and domestic life affect a majority of individuals with disabilities. Detecting, recording and monitoring such limitations would benefit those individuals, as well as research on whole-person functioning and general public health. Dictionaries of terms related to whole-person function would enable automated identification and extraction of relevant information. However, no such terminologies currently exist, due in part to a lack of standardized coding and their availability mainly in free text clinical notes. In this paper, we introduce terminologies of whole-person function in the domains of mobility, self-care and domestic life, built and evaluated using a small set of manually annotated clinical notes, which provided a seedset that was expanded using a mix of lexical and deep learning approaches.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,712 |
inproceedings
|
giouli-etal-2022-placing
|
Placing multi-modal, and multi-lingual Data in the Humanities Domain on the Map: the Mythotopia Geo-tagged Corpus
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.306/
|
Giouli, Voula and Vacalopoulou, Anna and Sidiropoulos, Nikolaos and Flouda, Christina and Doupas, Athanasios and Giannopoulos, Giorgos and Bikakis, Nikos and Kaffes, Vassilis and Stainhaouer, Gregory
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2856--2864
|
The paper gives an account of an infrastructure that will be integrated into a platform aimed at providing a multi-faceted experience to visitors of Northern Greece using mythology as a starting point. This infrastructure comprises a multi-lingual and multi-modal corpus (i.e., a corpus of textual data supplemented with images and video) that belongs to the humanities domain, along with a dedicated database (content management system) with advanced indexing, linking and search functionalities. We will present the corpus itself, focusing on the content, the methodology adopted for its development, and the steps taken towards rendering it accessible via the database in a way that also facilitates useful visualizations. In this context, we tried to address three main challenges: (a) to add a novel annotation layer, namely geotagging, (b) to ensure the long-term maintenance of and accessibility to the highly heterogeneous primary data {--} even after the life cycle of the current project {--} by adopting a metadata schema that is compatible with existing standards; and (c) to render the corpus a useful resource for scholarly research in the digital humanities by adding a minimum set of linguistic annotations.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,713 |
inproceedings
|
ohya-2022-architecture
|
An Architecture of resolving a multiple link path in a standoff-style data format to enhance the mobility of language resources
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.307/
|
Ohya, Kazushi
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2865--2873
|
The present data formats proposed by authoritative organizations are based on a so-called standoff-style data format in XML, which represents a semantic data model through an instance structure and a link structure. However, this type of data format, intended to enhance the representational power of an XML format, impairs the mobility of data, because an abstract data structure denoted by multiple link paths is hard to convert into other data structures. This difficulty causes a problem in the reuse of data for conversion into other data formats, especially in a personal data management environment. In this paper, in order to compensate for this drawback, we propose a new concept of transforming a link structure into an instance structure under a new mark-up scheme. This approach to language data brings a new architecture of language data management that realizes a personal data management environment for daily, long-term use.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,714 |
inproceedings
|
romberg-etal-2022-corpus
|
A Corpus of {G}erman Citizen Contributions in Mobility Planning: Supporting Evaluation Through Multidimensional Classification
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.308/
|
Romberg, Julia and Mark, Laura and Escher, Tobias
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2874--2883
|
Political authorities in democratic countries regularly consult the public in order to allow citizens to voice their ideas and concerns on specific issues. When trying to evaluate the (often large number of) contributions by the public in order to inform decision-making, authorities regularly face challenges due to restricted resources. We identify several tasks whose automated support can help in the evaluation of public participation. These are i) the recognition of arguments, more precisely premises and their conclusions, ii) the assessment of the concreteness of arguments, iii) the detection of textual descriptions of locations in order to assign citizens' ideas to a spatial location, and iv) the thematic categorization of contributions. To enable future research efforts to develop techniques addressing these four tasks, we introduce the CIMT PartEval Corpus, a new publicly-available German-language corpus that includes several thousand citizen contributions from six mobility-related planning processes in five German municipalities. The corpus provides annotations for each of these tasks which have not been available in German for the domain of public participation before either at all or in this scope and variety.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,715 |
inproceedings
|
lesage-etal-2022-overlooked
|
Overlooked Data in Typological Databases: What Grambank Teaches Us About Gaps in Grammars
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.309/
|
Lesage, Jakob and Haynie, Hannah J. and Skirg{\r{a}}rd, Hedvig and Weber, Tobias and Witzlack-Makarevich, Alena
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2884--2890
|
Typological databases can contain a wealth of information beyond the collection of linguistic properties across languages. This paper shows how information often overlooked in typological databases can inform the research community about the state of description of the world's languages. We illustrate this using Grambank, a morphosyntactic typological database covering 2,467 language varieties and based on 3,951 grammatical descriptions. We classify and quantify the comments that accompany coded values in Grambank. We then aggregate these comments and the coded values to derive a level of description for 17 grammatical domains that Grambank covers (negation, adnominal modification, participant marking, tense, aspect, etc.). We show that the description level of grammatical domains varies across space and time. Information about gaps and uncertainties in the descriptive knowledge of grammatical domains within and across languages is essential for a correct analysis of data in typological databases and for the study of grammatical diversity more generally. When collected in a database, such information feeds into disciplines that focus on primary data collection, such as grammaticography and language documentation.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,716 |
inproceedings
|
mccarthy-dore-2022-hong
|
{H}ong {K}ong: Longitudinal and Synchronic Characterisations of Protest News between 1998 and 2020
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.310/
|
McCarthy, Arya D. and Dore, Giovanna Maria Dora
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2891--2900
|
This paper showcases the utility and timeliness of the Hong Kong Protest News Dataset, a highly curated collection of news articles from diverse news sources, to investigate longitudinal and synchronic news characterisations of protests in Hong Kong between 1998 and 2020. The properties of the dataset enable us to apply natural language processing to its 4522 articles and thereby study patterns of journalistic practice across newspapers. This paper sheds light on whether depth and/or manner of reporting changed over time, and if so, in what ways, or in response to what. In its focus and methodology, this paper helps bridge the gap between {\textquotedblleft}validity-focused methodological debates{\textquotedblright} and the use of computational methods of analysis in the social sciences.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,717 |
inproceedings
|
volk-etal-2022-nunc
|
Nunc profana tractemus. Detecting Code-Switching in a Large Corpus of 16th Century Letters
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.311/
|
Volk, Martin and Fischer, Lukas and Scheurer, Patricia and Schroffenegger, Bernard Silvan and Schwitter, Raphael and Str{\"o}bel, Phillip and Suter, Benjamin
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2901--2908
|
This paper is based on a collection of 16th century letters from and to the Zurich reformer Heinrich Bullinger. Around 12,000 letters of this exchange have been preserved, out of which 3,100 have been professionally edited, and another 5,500 are available as provisional transcriptions. We have investigated code-switching in these 8,600 letters, first on the sentence level and then on the word level. In this paper we give an overview of the corpus and its language mix (mostly Early New High German and Latin, but also French, Greek, Italian and Hebrew). We report on our experiences with a popular language identifier and present our results when training an alternative identifier on a very small training corpus of only 150 sentences per language. We use the automatically labeled sentences in order to bootstrap a word-based language classifier which works with high accuracy. Our research around the corpus building and annotation involves automatic handwritten text recognition, text normalisation for Early New High German, and machine translation from medieval Latin into modern German.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,718 |
inproceedings
|
mikulova-etal-2022-quality
|
Quality and Efficiency of Manual Annotation: Pre-annotation Bias
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.312/
|
Mikulov{\'a}, Marie and Straka, Milan and {\v{S}}t{\v{e}}p{\'a}nek, Jan and {\v{S}}t{\v{e}}p{\'a}nkov{\'a}, Barbora and Hajic, Jan
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2909--2918
|
This paper presents an analysis of annotation using an automatic pre-annotation for a mid-level annotation complexity task - dependency syntax annotation. It compares the annotation efforts made by annotators using a pre-annotated version (with a high-accuracy parser) and those made by fully manual annotation. The aim of the experiment is to judge the final annotation quality when pre-annotation is used. In addition, it evaluates the effect of automatic linguistically-based (rule-formulated) checks and another annotation on the same data available to the annotators, and their influence on annotation quality and efficiency. The experiment confirmed that the pre-annotation is an efficient tool for faster manual syntactic annotation which increases the consistency of the resulting annotation without reducing its quality.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,719 |
inproceedings
|
ocal-etal-2022-comprehensive
|
A Comprehensive Evaluation and Correction of the {T}ime{B}ank Corpus
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.313/
|
Ocal, Mustafa and Radas, Antonela and Hummer, Jared and Megerdoomian, Karine and Finlayson, Mark
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2919--2927
|
TimeML is an annotation scheme for capturing temporal information in text. The developers of TimeML built the TimeBank corpus to both validate the scheme and provide a rich dataset of events, temporal expressions, and temporal relationships for training and testing temporal analysis systems. In our own work we have been developing methods aimed at TimeML graphs for detecting (and eventually automatically correcting) temporal inconsistencies, extracting timelines, and assessing temporal indeterminacy. In the course of this investigation we identified numerous previously unrecognized issues in the TimeBank corpus, including multiple violations of TimeML annotation guide rules, incorrectly disconnected temporal graphs, as well as inconsistent, redundant, missing, or otherwise incorrect annotations. We describe our methods for detecting and correcting these problems, which include: (a) automatic guideline checking (109 violations); (b) automatic inconsistency checking (65 inconsistent files); (c) automatic disconnectivity checking (625 incorrect breakpoints); and (d) manual comparison with the output of state-of-the-art automatic annotators to identify missing annotations (317 events, 52 temporal expressions). We provide our code as well as a set of patch files that can be applied to the TimeBank corpus to produce a corrected version for use by other researchers in the field.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,720 |
inproceedings
|
tripodi-etal-2022-evaluating
|
Evaluating Multilingual Sentence Representation Models in a Real Case Scenario
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.314/
|
Tripodi, Rocco and Blloshmi, Rexhina and Levis Sullam, Simon
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2928--2939
|
In this paper, we present an evaluation of sentence representation models on the paraphrase detection task. The evaluation is designed to simulate a real-world problem of plagiarism and is based on one of the most important cases of forgery in modern history: the so-called {\textquotedblleft}Protocols of the Elders of Zion{\textquotedblright}. The sentence pairs for the evaluation are taken from the infamous forged text {\textquotedblleft}Protocols of the Elders of Zion{\textquotedblright} (Protocols) by unknown authors, and from {\textquotedblleft}Dialogue in Hell between Machiavelli and Montesquieu{\textquotedblright} by Maurice Joly. Scholars have demonstrated that the first text plagiarizes from the second, indicating all the forged parts on qualitative grounds. Following this evidence, we organized the rephrased texts and asked native speakers to quantify the level of similarity between each pair. We used this material to evaluate sentence representation models in two languages: English and French, and on three tasks: similarity correlation, paraphrase identification, and paraphrase retrieval. Our evaluation aims at encouraging the development of benchmarks based on real-world problems, as a means to prevent problems connected to AI hype, and to use NLP technologies for social good. Through our evaluation, we are able to confirm that the infamous Protocols are actually a plagiarized text but, as we will show, we encounter several problems connected with the convoluted nature of the task, which is very different from the one reported in standard benchmarks of paraphrase detection and sentence similarity. Code and data available at \url{https://github.com/roccotrip/protocols}.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,721 |
inproceedings
|
baledent-etal-2022-validity
|
Validity, Agreement, Consensuality and Annotated Data Quality
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.315/
|
Baledent, Ana{\"e}lle and Mathet, Yann and Widl{\"o}cher, Antoine and Couronne, Christophe and Manguin, Jean-Luc
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2940--2948
|
Reference annotated (or gold-standard) datasets are required for various common tasks such as training for machine learning systems or system validation. They are necessary to analyse or compare occurrences or items annotated by experts, or to compare objects resulting from any computational process to objects annotated by experts. But, even if reference annotated gold-standard corpora are required, their production is known as a difficult problem, from both a theoretical and practical point of view. Many studies devoted to these issues conclude that multi-annotation is most of the time a necessity. The inter-annotator agreement measure, which is required to check the reliability of data and the reproducibility of an annotation task, and thus to establish a gold standard, is another thorny problem. Fine analysis of available metrics for this specific task then becomes essential. Our work is part of this effort and more precisely focuses on several problems, which are rarely discussed, although they are intrinsically linked with the interpretation of metrics. In particular, we focus here on the complex relations between agreement and reference (of which agreement among annotators is supposed to be an indicator), and the emergence of consensus. We also introduce the notion of consensuality as another relevant indicator.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,722 |
inproceedings
|
mdhaffar-etal-2022-impact
|
Impact Analysis of the Use of Speech and Language Models Pretrained by Self-Supervision for Spoken Language Understanding
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.316/
|
Mdhaffar, Salima and Pelloin, Valentin and Caubri{\`e}re, Antoine and Laperriere, Ga{\"e}lle and Ghannay, Sahar and Jabaian, Bassam and Camelin, Nathalie and Est{\`e}ve, Yannick
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2949--2956
|
Pretrained models through self-supervised learning have been recently introduced for both acoustic and language modeling. Applied to spoken language understanding tasks, these models have shown their great potential by improving the state-of-the-art performances on challenging benchmark datasets. In this paper, we present an error analysis of the results obtained with such models on the French MEDIA benchmark dataset, known as being one of the most challenging benchmarks for the slot filling task among all the benchmarks accessible to the entire research community. One year ago, the state-of-the-art system reached a Concept Error Rate (CER) of 13.6{\%} through the use of an end-to-end neural architecture. Some months later, a cascade approach based on the sequential use of a fine-tuned wav2vec2.0 model and a fine-tuned BERT model reaches a CER of 11.2{\%}. This significant improvement raises questions about the type of errors that remain difficult to treat, but also about those that have been corrected using these models pretrained through self-supervised learning on a large amount of data. This study brings some answers in order to better understand the limits of such models and opens new perspectives to continue improving the performance.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,723 |
inproceedings
|
kurihara-etal-2022-jglue
|
{JGLUE}: {J}apanese General Language Understanding Evaluation
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.317/
|
Kurihara, Kentaro and Kawahara, Daisuke and Shibata, Tomohide
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2957--2966
|
To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,724 |
inproceedings
|
akhlaghi-etal-2022-using
|
Using the {LARA} Little Prince to compare human and {TTS} audio quality
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.318/
|
Akhlaghi, Elham and Au{\dh}unard{\'o}ttir, Ingibj{\"o}rg I{\dh}a and B{\k{a}}czkowska, Anna and B{\'e}di, Branislav and Beedar, Hakeem and Berthelsen, Harald and Chua, Cathy and Cucchiarin, Catia and Habibi, Hanieh and Horv{\'a}thov{\'a}, Ivana and Ikeda, Junta and Maizonniaux, Christ{\`e}le and N{\'i} Chiar{\'a}in, Neasa and Raheb, Chadi and Rayner, Manny and Sloan, John and Tsourakis, Nikos and Yao, Chunlin
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2967--2975
|
A popular idea in Computer Assisted Language Learning (CALL) is to use multimodal annotated texts, with annotations typically including embedded audio and translations, to support L2 learning through reading. An important question is how to create good quality audio, which can be done either through human recording or by a Text-To-Speech (TTS) engine. We may reasonably expect TTS to be quicker and easier, but human recording to be of higher quality. Here, we report a study using the open source LARA platform and ten languages. Samples of audio totalling about five minutes, representing the same four passages taken from LARA versions of Saint-Exup{\'e}ry's {\textquotedblleft}Le petit prince{\textquotedblright}, were provided for each language in both human and TTS form; the passages were chosen to instantiate the 2x2 cross product of the conditions dialogue, not-dialogue and humour, not-humour. 251 subjects used a web form to compare human and TTS versions of each item and rate the voices as a whole. For the three languages where TTS did best, English, French and Irish, the evidence from this study and the previous one it extended suggests that TTS audio is now pedagogically adequate and roughly comparable with a non-professional human voice in terms of exemplifying correct pronunciation and prosody. It was however still judged substantially less natural and less pleasant to listen to. No clear evidence was found to support the hypothesis that dialogue and humour pose special problems for TTS. All data and software will be made freely available.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,725 |
inproceedings
|
emmery-etal-2022-cyberbullying
|
Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.319/
|
Emmery, Chris and K{\'a}d{\'a}r, {\'A}kos and Chrupa{\l}a, Grzegorz and Daelemans, Walter
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2976--2988
|
A limited number of studies investigate the role of model-agnostic adversarial behavior in toxic content classification. As toxicity classifiers predominantly rely on lexical cues, (deliberately) creative and evolving language use can be detrimental to the utility of current corpora and state-of-the-art models when they are deployed for content moderation. The less training data is available, the more vulnerable models might become. This study is, to our knowledge, the first to investigate the effect of adversarial behavior and augmentation for cyberbullying detection. We demonstrate that model-agnostic lexical substitutions significantly hurt classifier performance. Moreover, when these perturbed samples are used for augmentation, we show models become robust against word-level perturbations at a slight trade-off in overall task performance. Augmentations proposed in prior work on toxicity prove to be less effective. Our results underline the need for such evaluations in online harm areas with small corpora.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,726 |
inproceedings
|
ellison-same-2022-constructing
|
Constructing Distributions of Variation in Referring Expression Type from Corpora for Model Evaluation
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.320/
|
Ellison, T. Mark and Same, Fahime
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2989--2997
|
The generation of referring expressions (REs) is a non-deterministic task. However, the algorithms for the generation of REs are standardly evaluated against corpora of written texts which include only one RE per reference. Our goal in this work is firstly to reproduce one of the few studies taking the distributional nature of RE generation into account. We add to this work by introducing a method for exploring variation in human RE choice on the basis of longitudinal corpora - substantial corpora with a single human judgement (in the process of composition) per RE. We focus on the prediction of RE types: proper name, description and pronoun. We compare evaluations made against distributions over these types with evaluations made against parallel human judgements. Our results show agreement in the evaluation of learning algorithms against distributions constructed from parallel human evaluations and from longitudinal data.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,727 |
inproceedings
|
perevalov-etal-2022-knowledge
|
Knowledge Graph Question Answering Leaderboard: A Community Resource to Prevent a Replication Crisis
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.321/
|
Perevalov, Aleksandr and Yan, Xi and Kovriguina, Liubov and Jiang, Longquan and Both, Andreas and Usbeck, Ricardo
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
2998--3007
|
Data-driven systems need to be evaluated to establish trust in the scientific approach and its applicability. In particular, this is true for Knowledge Graph (KG) Question Answering (QA), where complex data structures are made accessible via natural-language interfaces. Evaluating the capabilities of these systems has been a driver for the community for more than ten years while establishing different KGQA benchmark datasets. However, comparing different approaches is cumbersome. The lack of existing and curated leaderboards leads to a missing global view of the research field and could inject mistrust into the results. In particular, the latest and most-used datasets in the KGQA community, LC-QuAD and QALD, fail to provide central and up-to-date points of trust. In this paper, we survey and analyze a wide range of evaluation results with significant coverage of 100 publications and 98 systems from the last decade. We provide a new central and open leaderboard for any KGQA benchmark dataset as a focal point for the community - \url{https://kgqa.github.io/leaderboard/}. Our analysis highlights existing problems during the evaluation of KGQA systems. Thus, we point to possible improvements for future evaluations.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,728 |
inproceedings
|
takase-okazaki-2022-multi
|
Multi-Task Learning for Cross-Lingual Abstractive Summarization
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.322/
|
Takase, Sho and Okazaki, Naoaki
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3008--3016
|
We present a multi-task learning framework for cross-lingual abstractive summarization to augment training data. Recent studies constructed pseudo cross-lingual abstractive summarization data to train their neural encoder-decoders. Meanwhile, we introduce existing genuine data such as translation pairs and monolingual abstractive summarization data into training. Our proposed method, Transum, attaches a special token to the beginning of the input sentence to indicate the target task. The special token enables us to incorporate the genuine data into the training data easily. The experimental results show that Transum achieves better performance than the model trained with only pseudo cross-lingual summarization data. In addition, we achieve the top ROUGE score on Chinese-English and Arabic-English abstractive summarization. Moreover, Transum also has a positive effect on machine translation. Experimental results indicate that Transum improves the performance from the strong baseline, Transformer, in Chinese-English, Arabic-English, and English-Japanese translation datasets.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,729 |
inproceedings
|
castilho-2022-much
|
How Much Context Span is Enough? Examining Context-Related Issues for Document-level {MT}
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.323/
|
Castilho, Sheila
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3017--3025
|
This paper analyses how much context span is necessary to solve different context-related issues, namely, reference, ellipsis, gender, number, lexical ambiguity, and terminology when translating from English into Portuguese. We use the DELA corpus, which consists of 60 documents and six different domains (subtitles, literary, news, reviews, medical, and legislation). We find that the shortest context span needed to disambiguate an issue can appear in different positions in the document, including the preceding context, the following context, the global context, and world knowledge. The average span length depends on the issue type as well as the domain. Moreover, we show that the standard approach of relying on only two preceding sentences as context might not be enough depending on the domain and issue types.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,730 |
inproceedings
|
gete-etal-2022-tando
|
{TANDO}: A Corpus for Document-level Machine Translation
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.324/
|
Gete, Harritxu and Etchegoyhen, Thierry and Ponce, David and Labaka, Gorka and Aranberri, Nora and Corral, Ander and Saralegi, Xabier and Ellakuria, Igor and Martin, Maite
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3026--3037
|
Document-level Neural Machine Translation aims to increase the quality of neural translation models by taking into account contextual information. Properly modelling information beyond the sentence level can result in improved machine translation output in terms of coherence, cohesion and consistency. Suitable corpora for context-level modelling are necessary to both train and evaluate context-aware systems, but are still relatively scarce. In this work we describe TANDO, a document-level corpus for the under-resourced Basque-Spanish language pair, which we share with the scientific community. The corpus is composed of parallel data from three different domains and has been prepared with context-level information. Additionally, the corpus includes contrastive test sets for fine-grained evaluations of gender and register contextual phenomena on both source and target language sides. To establish the usefulness of the corpus, we trained and evaluated baseline Transformer models and context-aware variants based on context concatenation. Our results indicate that the corpus is suitable for fine-grained evaluation of document-level machine translation systems.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,731 |
inproceedings
|
de-gibert-bonet-etal-2022-unsupervised
|
Unsupervised Machine Translation in Real-World Scenarios
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.325/
|
de Gibert Bonet, Ona and Goenaga, Iakes and Armengol-Estap{\'e}, Jordi and Perez-de-Vi{\~n}aspre, Olatz and Parra Escart{\'i}n, Carla and Sanchez, Marina and Pinnis, M{\={a}}rcis and Labaka, Gorka and Melero, Maite
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3038--3047
|
In this work, we present the work that has been carried on in the MT4All CEF project and the resources that it has generated by leveraging recent research carried out in the field of unsupervised learning. In the course of the project 18 monolingual corpora for specific domains and languages have been collected, and 12 bilingual dictionaries and translation models have been generated. As part of the research, the unsupervised MT methodology based only on monolingual corpora (Artetxe et al., 2017) has been tested on a variety of languages and domains. Results show that in specialised domains, when there is enough monolingual in-domain data, unsupervised results are comparable to those of general domain supervised translation, and that, at any rate, unsupervised techniques can be used to boost results whenever very little data is available.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,732 |
inproceedings
|
ashida-etal-2022-covid
|
{COVID}-19 Mythbusters in World Languages
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.326/
|
Ashida, Mana and Kim, Jin-Dong and Lee, Seunghun
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3048--3055
|
This paper introduces a multi-lingual database containing translated texts of COVID-19 mythbusters. The database has translations into 115 languages as well as the original English texts, which are published by the World Health Organization (WHO). This paper then presents preliminary analyses on Latin-alphabet-based texts to see the potential of the database as a resource for multilingual linguistic analyses. The analyses on Latin-alphabet-based texts gave interesting insights into the resource. While the amount of translated text in each language was small, character bi-grams with normalization (lowercasing and removal of diacritics) turned out to be an effective proxy for measuring the similarity of the languages, and the affinity ranking of language pairs could be obtained. Additionally, a hierarchical clustering analysis is performed using the character bigram overlap ratio of every possible pair of languages. The result shows clusters of Germanic languages, Romance languages, and Southern Bantu languages. In sum, the multilingual database not only offers a fixed set of materials in numerous languages, but also serves as a preliminary tool to identify language families using a text-based similarity measure of bigram overlap ratio.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,733 |
inproceedings
|
armengol-estape-etal-2022-multilingual
|
On the Multilingual Capabilities of Very Large-Scale {E}nglish Language Models
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.327/
|
Armengol-Estap{\'e}, Jordi and de Gibert Bonet, Ona and Melero, Maite
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3056--3068
|
Generative Pre-trained Transformers (GPTs) have recently been scaled to unprecedented sizes in the history of machine learning. These models, solely trained on the language modeling objective, have been shown to exhibit outstanding zero, one, and few-shot learning capabilities in a number of different tasks. Nevertheless, aside from anecdotal experiences, little is known regarding their multilingual capabilities, given the fact that the pre-training corpus is almost entirely composed of English text. In this work, we investigate its potential and limits in three tasks: extractive question-answering, text summarization and natural language generation for five different languages, as well as the effect of scale in terms of model size. Our results show that GPT-3 can be almost as useful for many languages as it is for English, with room for improvement if optimization of the tokenization is addressed.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,734 |
inproceedings
|
karakanta-etal-2022-evaluating
|
Evaluating Subtitle Segmentation for End-to-end Generation Systems
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.328/
|
Karakanta, Alina and Buet, Fran{\c{c}}ois and Cettolo, Mauro and Yvon, Fran{\c{c}}ois
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3069--3078
|
Subtitles appear on screen as short pieces of text, segmented based on formal constraints (length) and syntactic/semantic criteria. Subtitle segmentation can be evaluated with sequence segmentation metrics against a human reference. However, standard segmentation metrics cannot be applied when systems generate outputs different than the reference, e.g. with end-to-end subtitling systems. In this paper, we study ways to conduct reference-based evaluations of segmentation accuracy irrespective of the textual content. We first conduct a systematic analysis of existing metrics for evaluating subtitle segmentation. We then introduce Sigma, a Subtitle Segmentation Score derived from an approximate upper-bound of BLEU on segmentation boundaries, which allows us to disentangle the effect of good segmentation from text quality. To compare Sigma with existing metrics, we further propose a boundary projection method from imperfect hypotheses to the true reference. Results show that all metrics are able to reward high quality output but for similar outputs system ranking depends on each metric's sensitivity to error type. Our thorough analyses suggest Sigma is a promising segmentation candidate but its reliability over other segmentation metrics remains to be validated through correlations with human judgements.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,735 |
inproceedings
|
rapp-2022-using
|
Using Semantic Role Labeling to Improve Neural Machine Translation
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.329/
|
Rapp, Reinhard
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3079--3083
|
Despite impressive progress in machine translation in recent years, it has occasionally been argued that current systems are still mainly based on pattern recognition and that further progress may be possible by using text understanding techniques, thereby e.g. looking at semantics of the type {\textquotedblleft}Who is doing what to whom?{\textquotedblright}. In the current research we aim to take a small step into this direction. Assuming that semantic role labeling (SRL) grasps some of the relevant semantics, we automatically annotate the source language side of a standard parallel corpus, namely Europarl, with semantic roles. We then train a neural machine translation (NMT) system using the annotated corpus on the source language side, and the original unannotated corpus on the target language side. New text to be translated is first annotated by the same SRL system and then fed into the translation system. We compare the results to those of a baseline NMT system trained with unannotated text on both sides and find that the SRL-based system yields small improvements in terms of BLEU scores for each of the four language pairs under investigation, involving English, French, German, Greek and Spanish.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,736 |
inproceedings
|
bandyopadhyay-etal-2022-deep
|
A Deep Transfer Learning Method for Cross-Lingual Natural Language Inference
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.330/
|
Bandyopadhyay, Dibyanayan and De, Arkadipta and Gain, Baban and Saikh, Tanik and Ekbal, Asif
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3084--3092
|
Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), has been one of the central tasks in Artificial Intelligence (AI) and Natural Language Processing (NLP). RTE between the two pieces of texts is a crucial problem, and it adds further challenges when involving two different languages, i.e., in the cross-lingual scenario. This paper proposes an effective transfer learning approach for cross-lingual NLI. We perform experiments on English-Hindi language pairs in the cross-lingual setting to find out that our novel loss formulation could enhance the performance of the baseline model by up to 2{\%}. To assess the effectiveness of our method further, we perform additional experiments on every possible language pair using four European languages, namely French, German, Bulgarian, and Turkish, on top of XNLI dataset. Evaluation results yield up to 10{\%} performance improvement over the respective baseline models, in some cases surpassing the state-of-the-art (SOTA). It is also to be noted that our proposed model has 110M parameters, which is much smaller than the SOTA model's 220M parameters. Finally, we argue that our transfer learning-based loss objective is model agnostic and thus can be used with other deep learning-based architectures for cross-lingual NLI.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,737 |
inproceedings
|
shardlow-alva-manchego-2022-simple
|
Simple {TICO}-19: A Dataset for Joint Translation and Simplification of {COVID}-19 Texts
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.331/
|
Shardlow, Matthew and Alva-Manchego, Fernando
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3093--3102
|
Specialist high-quality information is typically first available in English, and it is written in a language that may be difficult to understand by most readers. While Machine Translation technologies contribute to mitigate the first issue, the translated content will most likely still contain complex language. In order to investigate and address both problems simultaneously, we introduce Simple TICO-19, a new language resource containing manual simplifications of the English and Spanish portions of the TICO-19 corpus for Machine Translation of COVID-19 literature. We provide an in-depth description of the annotation process, which entailed designing an annotation manual and employing four annotators (two native English speakers and two native Spanish speakers) who simplified over 6,000 sentences from the English and Spanish portions of the TICO-19 corpus. We report several statistics on the new dataset, focusing on analysing the improvements in readability from the original texts to their simplified versions. In addition, we propose baseline methodologies for automatically generating the simplifications, translations and joint translation and simplifications contained in our dataset.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,738 |
inproceedings
|
adjali-etal-2022-building
|
Building Comparable Corpora for Assessing Multi-Word Term Alignment
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.332/
|
Adjali, Omar and Morin, Emmanuel and Zweigenbaum, Pierre
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3103--3112
|
Recent work has demonstrated the importance of dealing with Multi-Word Terms (MWTs) in several Natural Language Processing applications. In particular, MWTs pose serious challenges for alignment and machine translation systems because of their syntactic and semantic properties. Thus, developing algorithms that handle MWTs is becoming essential for many NLP tasks. However, the availability of bilingual and more generally multi-lingual resources is limited, especially for low-resourced languages and in specialized domains. In this paper, we propose an approach for building comparable corpora and bilingual term dictionaries that help evaluate bilingual term alignment in comparable corpora. To that aim, we exploit parallel corpora to perform automatic bilingual MWT extraction and comparable corpus construction. Parallel information helps to align bilingual MWTs and makes it easier to build comparable specialized sub-corpora. Experimental validation on an existing dataset and on manually annotated data shows the interest of the proposed methodology.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,739 |
inproceedings
|
solmundsdottir-etal-2022-mean
|
Mean Machine Translations: On Gender Bias in {I}celandic Machine Translations
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.333/
|
S{\'o}lmundsd{\'o}ttir, Agnes and Gu{\dh}mundsd{\'o}ttir, Dagbj{\"o}rt and Stef{\'a}nsd{\'o}ttir, Lilja Bj{\"o}rk and Ingason, Anton
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3113--3121
|
This paper examines machine bias in language technology. Machine bias can affect machine learning algorithms when language models trained on large corpora include biased human decisions or reflect historical or social inequities, e.g. regarding gender and race. The focus of the paper is on gender bias in machine translation and we discuss a study conducted on Icelandic translations in the translation systems Google Translate and V{\'e}l{\th}{\'y}{\dh}ing.is. The results show a pattern which corresponds to certain societal ideas about gender. For example it seems to depend on the meaning of adjectives referring to people whether they appear in the masculine or feminine form. Adjectives describing positive personality traits were more likely to appear in masculine gender whereas the negative ones frequently appear in feminine gender. However, the opposite applied to appearance related adjectives. These findings unequivocally demonstrate the importance of being vigilant towards technology so as not to maintain societal inequalities and outdated views {---} especially in today's digital world.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,740 |
inproceedings
|
enayet-sukthankar-2022-analysis
|
An Analysis of Dialogue Act Sequence Similarity Across Multiple Domains
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.334/
|
Enayet, Ayesha and Sukthankar, Gita
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3122--3130
|
This paper presents an analysis of how dialogue act sequences vary across different datasets in order to anticipate the potential degradation in the performance of learned models during domain adaptation. We hypothesize the following: 1) dialogue sequences from related domains will exhibit similar n-gram frequency distributions 2) this similarity can be expressed by measuring the average Hamming distance between subsequences drawn from different datasets. Our experiments confirm that when dialogue act sequences from two datasets are dissimilar, they lie further away in embedding space, making it possible to train a classifier to discriminate between them even when the datasets are corrupted with noise. We present results from eight different datasets: SwDA, AMI (DialSum), GitHub, Hate Speech, Teams, Diplomacy Betrayal, SAMsum, and Military (Army). Our datasets were collected from many types of human communication including strategic planning, informal discussion, and social media exchanges. Our methodology provides intuition on the generalizability of dialogue models trained on different datasets. Based on our analysis, it is problematic to assume that machine learning models trained on one type of discourse will generalize well to other settings, due to contextual differences.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,741 |
inproceedings
|
okahisa-etal-2022-constructing
|
Constructing a Culinary Interview Dialogue Corpus with Video Conferencing Tool
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.335/
|
Okahisa, Taro and Tanaka, Ribeka and Kodama, Takashi and Huang, Yin Jou and Kurohashi, Sadao
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3131--3139
|
Interview is an efficient way to elicit knowledge from experts of different domains. In this paper, we introduce CIDC, an interview dialogue corpus in the culinary domain in which interviewers play an active role to elicit culinary knowledge from the cooking expert. The corpus consists of 308 interview dialogues (each about 13 minutes in length), which add up to a total of 69,000 utterances. We use a video conferencing tool for data collection, which allows us to obtain the facial expressions of the interlocutors as well as the screen-sharing contents. To understand the impact of the interlocutors' skill level, we divide the experts into {\textquotedblleft}semi-professionals{\textquotedblright} and {\textquotedblleft}enthusiasts{\textquotedblright} and the interviewers into {\textquotedblleft}skilled interviewers{\textquotedblright} and {\textquotedblleft}unskilled interviewers.{\textquotedblright} For quantitative analysis, we report the statistics and the results of the post-interview questionnaire. We also conduct qualitative analysis on the collected interview dialogues and summarize the salient patterns of how interviewers elicit knowledge from the experts. The corpus serves the purpose to facilitate future research on the knowledge elicitation mechanism in interview dialogues.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,742 |
inproceedings
|
yusupujiang-ginzburg-2022-ugchdial
|
{U}g{C}h{D}ial: A {U}yghur Chat-based Dialogue Corpus for Response Space Classification
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.336/
|
Yusupujiang, Zulipiye and Ginzburg, Jonathan
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3140--3149
|
In this paper, we introduce a carefully designed and collected language resource: UgChDial {--} a Uyghur dialogue corpus based on a chatroom environment. The Uyghur Chat-based Dialogue Corpus (UgChDial) is divided into two parts: (1) two-party dialogues and (2) multi-party dialogues. We ran a series of 25 two-party chat sessions, 120 minutes each, totaling 7323 turns and 1581 question-response pairs. We created 16 different scenarios and topics to gather these two-party conversations. The multi-party conversations were compiled from chitchats in general channels as well as free chats in topic-oriented public channels, yielding 5588 unique turns and 838 question-response pairs. The initial purpose of this corpus is to study query-response pairs in Uyghur, building on an existing fine-grained response space taxonomy for English. We provide here initial annotation results on the Uyghur response space classification task using UgChDial.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,743 |
inproceedings
|
sudo-etal-2022-speculative
|
A Speculative and Tentative Common Ground Handling for Efficient Composition of Uncertain Dialogue
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.337/
|
Sudo, Saki and Asano, Kyoshiro and Mitsuda, Koh and Higashinaka, Ryuichiro and Takeuchi, Yugo
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3150--3157
|
This study investigates how the grounding process is composed and explores new interaction approaches that adapt to human cognitive processes that have not yet been significantly studied. The results of an experiment indicate that grounding through dialogue is mutually accepted among participants through holistic expressions and suggest that common ground among participants may not necessarily be formed in a bottom-up way through analytic expressions. These findings raise the possibility of a promising new approach to creating a human-like dialogue system that may be more suitable for natural human communication.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,744 |
inproceedings
|
aguirre-etal-2022-basco
|
{B}a{SC}o: An Annotated {B}asque-{S}panish Code-Switching Corpus for Natural Language Understanding
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.338/
|
Aguirre, Maia and Garc{\'i}a-Sardi{\~n}a, Laura and Serras, Manex and M{\'e}ndez, Ariane and L{\'o}pez, Jacobo
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3158--3163
|
The main objective of this work is the elaboration and public release of BaSCo, the first corpus with annotated linguistic resources encompassing Basque-Spanish code-switching. The mixture of Basque and Spanish languages within the same utterance is popularly referred to as Euska{\~n}ol, a widespread phenomenon among bilingual speakers in the Basque Country. Thus, this corpus has been created to meet the demand of annotated linguistic resources in Euska{\~n}ol in research areas such as multilingual dialogue systems. The presented resource is the result of translating to Euska{\~n}ol a compilation of texts in Basque and Spanish that were used for training the Natural Language Understanding (NLU) models of several task-oriented bilingual chatbots. Those chatbots were meant to answer specific questions associated with the administration, fiscal, and transport domains. In addition, they had the transverse potential to answer to greetings, requests for help, and chit-chat questions asked to chatbots. BaSCo is a compendium of 1377 tagged utterances with every sample annotated at three levels: (i) NLU semantic labels, considering intents and entities, (ii) code-switching proportion, and (iii) domain of origin.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,745 |
inproceedings
|
kraus-etal-2022-prodial
|
{P}ro{D}ial {--} An Annotated Proactive Dialogue Act Corpus for Conversational Assistants using Crowdsourcing
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
European Language Resources Association
|
https://aclanthology.org/2022.lrec-1.339/
|
Kraus, Matthias and Wagner, Nicolas and Minker, Wolfgang
|
Proceedings of the Thirteenth Language Resources and Evaluation Conference
|
3164--3173
|
Robots will eventually enter our daily lives and assist with a variety of tasks. Especially in the household domain, robots may become indispensable helpers by taking over tedious tasks, e.g. keeping the place tidy. Their effectiveness and efficiency, however, depend on their ability to adapt to our needs, routines, and personal characteristics. Otherwise, they may not be accepted and trusted in our private domain. For enabling adaptation, the interaction between a human and a robot needs to be personalized. Therefore, the robot needs to collect personal information from the user. However, it is unclear how such sensitive data can be collected in an understandable way without losing a user's trust in the system. In this paper, we present a conversational approach for explicitly collecting personal user information using natural dialogue. For creating a sound interactive personalization, we have developed an empathy-augmented dialogue strategy. In an online study, the empathy-augmented strategy was compared to a baseline dialogue strategy for interactive personalization. We have found the empathy-augmented strategy to perform notably friendlier. Overall, using dialogue for interactive personalization has generally shown positive user reception.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,746 |
inproceedings
|
nedoluzhko-etal-2022-elitr
|
{ELITR} Minuting Corpus: A Novel Dataset for Automatic Minuting from Multi-Party Meetings in {E}nglish and {C}zech
|
Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios
|
jun
|
2022
|
Marseille, France
|
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.340/",
    author = "Nedoluzhko, Anna and Singh, Muskaan and Hled{\'i}kov{\'a}, Marie and Ghosal, Tirthankar and Bojar, Ond{\v{r}}ej",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    pages = "3174--3182",
    abstract = "Taking minutes is an essential component of every meeting, although the goals, style, and procedure of this activity ({\textquotedblleft}minuting{\textquotedblright} for short) can vary. Minuting is a rather unstructured writing activity and is affected by who is taking the minutes and for whom the intended minutes are. With the rise of online meetings, automatic minuting would be an important benefit for the meeting participants as well as for those who might have missed the meeting. However, automatically generating meeting minutes is a challenging problem due to a variety of factors including the quality of automatic speech recorders (ASRs), availability of public meeting data, subjective knowledge of the minuter, etc. In this work, we present the first of its kind dataset on \textit{Automatic Minuting}. We develop a dataset of English and Czech technical project meetings which consists of transcripts generated from ASRs, manually corrected, and minuted by several annotators. Our dataset, AutoMin, consists of 113 (English) and 53 (Czech) meetings, covering more than 160 hours of meeting content. Upon acceptance, we will publicly release (aaa.bbb.ccc) the dataset as a set of meeting transcripts and minutes, excluding the recordings for privacy reasons. A unique feature of our dataset is that most meetings are equipped with more than one minute, each created independently. Our corpus thus allows studying differences in what people find important while taking the minutes. We also provide baseline experiments for the community to explore this novel problem further. To the best of our knowledge \textbf{AutoMin} is probably the first resource on minuting in English and also in a language other than English (Czech).",
}
@inproceedings{fraser-etal-2022-extracting,
    title = "Extracting Age-Related Stereotypes from Social Media Texts",
    author = "Fraser, Kathleen C. and Kiritchenko, Svetlana and Nejadgholi, Isar",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.341/",
    pages = "3183--3194",
    abstract = "Age-related stereotypes are pervasive in our society, and yet have been under-studied in the NLP community. Here, we present a method for extracting age-related stereotypes from Twitter data, generating a corpus of 300,000 over-generalizations about four contemporary generations (baby boomers, generation X, millennials, and generation Z), as well as {\textquotedblleft}old{\textquotedblright} and {\textquotedblleft}young{\textquotedblright} people more generally. By employing word-association metrics, semi-supervised topic modelling, and density-based clustering, we uncover many common stereotypes as reported in the media and in the psychological literature, as well as some more novel findings. We also observe trends consistent with the existing literature, namely that definitions of {\textquotedblleft}young{\textquotedblright} and {\textquotedblleft}old{\textquotedblright} age appear to be context-dependent, stereotypes for different generations vary across different topics (e.g., work versus family life), and some age-based stereotypes are distinct from generational stereotypes. The method easily extends to other social group labels, and therefore can be used in future work to study stereotypes of different social categories. By better understanding how stereotypes are formed and spread, and by tracking emerging stereotypes, we hope to eventually develop mitigating measures against such biased statements.",
}
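The abstract above names word-association metrics as the first step of the stereotype-extraction pipeline. As a minimal, hypothetical sketch of that idea (not the paper's actual implementation), the snippet below computes pointwise mutual information (PMI) between a generation label and co-occurring words over a toy set of tweets; the corpus and label set here are invented for illustration.

```python
from collections import Counter
from math import log2

# Toy corpus; the real method runs over large-scale Twitter data.
tweets = [
    "millennials love avocado toast",
    "millennials cannot afford houses",
    "boomers love golf",
    "boomers cannot use phones",
]
labels = {"millennials", "boomers"}

def pmi_scores(tweets, label):
    """PMI(label, w) = log2( P(label, w) / (P(label) * P(w)) ),
    with probabilities estimated as document frequencies."""
    n = len(tweets)
    word_counts = Counter()
    joint_counts = Counter()
    label_count = 0
    for t in tweets:
        words = set(t.split())
        has_label = label in words
        label_count += has_label
        for w in words - labels:          # score only non-label words
            word_counts[w] += 1
            if has_label:
                joint_counts[w] += 1
    return {
        w: log2((joint / n) / ((label_count / n) * (word_counts[w] / n)))
        for w, joint in joint_counts.items()
    }

scores = pmi_scores(tweets, "millennials")
# "avocado" occurs only with "millennials": PMI = log2((1/4)/((2/4)*(1/4))) = 1.0
# "love" occurs with both labels:          PMI = log2((1/4)/((2/4)*(2/4))) = 0.0
```

High-PMI words are candidate stereotype descriptors; the paper then refines such associations with topic modelling and clustering.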
@inproceedings{alvarez-mellado-lignos-2022-borrowing,
    title = "Borrowing or Codeswitching? Annotating for Finer-Grained Distinctions in Language Mixing",
    author = "Alvarez-Mellado, Elena and Lignos, Constantine",
    editor = "Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.342/",
    pages = "3195--3201",
    abstract = "We present a new corpus of Twitter data annotated for codeswitching and borrowing between Spanish and English. The corpus contains 9,500 tweets annotated at the token level with codeswitches, borrowings, and named entities. This corpus differs from prior corpora of codeswitching in that we attempt to clearly define and annotate the boundary between codeswitching and borrowing and do not treat common {\textquotedblleft}internet-speak{\textquotedblright} (lol, etc.) as codeswitching when used in an otherwise monolingual context. The result is a corpus that enables the study and modeling of Spanish-English borrowing and codeswitching on Twitter in one dataset. We present baseline scores for modeling the labels of this corpus using Transformer-based language models. The annotation itself is released with a CC BY 4.0 license, while the text it applies to is distributed in compliance with the Twitter terms of service.",
}
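The entry above describes token-level annotation that separates codeswitches, borrowings, and named entities. As a hypothetical sketch of what such annotations might look like and how simple label statistics could be computed, the snippet below uses invented example data; the tag names ("CS", "BORROW", "NE", "O") are illustrative and are not the corpus's actual tag set.

```python
from collections import Counter

# One invented Spanish-English tweet, annotated token by token:
# "show" is an established borrowing, "Bad Bunny" a named entity,
# and the English clause at the end is a codeswitch.
tweet = [
    ("Vimos", "O"), ("el", "O"), ("show", "BORROW"),
    ("de", "O"), ("Bad", "NE"), ("Bunny", "NE"),
    ("and", "CS"), ("it", "CS"), ("was", "CS"), ("amazing", "CS"),
]

# Per-label token counts and the fraction of codeswitched tokens.
label_counts = Counter(label for _, label in tweet)
cs_fraction = label_counts["CS"] / len(tweet)
```

Keeping borrowings ("show") distinct from codeswitches is exactly the finer-grained boundary the corpus is designed to capture; a collapsed annotation scheme would merge both into one language-mixing label.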