| id (string) | year (int64) | authors (string) | title (string) | language (string) | num_authors (int64) | num_women_authors (int64) | woman_as_first_author (int64) | affiliations (string) | num_affilitations (int64) | at_least_one_international_affiliation (int64) | international_affilitation_only (int64) | international_authors (int64) | names_international_authors (string) | num_compay_authors (int64) | names_company_authors (string) | countries_affilitations (string) | cities_affiliations (string) | abstract (string) | introduction (string) | conclusion (string) | num_topics (int64) | topics (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1_2014 | 2014 | Azad Abad, Alessandro Moschitti | Creating a standard for evaluating Distant Supervision for Relation Extraction | ENG | 2 | 0 | 0 | Università di Trento, Qatar Computing Research Institute | 2 | 1 | 0 | 1 | Alessandro Moschitti | 0 | 0 | Italy, Qatar | Trento, Rome, Ar-Rayyan |
This paper defines a standard for comparing relation extraction (RE) systems based on Distant Supervision (DS). We integrate the well-known New York Times corpus with the more recent version of Freebase. Then, we define a simpler RE system based on DS, which exploits SVMs, tree kernels and a simple one-vs-all strategy. The resulting model can be used as a baseline for system comparison. We also study several example filtering techniques for improving the quality of the DS output.
|
Currently, supervised learning approaches are widely used to train relation extractors. However, manually providing large-scale human-labeled training data is costly in terms of resources and time. Besides, (i) a small-size corpus can only contain few relation types and (ii) the resulting trained model is domain-dependent. Distant Supervision (DS) is an alternative approach to overcome the problem of data annotation (Craven et al., 1999), as it can automatically generate training data by combining (i) a structured Knowledge Base (KB), e.g., Freebase, with a large-scale unlabeled corpus, C. The basic idea is: given a tuple r<e1,e2> contained in a referring KB, if both e1 and e2 appear in a sentence of C, that sentence is assumed to express the relation type r, i.e., it is considered a training sentence for r. For example, given the KB relation president(Obama,USA), the following sentence, Obama has been elected in the USA presidential campaign, can be used as a positive training example for president(x,y). However, DS suffers from two major drawbacks: first, in early studies, Mintz et al. (2009) assumed that two entity mentions cannot be in a relation with two different relation types r1 and r2. In contrast, Hoffmann et al. (2011) showed that 18.3% of the entities in Freebase that also occur in the New York Times 2007 corpus (NYT) overlap with more than one relation type. Second, although the DS method has shown some promising results, its accuracy suffers from noisy training data caused by two types of problems (Hoffmann et al., 2011; Intxaurrondo et al., 2013; Riedel et al., 2010): (i) a possible mismatch between the sentence semantics and the relation type mapped onto it, e.g., the correct KB relation, located in(Renzi,Rome), cannot be mapped into the sentence, Renzi does not love the Rome soccer team; and (ii) the coverage of the KB, e.g., a sentence can express relations that are not in the KB (this generates false negatives). Several approaches for selecting higher quality training sentences with DS have been studied, but comparing such methods is difficult due to the lack of well-defined benchmarks and models using DS. In this paper, we aim at building a standard to compare models based on DS: first of all, we considered the most used corpus in DS, i.e., the combination of NYT and Freebase (NYT-FB). Secondly, we mapped the Freebase entity IDs used in NYT-FB from the old version of 2007 to the newer Freebase 2014. Since entities changed, we asked an annotator to manually tag the entity mentions in the sentences. As a result, we created a new dataset usable as a stand-alone DS corpus, which we make available for research purposes. Finally, all the few RE models experimented with NYT-FB in the past are based on complex conditional random fields. This is necessary to encode the dependencies between the overlapping relations. Additionally, such models use very particular and sparse features, which make the replicability of the models and results complex, thus limiting research progress in DS. Indeed, for comparing a new DS approach with previous work using NYT-FB, the researcher is forced to re-implement a very complicated model and its sparse features. Therefore, we believe that simpler models can be very useful as (i) a much simpler reimplementation would enable model comparisons and (ii) it would be easier to verify whether a DS method is better than another.
In this perspective, our proposed approach is based on convolution tree kernels, which can easily exploit syntactic/semantic structures. This is an important aspect to favor replicability of our results. Moreover, our method differs from the previous state of the art on overlapping relations (Riedel et al., 2010) as we apply a modification of the simple one-vs-all strategy, instead of complex graphical models. To make our approach competitive, we studied several parameters for optimizing SVMs and filtering out noisy negative training examples. Our extensive experiments show that our models achieve satisfactory results.
|
We have proposed a standard framework, simple RE models and an upgraded version of NYT-FB for more easily measuring progress in DS research. Our RE model is based on SVMs; it can manage overlapping relations and exploits syntactic information and lexical features thanks to tree kernels. Additionally, we have shown that filtering techniques applied to DS data can discard noisy examples and significantly improve the RE accuracy.
| 7 | Lexical and Semantic Resources and Analysis |
| 2_2014 | 2014 | Paolo Annesi, Danilo Croce, Roberto Basili | Towards Compositional Tree Kernels | ENG | 3 | 0 | 0 | Università di Roma Tor Vergata | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Rome |
Several textual inference tasks rely on kernel-based learning. In particular, Tree Kernels (TKs) proved to be suitable for modeling the syntactic and semantic similarity between linguistic instances. In order to generalize the meaning of linguistic phrases, Distributional Compositional Semantics (DCS) methods have been defined to compositionally combine the meaning of words in semantic spaces. However, TKs still do not account for compositionality. A novel kernel, the Compositional Tree Kernel, is presented, integrating DCS operators into the TK estimation. The evaluation over Question Classification and Metaphor Detection shows the contribution of semantic composition w.r.t. traditional TKs.
|
Tree Kernels (TKs) (Collins and Duffy, 2001) are consolidated similarity functions used in NLP for their ability to capture syntactic information directly from parse trees; they are used to solve complex tasks such as Question Answering (Moschitti et al., 2007) or Semantic Textual Similarity (Croce et al., 2012). The similarity between parse tree structures is defined in terms of all possible syntagmatic substructures. Recently, the Smoothed Partial Tree Kernel (SPTK) has been defined in (Croce et al., 2011): the semantic information of the lexical nodes in a parse tree enables a smoothed similarity between structures which are partially similar and whose nodes can differ but are nevertheless related. Semantic similarity between words is evaluated in terms of vector similarity in a Distributional Semantic Space (Sahlgren, 2006; Turney and Pantel, 2010; Baroni and Lenci, 2010). Even if it achieves higher performance w.r.t. traditional TKs, the main limitations of SPTK are that the discrimination between words is delegated only to the lexical nodes and that the semantic composition of words is not considered. We investigate a kernel function that exploits semantic compositionality to measure the similarity between syntactic structures. In our perspective, the semantic information should be emphasized by compositionally propagating lexical information over an entire parse tree, making explicit the head/modifier relationships between words. This enables the application, within the TK computation, of Distributional Compositional Semantics (DCS) metrics, which combine lexical representations by vector operators in the distributional space (Mitchell and Lapata, 2008; Erk and Pado, 2008; Zanzotto et al., 2010; Baroni and Lenci, 2010; Grefenstette and Sadrzadeh, 2011; Blacoe and Lapata, 2012; Annesi et al., 2012). The idea is to (i) define a procedure to mark the nodes of a parse tree that allows lexical bigrams to be spread across the tree nodes, (ii) apply DCS smoothing metrics between such compositional nodes, and (iii) enrich the SPTK formulation with compositional distributional semantics. The resulting model has been called Compositional Smoothed Partial Tree Kernel (CSPTK). The entire process of marking parse trees is described in Section 2. Then, in Section 3, the CSPTK is presented. Finally, in Section 4, the evaluations over the Question Classification and Metaphor Detection tasks are shown.
|
In this paper, a novel kernel function has been proposed in order to exploit Distributional Compositional operators within Tree Kernels. The proposed approach propagates lexical semantic information over an entire tree by building a Compositionally labeled Tree. The resulting Compositional Smoothed Partial Tree Kernel measures the semantic similarity between complex linguistic structures by applying metrics sensitive to distributional compositional semantics. Empirical results on the Question Classification and Metaphor Detection tasks demonstrate the positive contribution of compositional information to the generalization capability of the proposed kernel.
| 22 | Distributional Semantics |
| 3_2014 | 2014 | Zhenisbek Assylbekov, Assulan Nurkas | Initial Explorations in Kazakh to English Statistical Machine Translation | ENG | 2 | 0 | 0 | Nazarbayev University | 1 | 1 | 1 | 2 | Zhenisbek Assylbekov, Assulan Nurkas | 0 | 0 | Kazakhstan | Astana |
This paper presents preliminary results of developing a statistical machine translation system from Kazakh to English. Starting with a baseline model trained on 1.3K and then on 20K aligned sentences, we tried to cope with the complex morphology of Kazakh by applying different schemes of morphological word segmentation to the training and test data. Morphological segmentation appears to benefit our system: our best segmentation scheme achieved a 28% reduction of out-of-vocabulary rate and 2.7 point BLEU improvement above the baseline.
|
The availability of considerable amounts of parallel texts in Kazakh and English has motivated us to apply the statistical machine translation (SMT) paradigm to building a Kazakh-to-English machine translation system using publicly available data and open-source tools. The main ideas of SMT were introduced by researchers at IBM’s Thomas J. Watson Research Center (Brown et al., 1993). This paradigm implies that translations are generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. We show how one can compile a Kazakh-English parallel corpus from publicly available resources in Section 2. It is well known that challenges arise in statistical machine translation when dealing with languages with complex morphology, e.g. Kazakh. Recently, however, there have been attempts to tackle such challenges for similar languages by morphological pre-processing of the source text (Bisazza and Federico, 2009; Habash and Sadat, 2006; Mermer, 2010). We apply morphological preprocessing techniques to the Kazakh side of our corpus and show how they improve translation performance in Sections 5 and 6.
|
The experiments have shown that selective morphological segmentation improves the performance of an SMT system. One can see that, in contrast to Bisazza and Federico’s results (2009), in our case MS11 downgrades the translation performance. One of the reasons for this might be that Bisazza and Federico considered translation of spoken language, in which sentences were shorter on average than in our corpora. In this work we mainly focused on nominal suffixation. In our future work we are planning to: increase the dictionary of the morphological transducer – currently it covers 93.3% of our larger corpus; improve morphological disambiguation using e.g. the perceptron algorithm (Sak et al., 2007); develop more segmentation rules for verbs and other parts of speech; and mine more mono- and bilingual data using official websites of Kazakhstan’s public authorities.
| 10 | Machine Translation |
| 4_2014 | 2014 | Giuseppe Attardi, Vittoria Cozza, Daniele Sartiano | Adapting Linguistic Tools for the Analysis of Italian Medical Records | ENG | 3 | 1 | 0 | Università di Pisa | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Pisa |
We address the problem of recognition of medical entities in clinical records written in Italian. We report on experiments performed on medical data in English provided in the shared tasks at CLEF-ER 2013 and SemEval 2014. This allowed us to refine Named Entity recognition techniques to deal with the specifics of medical and clinical language in particular. We present two approaches for transferring the techniques to Italian. One solution relies on the creation of an Italian corpus of annotated clinical records and the other on adapting existing linguistic tools to the medical domain.
|
One of the objectives of the RIS project (RIS 2014) is to develop tools and techniques to help identify patients at risk of their disease evolving into a chronic condition. The study relies on a sample of patient data consisting of both medical test reports and clinical records. We are interested in verifying whether text analytics, i.e. information extracted from natural language texts, can supplement or improve information extracted from the more structured data available in the medical test records. Clinical records are expressed as plain text in natural language and contain mentions of diseases or symptoms affecting a patient, whose accurate identification is crucial for any further text mining process. Our task in the project is to provide a set of NLP tools for automatically extracting information from medical reports in Italian. We are facing the double challenge of adapting NLP tools to the medical domain and of handling documents in a language (Italian) for which there are few available linguistic resources. Our approach to information extraction exploits both supervised machine-learning tools, which require annotated training corpora, and unsupervised deep learning techniques, in order to leverage unlabeled data. To deal with the lack of annotated Italian resources for the bio-medical domain, we attempted to create a silver corpus with a semi-automatic approach that uses both machine translation and dictionary-based techniques. The corpus will be validated through crowdsourcing.
|
We presented a series of experiments on biomedical texts from both medical literature and clinical records, in multiple languages, that helped us refine the techniques of NE recognition and adapt them to Italian. We explored supervised techniques as well as unsupervised ones, in the form of word embeddings or word clusters. We also developed a Deep Learning NE tagger that exploits embeddings. The best results were achieved by a MEMM sequence labeler that uses clusters as features, further improved in an ensemble combination with other NE taggers. As a further contribution of our work, we produced, by exploiting semi-automated techniques, an Italian corpus of medical records annotated with mentions of medical terms.
| 20 | In-domain IR and IE |
| 5_2014 | 2014 | Alessia Barbagli, Pietro Lucisano, Felice Dell'Orletta, Simonetta Montemagni | Tecnologie del linguaggio e monitoraggio dell'evoluzione delle abilità di scrittura nella scuola secondaria di primo grado | ITA | 5 | 2 | 1 | Sapienza Università di Roma, CNR-ILC | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Rome, Pisa |
The last decade has seen the increased international use of language technologies for the study of learning processes. This contribution, which is placed within a wider research project in experimental pedagogy, reports the first and promising results of a study aimed at monitoring the evolution of the Italian language learning process, conducted on the written production of students with tools for automatic linguistic annotation and knowledge extraction.
|
The use of language technologies for the study of learning processes and, in more applied terms, the construction of so-called Intelligent Computer-Assisted Language Learning (ICALL) systems is more and more at the center of interdisciplinary research that aims to show how methods and tools for automatic linguistic annotation and knowledge extraction are now mature enough to be used also in the educational and school context. At the international level, this is demonstrated by the success of the Workshop on Innovative Use of NLP for Building Educational Applications (BEA). This contribution is placed in this research context, reporting the first results of a study still ongoing, aimed at describing, by quantitative and qualitative means, the evolution of writing skills, both at the level of text content and of linguistic skills, from the first to the second year of lower secondary school. It is an exploratory work aimed at building an empirical analysis model that allows the observation of the processes and products of the teaching of written production. The innovative character of this research in the national and international landscape lies at various levels. The research described here represents the first study aimed at monitoring the evolution of the Italian language learning process conducted on the written production of students and with tools for automatic linguistic annotation and knowledge extraction. The use of language technologies for monitoring the evolution of learners' language skills has its roots in a branch of studies launched internationally in the last decade, within which the linguistic analyses generated by automatic language processing tools are used, for example, to: monitor the development of syntax in child language (Sagae et al., 2005; Lu, 2007); identify cognitive deficits through measures of syntactic complexity (Roark et al., 2007) or of semantic association (Rouhizadeh et al., 2013); and monitor reading ability as a central component of language competence (Schwarm and Ostendorf, 2005; Petersen, 2009). Moving from this branch of research, Dell'Orletta and Montemagni (2012) and Dell'Orletta et al. (2011) showed in two feasibility studies that computational linguistic technologies can play a central role in evaluating the Italian language skills of students in school and in tracing their evolution over time. This contribution represents an original and innovative development of this research line, as the proposed linguistic monitoring methodology is used within a wider study of experimental pedagogy, based on a significant corpus of students' written productions and aimed at tracing the evolution of skills from a diachronic and/or socio-cultural perspective. The subject of the analyses represents another element of novelty: the first two years of lower secondary school were chosen as the school level to be analyzed because it has been little investigated by empirical research and because few studies so far have verified the actual teaching practice derived from the indications provided by the ministerial programs relating to this school cycle, from 1979 to the National Indications of 2012.
|
The comparative monitoring of the linguistic characteristics traced in the corpus of common tests carried out in the first and second year was aimed at tracing the evolution of the students' language skills over the two years. The ANOVA on the common tests shows that there are significant differences between the first and second year at all levels of linguistic analysis considered. For example, with respect to the 'base' features, it turns out that the variation in the average number of tokens per sentence between the tests of the two years is significant: while the tests written in the first year contain sentences with an average length of 23.82 tokens, the average sentence length in the second-year tests is 20.71 tokens. Also significant is the variation in the use of entries belonging to the basic vocabulary (VdB), which decreases from 83% of the vocabulary in the first-year tests to 79% in the second year, as is the variation of the TTR values (computed on the first 100 tokens), which increase from 0.66 to 0.69. In both cases, such changes can be seen as the result of lexical enrichment. At the morpho-syntactic level, the features that capture the use of verbal tenses and moods are significant. At the level of syntactic monitoring, it is, for example, the use of the object complement in pre- or post-verbal position that varies significantly: if in the first-year tests 19% of the object complements are in pre-verbal position, in the second year the percentage decreases to 13%, while the post-verbal complements, which are 81% in the first year, increase to 87% in the second year. In the second-year tests, therefore, a greater respect for the canonical subject-verb-object order is observed, closer to the norms of written than of spoken language. Although the results are still preliminary with respect to the wider research context, we believe they clearly show the potential of the meeting between computational linguistics and educational linguistics, opening new research prospects. Current activities include the analysis of the correlation between the evidence acquired through linguistic monitoring and process and background variables, as well as the study of the evolution of the language skills of individual students.
| 8 | Learner Corpora and Language Acquisition |
| 6_2014 | 2014 | Francesco Barbieri, Francesco Ronzano, Horacio Saggion | Italian Irony Detection in Twitter: a First Approach | ENG | 3 | 0 | 0 | Universitat Pompeu Fabra | 1 | 1 | 1 | 3 | Francesco Barbieri, Francesco Ronzano, Horacio Saggion | 0 | 0 | Spain | Barcelona |
Irony is a linguistic device used to say something but meaning something else. The distinctive trait of ironic utterances is the opposition of literal and intended meaning. This characteristic makes the automatic recognition of irony a challenging task for current systems. In this paper we present and evaluate the first automated system targeted to detect irony in Italian Tweets, introducing and exploiting a set of linguistic features useful for this task.
|
Sentiment Analysis is the interpretation of attitudes and opinions of subjects on certain topics. With the growth of social networks, Sentiment Analysis has become fundamental for customer reviews, opinion mining, and natural language user interfaces (Yasavur et al., 2014). During the last decade the number of investigations dealing with sentiment analysis has considerably increased, most of the time targeting the English language. Comparatively, and to the best of our knowledge, there are only a few works for the Italian language. In this paper we explore an important sentiment analysis problem: irony detection. Irony is a linguistic device used to say something when meaning something else (Quintilien and Butler, 1953). Dealing with figurative language is one of the biggest challenges in correctly determining the polarity of a text: analysing phrases where literal and intended meaning are not the same is hard for humans, hence even harder for machines. Moreover, systems able to detect irony can also benefit other AI areas like Human-Computer Interaction. Approaches to detect irony have already been proposed for English, Portuguese and Dutch texts (see Section 2). Some of these systems used words, or word patterns, as irony detection features (Davidov et al., 2010; González-Ibáñez et al., 2011; Reyes et al., 2013; Buschmeier et al., 2014). Other approaches, like Barbieri and Saggion (2014a), exploited lexical and semantic features of single words, like their frequency in reference corpora or the number of associated synsets. Relying on the latter method, in this paper we present the first system for automatic detection of irony in Italian Tweets. In particular, we investigate the effectiveness of Decision Trees in classifying Tweets as ironic or not ironic, showing that classification performance increases by considering lexical and semantic features of single words instead of pure bag-of-words (BOW) approaches. To train our system, we exploited as ironic examples the Tweets from the account of a famous collective blog named Spinoza, and as non-ironic examples the Tweets retrieved from the timelines of seven popular Italian newspapers.
|
In this study we evaluate a novel system to detect irony in Italian, focusing on Tweets. We tackle this problem as binary classification, where the ironic examples are posts of the Twitter account Spinoza and the non-ironic examples are Tweets from seven popular Italian newspapers. We evaluated the effectiveness of Decision Trees with different feature sets to carry out this classification task. Our system focuses only on the lexical and semantic information that characterises each word, rather than on the words themselves as features. The performance of the system is good compared to our baseline (BOW), which considers only word occurrences as features, since we obtain an F1 improvement of 0.11. This result shows the suitability of our approach to detect ironic Italian Tweets. However, there is room to enrich and tune the model, as this is only a first approach. It is possible both to improve the model with new features (for example related to punctuation or language models) and to evaluate the system on new and extended corpora of Italian Tweets as they become available. Another issue we faced is the lack of accurate evaluations of feature performance across distinct classifiers/algorithms for irony detection.
| 6 | Sentiment, Emotion, Irony, Hate |
| 7_2014 | 2014 | Gianni Barlacchi, Massimo Nicosia, Alessandro Moschitti | A Retrieval Model for Automatic Resolution of Crossword Puzzles in Italian Language | ENG | 3 | 0 | 0 | Università di Trento, Qatar Computing Research Institute | 2 | 1 | 0 | 2 | Massimo Nicosia, Alessandro Moschitti | 0 | 0 | Italy, Qatar | Trento, Rome, Ar-Rayyan |
In this paper we study methods for improving the quality of automatic extraction of answer candidates for an extremely challenging task: the automatic resolution of crossword puzzles for the Italian language. Many automatic crossword puzzle solvers are based on database systems accessing previously resolved crossword puzzles. Our approach consists in querying the database (DB) with a search engine and converting its output into a probability score, which combines in a single scoring model, i.e., a logistic regression model, both the search engine score and statistical similarity features. This improved retrieval model greatly impacts the resolution accuracy of crossword puzzles.
|
Crossword Puzzles (CPs) are probably one of the most popular language games. Automatic CP solvers have been mainly targeted by the artificial intelligence (AI) community, which has mostly focused on AI techniques for filling the puzzle grid, given a set of answer candidates for each clue. The basic idea is to optimize the overall probability of correctly filling the entire grid by exploiting the likelihood of each candidate answer, fulfilling at the same time the grid constraints. After several failures in approaching human expert performance, it has become clear that designing more accurate solvers would not have provided a winning system. In contrast, the Precision and Recall of the answer candidates are obviously a key factor: very high values for these performance measures would enable the solver to quickly find the correct solution. Similarly to the Jeopardy! challenge case (Ferrucci et al., 2010), the solution relies on Question Answering (QA) research. However, although some CP clues are rather similar to standard questions, there are some specific differences: (i) clues can be in interrogative form or not, e.g., Capitale d’Italia: Roma; (ii) they can contain riddles or be deliberately ambiguous and misleading (e.g., Se fugge sono guai: gas); (iii) the exact length of the answer keyword is known in advance; and (iv) the confidence in the answers is an extremely important input for the CP solver. There have been many attempts to build automatic CP solving systems. Their goal is to outperform human players in solving crosswords more accurately and in less time. Proverb (Littman et al., 2002) was the first system for the automatic resolution of CPs. It includes several modules for generating lists of candidate answers. These lists are merged and used to solve a Probabilistic Constraint Satisfaction Problem. Proverb relies on a very large crossword database as well as several expert modules, each of them mainly based on domain-specific databases (e.g., movies, writers and geography). WebCrow (Ernandes et al., 2005) is based on Proverb. In addition to what its predecessor does, WebCrow carries out basic linguistic analysis such as Part-Of-Speech tagging and lemmatization. It takes advantage of semantic relations contained in WordNet, dictionaries and gazetteers. Its Web module is constituted by a search engine, which can retrieve text snippets or documents related to the clue. WebCrow uses a WA* algorithm (Pohl, 1970) for Probabilistic-Constraint Satisfaction Problems, adapted for CP resolution. To the best of our knowledge, the state-of-the-art system for automatic CP solving is Dr. Fill (Ginsberg, 2011). It targets the crossword filling task with a Weighted-Constraint Satisfaction Problem. Constraint violations are weighted and can be tolerated. It heavily relies on huge databases of clues. All of these systems query the DB of previously solved CP clues using standard techniques, e.g., SQL full-text queries. The DB is a very rich and important knowledge base. In order to improve the quality of the automatic extraction of answer candidate lists from the DB, we provide a completely novel solution for the Italian language, substituting the DB and the SQL function with a search engine for retrieving clues similar to the target one. In particular, we define a reranking function for the retrieved clues based on a logistic regression model (LRM), which combines the search engine score with other similarity features.
To carry out our study, we created a clue similarity dataset for the Italian language. This dataset constitutes an interesting resource that we made available to the research community.
|
In this paper we improve the answer extraction from the DB for automatic CP resolution. We combined the state-of-the-art BM25 retrieval model and an LRM by converting the BM25 score into a probability score for each answer candidate. For our study and to test our methods, we created a corpus for clue similarity containing clues in Italian. We improve on the lists generated by WebCrow by 8.5 absolute percentage points in MRR. However, the end-to-end CP resolution test does not show a large improvement, as the percentage of retrieved clues is not high enough.
| 1 | Language Models |
| 8_2014 | 2014 | Pierpaolo Basile, Annalina Caputo, Giovanni Semeraro | Analysing Word Meaning over Time by Exploiting Temporal Random Indexing | ENG | 3 | 1 | 0 | Università di Bari Aldo Moro | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Bari |
This paper proposes an approach to the construction of WordSpaces which takes into account temporal information. The proposed method is able to build a geometrical space considering several periods of time. This methodology enables the analysis of the time evolution of the meaning of a word. Exploiting this approach, we build a framework, called Temporal Random Indexing (TRI), that provides all the necessary tools for building WordSpaces and performing such linguistic analysis. We propose some examples of usage of our tool by analysing word meanings in two corpora: a collection of Italian books and English scientific papers about computational linguistics.
|
The analysis of word-usage statistics over huge corpora has become a common technique in many corpus linguistics tasks, which benefit from the growth rate of available digital text and computational power. Better known as Distributional Semantic Models (DSM), such methods are an easy way of building geometrical spaces of concepts, also known as Semantic (or Word) Spaces, by skimming through huge corpora of text in order to learn the contexts of usage of words. In the resulting space, semantic relatedness/similarity between two words is expressed by the closeness between word-points. Thus, the semantic similarity can be computed as the cosine of the angle between the two vectors that represent the words. DSM can be built using different techniques. One common approach is Latent Semantic Analysis (Landauer and Dumais, 1997), which is based on the Singular Value Decomposition of the word co-occurrence matrix. However, many other methods that try to take into account word order (Jones and Mewhort, 2007) or predications (Cohen et al., 2010) have been proposed. The Recurrent Neural Network (RNN) methodology (Mikolov et al., 2010) and its variant proposed in the word2vec framework (Mikolov et al., 2013), based on the continuous bag-of-words and skip-gram models, take a completely new perspective. However, most of these techniques build such Semantic Spaces by taking a snapshot of the word co-occurrences over the linguistic corpus. This makes the study of semantic changes across different periods of time difficult to deal with. In this paper we show how one of these DSM techniques, called Random Indexing (RI) (Sahlgren, 2005; Sahlgren, 2006), can be easily extended to allow the analysis of semantic changes of words over time. The ultimate aim is to provide a tool which makes it possible to understand how words change their meaning within a document corpus as a function of time. We choose RI for two main reasons: 1) the method is incremental and requires few computational resources while still retaining good performance; 2) the methodology for building the space can be easily expanded to integrate temporal information. Indeed, the disadvantage of classical DSM approaches is that WordSpaces built on different corpora are not comparable: it is always possible to compare similarities in terms of neighbourhood words or to combine vectors by geometrical operators, such as the tensor product, but these techniques do not allow a direct comparison of vectors belonging to two different spaces. Our approach based on RI is able to build a WordSpace on different time periods and makes all these spaces comparable to each other, actually enabling the analysis of word meaning changes over time by simple vector operations in WordSpaces. The paper is structured as follows: Section 2 provides details about the adopted methodology and the implementation of our framework. Some examples of the potential of our framework are reported in Section 3. Lastly, Section 4 closes the paper.
|
We propose a method for building WordSpaces taking into account information about time. In a WordSpace, words are represented as mathematical points and the similarity is computed according to their closeness. The proposed framework, called TRI, is able to build several WordSpaces in different time periods and to compare vectors across the spaces to understand how the meaning of a word has changed over time. We reported some examples of our framework, which show the potential of our system in capturing word usage changes over time.
| 22 | Distributional Semantics |
| 9_2014 | 2014 | Pierpaolo Basile, Annalina Caputo, Giovanni Semeraro | Combining Distributional Semantic Models and Sense Distribution for Effective Italian Word Sense Disambiguation | ENG | 3 | 1 | 0 | Università di Bari Aldo Moro | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Bari |
Distributional semantics approaches have proven their ability to enhance the performance of overlap-based Word Sense Disambiguation algorithms. This paper shows the application of such a technique to the Italian language, by analysing the usage of two different Distributional Semantic Models built upon ItWaC and Wikipedia corpora, in conjunction with two different functions for leveraging the sense distributions. Results of the experimental evaluation show that the proposed method outperforms both the most frequent sense baseline and other state-of-the-art systems.
|
Given two words to disambiguate, the Lesk (1986) algorithm selects those senses which maximise the overlap between their definitions (i.e. glosses), resulting in a pairwise comparison between all the involved glosses. Since its original formulation, several variations of this algorithm have been proposed in an attempt to reduce its complexity, like the simplified Lesk (Kilgarriff and Rosenzweig, 2000; Vasilescu et al., 2004), or to maximize the chance of overlap, like the adapted version (Banerjee and Pedersen, 2002). One of the limitations of the Lesk approach lies in the exact match between words in the sense definitions. Semantic similarity, rather than word overlap, has been proposed as a method to overcome such a limitation. Earlier approaches were based on the notion of semantic relatedness (Patwardhan et al., 2003) and tried to exploit the relationships between synsets in the WordNet graph. More recently, Distributional Semantic Models (DSM) have stood out as a way of computing such semantic similarity. DSM allow the representation of concepts in a geometrical space through word vectors. This kind of representation captures the semantic relatedness that occurs between words in paradigmatic relations, and enables the computation of semantic similarity between whole sentences. Broadening the definition of semantic relatedness, Patwardhan and Pedersen (2006) took into account WordNet contexts: a gloss vector is built for each word sense using its definition and those of related synsets in WordNet. A distributional thesaurus is used for the expansion of both glosses and the context in Miller et al. (2012), where the overlap is computed as in the original Lesk algorithm. More recently, Basile et al. (2014) proposed a variation of the Lesk algorithm based on both the simplified and the adapted version. This method combines the enhanced overlap, given by the definitions of related synsets, with the reduced number of matchings, which are limited to the contextual words in the simplified version. The evaluation was conducted on the SemEval-2013 Multilingual Word Sense Disambiguation task (Navigli et al., 2013), and involved the use of BabelNet (Navigli and Ponzetto, 2012) as sense inventory. While performance for the English task was above that of the other task participants, the same behaviour was not reported for the Italian language. This paper proposes a deeper investigation of the algorithm described in Basile et al. (2014) for the Italian language. We analyse the effect on the disambiguation performance of the use of two different corpora for building the distributional space. Moreover, we introduce a new sense distribution function (SDfreq), based on synset frequency, and compare its capability in boosting the distributional Lesk algorithm with respect to the one proposed in Basile et al. (2014). The rest of the paper is structured as follows: Section 2 provides details about the Distributional Lesk algorithm and DSM, and defines the two above-mentioned sense distribution functions exploited in this work. The evaluation, along with details about the two corpora and how the DSM are built, is presented in Section 3, which is followed by some conclusions about the presented results.
|
This paper proposed an analysis for the Italian language of an enhanced version of the Lesk algorithm, which replaces word overlap with distributional similarity. We analysed two DSM built over the ItWaC and Wikipedia corpora along with two sense distribution functions (SDprob and SDfreq). The sense distribution functions were computed over MultiSemCor, in order to avoid missing references between Italian and English synsets. The combination of the ItWaC-based DSM with the SDprob function resulted in the best overall result for the Italian portion of the SemEval-2013 Task 12 dataset.
| 7 | Lexical and Semantic Resources and Analysis |
| 10_2014 | 2014 | Valerio Basile | A Lesk-inspired Unsupervised Algorithm for Lexical Choice from WordNet Synsets | ENG | 1 | 0 | 0 | University of Groningen | 1 | 1 | 1 | 1 | Valerio Basile | 0 | 0 | Netherlands | Groningen |
The generation of text from abstract meaning representations involves, among other tasks, the production of lexical items for the concepts to realize. Using WordNet as a foundational ontology, we exploit its internal network structure to predict the best lemmas for a given synset without the need for annotated data. Experiments based on re-generation and automatic evaluation show that our novel algorithm is more effective than a straightforward frequency-based approach.
|
Many linguists argue that true synonyms don’t exist (Bloomfield, 1933; Bolinger, 1968). Yet, words with similar meanings do exist and they play an important role in language technology, where lexical resources such as WordNet (Fellbaum, 1998) employ synsets, sets of synonyms that cluster words with the same or similar meaning. It would be wrong to think that any member of a synset would be an equally good candidate for every application. Consider for instance the synset {food, nutrient}, a concept whose gloss in WordNet is “any substance that can be metabolized by an animal to give energy and build tissue”. In (1), this needs to be realized as “food”, but in (2) as “nutrient”. 1. It said the loss was significant in a region where fishing provides a vital source of food/nutrient. 2. The Kind-hearted Physician administered a stimulant, a tonic, and a food/nutrient, and went away. A straightforward solution based on n-gram models or grammatical constraints (“a food” is ungrammatical in the example above) is not always applicable, since it would be necessary to generate the complete sentence first to exploit such features. This problem of lexical choice is what we want to solve in this paper. In a way it can be regarded as the reverse of WordNet-based Word Sense Disambiguation, where instead of determining the right synset for a certain word in a given context, the problem is to decide which word of a synset is the best choice in a given context. Lexical choice is a key task in the larger framework of Natural Language Generation, where an ideal model has to produce varied, natural-sounding utterances. In particular, generation from purely semantic structures, carrying little to no syntactic or lexical information, needs solutions that do not depend on pre-made choices of words to express generic concepts. The input to a lexical choice component in this context is some abstract representation of meaning that may specify to different extents the linguistic features that the expected output should have. WordNet synsets are good candidate representations of word meanings, as WordNet could be seen as a dictionary, where each synset has its own definition in written English. WordNet synsets are also well suited for lexical choice, because they consist of actual sets of lemmas, considered to be synonyms of each other in specific contexts. Thus, the problem presented here is restricted to the choice of lemmas from WordNet synsets. Despite its importance, the lexical choice problem is not broadly considered by the NLG community, one of the reasons being that it is hard to evaluate. Information retrieval techniques fail to capture not-so-wrong cases, i.e. when a system produces a lemma different from the gold standard but still appropriate to the context. In this paper we present an unsupervised method to produce lemmas from WordNet synsets, inspired by the literature on WSD and applicable to every abstract meaning representation that provides links from concepts to WordNet synsets.
|
In this paper we presented an unsupervised algorithm for lexical choice from WordNet synsets, called Ksel, that exploits the WordNet hierarchy of hypernyms/hyponyms to produce the most appropriate lemma for a given synset. Ksel performs better than an already high baseline based on the frequency of lemmas in an annotated corpus. The future direction of this work is at least twofold. On the one hand, being based purely on a lexical resource, the Ksel approach lends itself nicely to being applied to different languages by leveraging multi-lingual resources like BabelNet (Navigli and Ponzetto, 2012). On the other hand, we want to exploit existing annotated corpora such as the GMB to solve the lexical choice problem in a supervised fashion, that is, ranking candidate lemmas based on features of the semantic structure, in the same track as our previous work on generation from word-aligned logical forms (Basile and Bos, 2013).
| 7 | Lexical and Semantic Resources and Analysis |
| 11_2014 | 2014 | Andrea Bellandi, Davide Albanesi, Alessia Bellusci, Andrea Bozzi, Emiliano Giovannetti | The Talmud System: a Collaborative Web Application for the Translation of the Babylonian Talmud Into Italian | ENG | 5 | 1 | 0 | CNR-ILC | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Pisa |
In this paper we introduce the Talmud System, a collaborative web application for the translation of the Babylonian Talmud into Italian. The system we are developing in the context of the “Progetto Traduzione del Talmud Babilonese” has been designed to improve the experience of collaborative translation using Computer-Assisted Translation technologies and providing a rich environment for the creation of comments and the annotation of text on a linguistic and semantic basis.
|
Alongside the Bible, the Babylonian Talmud (BT) is the Jewish text that has most influenced Jewish life and thought over the last two millennia. The BT corresponds to the effort of late antique scholars (Amoraim) to provide an exegesis of the Mishnah, an earlier rabbinic legal compilation, divided into six “orders” (sedarim) corresponding to different categories of Jewish law, with a total of 63 tractates (massekhtaot). Although following the inner structure of the Mishnah, the BT discusses only 37 tractates, with a total of 2711 double-sided folia in the printed edition (Vilna, XIX century). The BT is a comprehensive literary creation, which went through an intricate process of oral and written transmission, was expanded in every generation before its final redaction, and has been the object of explanatory commentaries and reflections from the Medieval Era onwards. In its long history of formulation, interpretation, transmission and study, the BT reflects inner developments within the Jewish tradition as well as the interactions between Judaism and the cultures with which the Jews came into contact (Strack and Stemberger, 1996). In the past decades, online resources for studying Rabbinic literature have considerably increased and several digital collections of Talmudic texts and manuscripts are nowadays available (Lerner, 2010). In particular, scholars as well as a larger public of users can benefit from several new computing technologies applied to the research and the study of the BT, such as (i.) HTML (Segal, 2006), (ii.) optical character recognition, (iii.) three-dimensional computer graphics (Small, 1999), (iv.) text encoding, text and data mining, (v.) image recognition (Wolf et al., 2011(a); Wolf et al., 2011(b); Shweka et al., 2013), and (vi.) computer-supported learning environments (Klamma et al., 2005; Klamma et al., 2002). In the context of the “Progetto Traduzione del Talmud Babilonese”, the Institute for Computational Linguistics of the Italian National Research Council (ILC-CNR) is in charge of developing a collaborative Java-EE web application for the translation of the BT into Italian by a team of translators. The Talmud System (TS) already includes Computer-Assisted Translation (CAT), Knowledge Engineering and Digital Philology tools and, in future versions, will include Natural Language Processing tools for Hebrew/Aramaic, each of which will be outlined in detail in the next Sections.
|
We here introduced the Talmud System, a collaborative web application for the translation of the Babylonian Talmud into Italian, integrating technologies belonging to the areas of (i.) Computer-Assisted Translation, (ii.) Digital Philology, (iii.) Knowledge Engineering and (iv.) Natural Language Processing. Through the enhancement of the already integrated components (i., ii., iii.) and the inclusion of new ones (iv.), the TS will allow, in addition to improving the quality and pace of the translation, a multi-layered navigation (linguistic, philological and semantic) of the translated text (Bellandi et al., 2014(c)).
| 7 | Lexical and Semantic Resources and Analysis |
| 12_2014 | 2014 | Alessia Bellusci, Andrea Bellandi, Giulia Benotto, Amedeo Cappelli, Emiliano Giovannetti, Simone Marchi | Towards a Decision Support System for Text Interpretation | ENG | 6 | 2 | 1 | CNR-ILC | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Italy | Pisa |
This article illustrates the first steps towards the implementation of a Decision Support System aimed to recreate a research environment for scholars and provide them with computational tools to assist in the processing and interpretation of texts. While outlining the general characteristics of the system, the paper presents a minimal set of user requirements and provides a possible use case on Dante’s Inferno.
|
A text represents a multifaceted object, resulting from the intersection of different expressive layers (graphemic, phonetic, syntactic, lexico-semantic, ontological, etc.). A text is always created by a writer with a specific attempt to outline a certain subject in a particular way. Even when it is not a literary creation, a given text follows its writer’s specific intention and is written in a distinct form. The text creator’s intention is not always self-evident and, even when it is, a written piece might convey very different meanings depending on the various readers analysing it. Texts can be seen, in fact, as communication media between writers and readers. Regardless of the epistemological theory about where meaning emerges in the reader-text relationship (Objectivism, Constructivism, Subjectivism), a text needs a reader as much as a writer to be expressive (Chandler, 1995). The reader goes beyond the explicit information given in the text, by making certain inferences and evaluations, according to his/her background, experience, knowledge and purpose. Therefore, interpretation depends on both the nature of the given text and the reader/interpreter; it can be understood as the goal, the process and the outcome of the analytic activity conducted by a certain reader on a given text under specific circumstances. Interpretation corresponds to the different, virtually infinite, mental frameworks and cognitive mechanisms activated in a certain reader/interpreter when examining a given text. The nature of the interpretation of a given text can be philological, historical, psychological, etc.; a psychological interpretation can be Freudian, Jungian, and so on. Furthermore, the different categories of literary criticism and the various interpretative approaches might be very much blurred and intertwined, i.e. a historical interpretation might involve philological, anthropological, political and religious analyses. While scholars are generally aware of their mental process of selection and categorization when reading/interpreting a text and, thus, can re-adjust their interpretative approach while they operate, an automatic system has often proved unfit for qualitative analysis due to the complexity of text meaning and text interpretation (Harnad, 1990). Nevertheless, a few semi-automatic systems for qualitative interpretation have been proposed in the last decades. The most outstanding of them is ATLAS.ti, a commercial system for qualitative analysis of unstructured data, which was applied in the early nineties to text interpretation (Muhr, 1991). ATLAS.ti, however, appears too general to respond to the articulated needs of a scholar studying a text, lacking advanced text analysis tools and automatic knowledge extraction features. The University of Southampton and Birkbeck University are currently working on a commercial project, SAMTLA, aimed at creating a language-agnostic research environment for studying textual corpora with the aid of computational technologies. In the past, concerning the interpretation of literary texts, the introduction of text annotation approaches and the adoption of high-level markup languages made it possible to go beyond the typical use of concordances (DeVuyst, 1990; Sutherland, 1990; Sperberg-McQueen and Burnard, 1994). In this context, several works have been proposed for the study of Dante’s Commedia.
One of the first works involved the definition of a meta-representation of the text of the Inferno and the construction of an ontology formalizing a portion of the world of Dante’s Commedia (Cappelli et al., 2002). Data mining procedures able to conceptually query the aforementioned resources have also been implemented (Baglioni et al., 2004). Among the other works on Dante we cite The World of Dante (Parker, 2001), Digital Dante of Columbia University (LeLoup and Ponterio, 2006) and the Princeton Dante Project (Hollander, 2013). A “multidimensional” social network of characters, places and events of Dante’s Inferno has been constructed to make evident the innermost structure of the text (Cappelli et al., 2011) by leveraging the expressive power of graph representations of data (Newman, 2003; Newman et al., 2006; Easley and Kleinberg, 2010; Meirelles, 2013). A touch-table approach to Dante’s Inferno, based on the same social network representation, has also been implemented (Bordin et al., 2013). More recently, a semantic network of Dante’s works has been developed alongside an RDF representation of the knowledge embedded in them (Tavoni et al., 2014). Other works involving text interpretation and graph representations have been carried out on other literary texts, such as Alice in Wonderland (Agarwal et al., 2012) and Promessi Sposi (Bolioli et al., 2013). As discussed by semiologists, linguists and literary scholars (Eco, 1979; Todorov, 1973; Segre, 1985; Roque, 2012), the interpretation of a text may require a complex structuring and interrelation of the information belonging to its different expressive layers. The Decision Support System (DSS) we introduce here aims to assist scholars in their research projects, by providing them with semi-automatic tools specifically developed to support the interpretation of texts at different and combined layers. We chose to start from the analysis of literary texts to be able to face the most challenging aspects related to text interpretation. This work is the third of a series describing the progressive development of the general approach: for the others refer to (Bellandi et al., 2013; Bellandi et al., 2014). In what follows, we describe the general characteristics of the DSS we plan to develop, accompanied by a minimal set of user requirements (2.), we present a possible scenario in which the system can be applied (3.), and we provide some conclusive notes (4.).
|
In this work, we presented our vision of a Decision Support System for the analysis and interpretation of texts. In addition to outlining the general characteristics of the system, we illustrated a case study on Dante’s Inferno showing how the study of a text can involve elements belonging to three different layers (ontological, dialogical and terminological), thus allowing both textual and contextual elements to be taken into account in an innovative way. The next steps will consist in the extension of the user requirements and the design of the main components of the system. We plan to start with the basic features allowing a user to create a project and upload documents, and then provide the minimal text processing tools necessary for the definition and management of (at least) the graphemic layer.
| 5 |
Latin Resources
|
13_2014
| 2,014 |
Luisa Bentivogli, Bernardo Magnini
|
An Italian Dataset of Textual Entailment Graphs for Text Exploration of Customer Interactions
|
ENG
| 2 | 1 | 1 |
Fondazione Bruno Kessler
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Trento
|
This paper reports on the construction of a dataset of textual entailment graphs for Italian, derived from a corpus of real customer interactions. Textual entailment graphs capture relevant semantic relations among text fragments, including equivalence and entailment, and are proposed as an informative and compact representation for a variety of text exploration applications.
|
Given the large production and availability of textual data in several contexts, there is an increasing need for representations of such data that are able at the same time to convey the relevant information contained in the data and to allow compact and efficient text exploration. As an example, customer interaction analytics requires tools that allow for a fine-grained analysis of the customers’ messages (e.g. complaining about a particular aspect of a particular service or product) and, at the same time, speed up the search process, which commonly involves a huge amount of interactions, on different channels (e.g. telephone calls, emails, posts on social media), and in different languages. A relevant proposal in this direction has been the definition of textual entailment graphs (Berant et al., 2010), where graph nodes represent predicates (e.g. marry(x, y)) and edges represent the entailment relations between pairs of predicates. This recent research line in Computational Linguistics capitalizes on results obtained in the last ten years in the field of Recognizing Textual Entailment (Dagan et al., 2009), where a successful series of shared tasks have been organized to show and evaluate the ability of systems to draw text-to-text semantic inferences. In this paper we present a linguistic resource consisting of a collection of textual entailment graphs derived from real customer interactions in Italian social fora, which is our motivating scenario. We extend the earlier, predicate-based variant of entailment graphs to capture entailment relations among more complex text fragments. The resource is meant to be used both for training and evaluating systems that can automatically build entailment graphs from a stream of customer interactions. Entailment graphs can then be used by call center managers to browse large amounts of interactions and efficiently monitor the main reasons for customers’ calls. We present the methodology for the creation of the dataset as well as statistics about the collected data. This work has been carried out in the context of the EXCITEMENT project1, in which a large European consortium aims at developing a shared software infrastructure for textual inferences, i.e. the EXCITEMENT Open Platform2 (Padó et al., 2014; Magnini et al., 2014), and at experimenting with new technology (i.e. entailment graphs) for customer interaction analytics.
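As a rough illustration of the data structure just described (not the format actually used in the resource, which is assumed here only for exposition), an entailment graph can be represented as a directed graph whose nodes are text fragments and whose edges mean "entails"; equivalence is simply mutual entailment. A minimal sketch in Python, with invented customer-interaction fragments:

```python
# Minimal sketch of a textual entailment graph: nodes are free-text fragments,
# directed edges mean "entails". Fragments below are invented examples.
from collections import defaultdict

class EntailmentGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # fragment -> set of entailed fragments

    def add_entailment(self, premise, hypothesis):
        self.edges[premise].add(hypothesis)

    def add_equivalence(self, a, b):
        # Equivalence = entailment in both directions.
        self.add_entailment(a, b)
        self.add_entailment(b, a)

    def entails(self, premise, hypothesis):
        # Entailment is transitive: follow edges from the premise.
        seen, stack = set(), [premise]
        while stack:
            node = stack.pop()
            if node == hypothesis:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.edges[node])
        return False

g = EntailmentGraph()
g.add_entailment("I cannot top up my prepaid card", "I have a problem with my prepaid card")
g.add_entailment("I have a problem with my prepaid card", "I have a problem with a service")
print(g.entails("I cannot top up my prepaid card", "I have a problem with a service"))  # True
```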
|
We have presented a new linguistic resource for Italian, based on textual entailment graphs derived from real customer interactions. We see a twofold role for this resource: (i) on one side it provides empirical evidence of the important role of semantic relations and provides insights for new developments of the textual entailment framework; (ii) on the other side, a corpus of textual entailment graphs is crucial for the realization and evaluation of automatic systems that can build entailment graphs for concrete application scenarios.
| 7 |
Lexical and Semantic Resources and Analysis
|
14_2014
| 2,014 |
Lorenzo Bernardini, Irina Prodanof
|
L'integrazione di informazioni contestuali e linguistiche nel riconoscimento automatico dell'ironia
|
ITA
| 2 | 1 | 0 |
Università di Pavia
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Pavia
|
Verbal irony is a highly complex rhetorical figure that belongs to the pragmatic level of language. So far, however, all computational attempts at the automatic recognition of irony have limited themselves to searching for linguistic cues that could indicate its presence, without considering pragmatic and contextual factors. In this work, we evaluate the possibility of integrating simple, computable contextual factors with linguistic information in order to improve the effectiveness of automatic irony recognition systems on the comments of online newspapers.
|
Verbal irony is a very complex rhetorical figure that is placed at the pragmatic level of language. Although an ironist can use phonological, prosodic, morphological, lexical, syntactic and semantic elements to produce irony, irony is not an intrinsic property of the utterance itself and is not determined by its formal characteristics. Irony is rather an interpretative phenomenon linked to the expectations that a listener develops, regarding the intentions of the author of an utterance produced in a specific context, starting from a broad set of encyclopedic and contextual information.
|
This work has presented the possibility of using contextual information to automatically identify irony in the comments of regular readers of online newspapers. For this purpose, a possible computational approach was proposed to identify the most ironic commenters in a community, suggesting a different treatment of the linguistic material produced by them compared to the other commenters. The integration of contextual and linguistic information could have a positive impact on the effectiveness of automatic irony recognition systems, which would play an important role in the field of Sentiment Analysis. We are currently expanding the research by evaluating the influence of information such as the type of newspaper, the topic of the news and the length of the comment on a broader comment corpus built on multiple newspapers. Obviously, an integration of such basic contextual information would not completely solve the problem of how to automatically identify irony in online texts. However, this work reflects the firm belief that progressive attempts to integrate simple, computable contextual information with linguistic information are today the best way to attempt to automatically address phenomena of a pragmatic nature as complex and multifaceted as irony.
| 6 |
Sentiment, Emotion, Irony, Hate
|
15_2014
| 2,014 |
Brigitte Bigi, Caterina Petrone
|
A generic tool for the automatic syllabification of Italian
|
ENG
| 2 | 2 | 1 |
CNRS, Aix-Marseille Université
| 1 | 1 | 1 | 2 |
Brigitte Bigi, Caterina Petrone
| 0 |
0
|
France
|
Marseille
|
This paper presents a rule-based automatic syllabification approach for Italian. Differently from previously proposed syllabifiers, our approach is more user-friendly since the Python algorithm includes both a Command-Line User Interface and a Graphical User Interface. Moreover, phonemes, classes and rules are listed in an external configuration file of the tool, which can be easily modified by any user. Syllabification performance is consistent with manual annotation. This algorithm is included in SPPAS, a software for automatic speech segmentation, and distributed under the terms of the GPL license.
|
This paper presents an approach to the automatic detection of syllable boundaries for Italian speech. This syllabifier makes use of the phonetized text. The syllable is credited as a linguistic unit conditioning both segmental phonology (e.g., consonant or vowel lengthening) and prosodic phonology (e.g., tune-text association, rhythmical alternations), and its automatic annotation represents a valuable tool for quantitative analyses of large speech data sets. While the phonological structure of the syllable is similar across different languages, phonological and phonotactic rules of syllabification are language-specific. Automatic approaches to syllable detection thus have to incorporate such constraints to precisely locate syllable boundaries. The question then arises of how to obtain an acceptable syllabification for a particular language and for a specific corpus (a list of words, a written text or an oral corpus of more or less casual speech). In the state of the art, syllabification can be made directly from a text file as in (Cioni, 1997), or directly from the speech signal as in (Petrillo and Cutugno, 2003). There are two broad approaches to the problem of automatic syllabification: a rule-based approach and a data-driven approach. The rule-based method effectively embodies some theoretical position regarding the syllable, whereas the data-driven paradigm tries to infer new syllabifications from examples syllabified by human experts. In (Adsett et al., 2009), three rule-based automatic systems and two data-driven automatic systems (Syllabification by Analogy and the Look-Up Procedure) are compared to syllabify a lexicon. Indeed, (Cioni, 1997) proposed an algorithm for the syllabification of written texts in Italian, by syllabifying words directly from a text. It is a deterministic algorithm based upon the use of recursion and of a binary tree in order to detect the boundaries of the syllables within each word. The outcome of the algorithm is the production of the so-called canonical syllabification (the stream of syllabified words). On the other hand, (Petrillo and Cutugno, 2003) presented an algorithm for speech syllabification that directly uses the audio signal, for both English and Italian. The algorithm is based on the detection of the most relevant energy maxima, using two different energy calculations: the former from the original signal, the latter from a low-pass filtered version. This method makes it possible to perform the syllabification from the audio signal only, i.e. without any lexical information. More recently, (Iacoponi and Savy, 2011) developed a complete rule-based syllabifier for Italian (named Sylli) that works on phonemic texts. The rules are based on phonological principles. The system is composed of two transducers (one for the input and one for the output), the syllabification algorithm and the mapping list (i.e., the vocabulary). The two transducers convert the two-dimensional linear input to a three-dimensional phonological form that is necessary for the processing in the phonological module and then send the phonological form back into a linear string for output printing. The system achieved good performance compared to a manual syllabification: more than 98.5% (syllabification of spoken words). This system is distributed as a package written in the C language and must be compiled; the program is an interactive test program that is used in command-line mode.
After the program reads in the phone set definition and syllable structure parameters, it loops, asking the user to type in a phonetic transcription, calculating syllable boundaries for it, and then displaying them. When the user types in a null string, the cycling stops and execution ends. Finally, there are two main limitations: this tool is only intended for computer scientists, and it does not support time-aligned input data. With respect to these already existing approaches and/or systems, the novel aspects of the work reported in this paper are as follows: to propose a generic and easy-to-use tool to identify syllabic segments from phonemes; and to propose a generic algorithm, together with a set of rules for the particular context of Italian spontaneous speech. In this context, "generic" means that the phone set, the classes and the rules are easily changeable; and "easy-to-use" means that the system can be used by any user.
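To make the idea of an externally configurable, rule-based syllabifier concrete, here is a minimal sketch under assumed simplifications: the phoneme-to-class mapping and the single toy rule below are illustrative only and are not the actual SPPAS configuration or the Italian rule set described in the paper.

```python
# Minimal sketch of a configurable rule-based syllabifier (illustrative only:
# the phoneme classes and the single rule below are NOT the SPPAS configuration).
PHONEME_CLASSES = {
    "a": "V", "e": "V", "i": "V", "o": "V", "u": "V",    # vowels
    "p": "C", "t": "C", "k": "C", "b": "C", "d": "C",
    "g": "C", "f": "C", "s": "C", "m": "C", "n": "C",
    "l": "C", "r": "C",
}

def syllabify(phonemes):
    """Toy rule: in a V C+ V sequence, put the boundary before the last consonant."""
    classes = [PHONEME_CLASSES.get(p, "C") for p in phonemes]
    boundaries = []
    i = 0
    while i < len(classes):
        if classes[i] == "V":
            j = i + 1
            while j < len(classes) and classes[j] == "C":
                j += 1                     # skip the intervocalic consonant cluster
            if j < len(classes):           # another vowel follows: place a boundary
                boundaries.append(max(i + 1, j - 1))
            i = j
        else:
            i += 1
    syllables, start = [], 0
    for b in boundaries:
        syllables.append("".join(phonemes[start:b]))
        start = b
    syllables.append("".join(phonemes[start:]))
    return syllables

print(syllabify(list("amore")))   # ['a', 'mo', 're']
# Clusters such as /tr/ require richer, class-sensitive rules, as in the real tool.
```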
|
The paper presented a new feature of the SPPAS tool that lets the user provide syllabification rules and perform automatic segmentation by means of a well-designed graphical user interface. The system is mainly dedicated to linguists that would like to design and test their own set of rules. A manual verification of the output of the program confirmed the accuracy of the proposed set of rules for syllabification of dialogues. Furthermore, the rules or the list of phonemes can be easily modified by any user. Possible uses of the program include speech corpus syllabification, dictionary syllabification, and quantitative syllable analysis.
| 13 |
Multimodal
|
16_2014
| 2,014 |
Andrea Bolioli, Eleonora Marchioni, Raffaella Ventaglio
|
Errori di OCR e riconoscimento di entità nell'Archivio Storico de La Stampa
|
ITA
| 3 | 2 | 0 |
CELI Language Technology
| 1 | 0 | 0 | 0 |
0
| 3 |
Andrea Bolioli, Eleonora Marchioni, Raffaella Ventaglio
|
Italy
|
Turin
|
In this article we present the project of recognition of entity mentions carried out for the Historical Archive of La Stampa, together with a brief analysis of the OCR errors found in the documents. The automatic annotation was carried out on about 5 million articles, in editions from 1910 to 2005.
|
In this article we briefly describe the project of automatic annotation of entity mentions carried out on the documents of the Historical Archive of La Stampa, i.e. the automatic recognition of mentions of people, entities and organizations (the "named entities") in about 5 million articles of the newspaper, which followed the wider project of digitalization of the Historical Archive.1 Although the project dates back to a few years ago (2011), we think it may still be of interest. (1 As reported on the website of the Historical Archive (www.archiviolastampa.it), "The project of digitalization of the Historical Archive of La Stampa was carried out by the Committee for the Library of Journalistic Information (CB-DIG), promoted by the Piemonte Region, the San Paolo Company, the CRT Foundation and the publisher La Stampa, with the aim of creating an online database intended for public consultation and accessible for free.") It was the first project of digitalization of the entire historical archive of an Italian newspaper, and one of the first international projects of automatic annotation of an entire archive. In 2008, the New York Times had released an annotated corpus containing about 1.8 million articles from 1987 to 2007 (New York Times Annotated Corpus, 2008), in which people, organizations, places and other relevant information had been manually annotated using a controlled vocabulary. The Historical Archive of La Stampa contains a total of 1,761,000 digitalized pages, for a total of over 12 million articles, from various publications (La Stampa, Stampa Sera, Tuttolibri, Tuttoscienze, etc.) from 1867 to 2005. The automatic recognition of entities is limited to the articles of La Stampa after 1910, identified as such by the presence of a title, i.e. approximately 4,800,000 documents. The annotation of the mentions in the articles makes it possible to analyze the co-occurrences between entities and other linguistic data, their temporal trends, and to generate infographics, which we cannot discuss in depth in this article. In Figure 1, we only show as an example the chart of the people most cited in the newspaper articles over the decades. In the rest of the article we briefly present an analysis of the OCR errors present in the transcriptions, before describing the procedures adopted for the automatic recognition of the mentions and the results obtained.
|
In this short article we described some of the methodologies and issues of the project of automatic annotation of 5 million articles of the Historical Archive of La Stampa. We encountered some difficulties related to the presence of considerable OCR errors and to the breadth and variety of the archive (the entire archive spans from 1867 to 2005). These problems could be addressed positively using information and methodologies that we could only marginally experiment with in this project, such as crowdsourcing.
| 7 |
Lexical and Semantic Resources and Analysis
|
17_2014
| 2,014 |
Federico Boschetti, Riccardo Del Gratta, Marion Lamé
|
Computer Assisted Annotation of Themes and Motifs in Ancient Greek Epigrams: First Steps
|
ENG
| 3 | 1 | 0 |
CNR-ILC
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Pisa
|
This paper aims at illustrating some tools to assist the manual annotation of themes and motifs in literary and epigraphic epigrams for the PRIN 2010/2011 Memorata Poetis Project.
|
The Memorata Poetis Project is a nationally funded project (PRIN 2010/2011), led by Professor Paolo Mastandrea, “Ca’ Foscari” University of Venice, in continuity with the Musisque Deoque Project (Mastandrea and Spinazzè, 2011). It aims at the study of the intertextuality between epigraphic and literary epigrams in the Greek, Latin, Arabic and Italian languages. Some of those epigrams are translated into several languages. Currently access to the website (http://memoratapoetis.it) is restricted to the project workgroups, but it will become public before the end of the project, i.e. February 2016. To understand the specific goal of this work in progress, a broader presentation of the project is necessary. Epigrams are short poems and follow specific schemes, contents and structures. These short poems are transmitted both by epigraphs and by manuscripts, with interesting relations between the different traditions: an epigram can have been copied from stone to parchment, losing its original function and contextualization or, on the contrary, a literary epigram can have been adapted to a new epigraphic situation. As inscriptions, epigrams are a communication device inserted in a cultural construct. They are part of an information system and this implies, in addition to texts and their linguistic aspects: writings, contexts and iconotextual relationships. This holistic and systemic construction creates meanings: in Antiquity and in the Middle Ages, for instance, epigrams, as inscriptions, were often epitaphs. Intertextuality also takes into account this relation between images of the context and the epigrams. For instance, “fountain” is a recurrent motif in epigrams. An epigram that refers to divinities of water could be inscribed on a fountain. Such an epigraphic situation contributes to the global meaning. It helps to study the original audience and the transmission of epigrams. The reuse of themes and motifs illustrates how authors work and may influence other authors. From epigraphs to modern editions of epigrams, intertextuality draws the movement of languages and concepts across the history of epigrams. Here is an example of a poetic English translation of one of Theocritus’ epigrams: XV. [For a Tripod Erected by Damoteles to Bacchus] The precentor Damoteles, Bacchus, exalts / Your tripod, and, sweetest of deities, you. / He was champion of men, if his boyhood had faults; / And he ever loved honour and seemliness too. (transl. by Calverley, 1892, https://archive.org/details/Theocritus/TranslatedIntoEnglishVerseByC.s.Calverley) Effectively, European cultures have enjoyed epigrams since Antiquity, copied them, translated them, and epigrams became a genre that philology studies ardently. This intercultural process transforms epigrams and, at the same time, tries to keep their essence identifiable in those themes and motifs. Naturally, those themes and motifs, such as “braveness”, “pain”, “love” or, more concretely, “rose”, “shield”, “bee”, reflect the concepts in use in several different languages. The Memorata Poetis Project tries to capture metrical, lexical and semantic relations among the documents of this heterogeneous multilingual corpus. The study of intertextuality is important to understand the transmission of knowledge from author to author, from epoch to epoch, or from civilization to civilization. Even if the mechanisms of the transmission are not explicit, traces can be found through allusions, or thematic similarities.
If the same themes are expressed through the same motif(s), probably there is a relation between the civilizations which express this concept in a literary form, independently of the language in which it is expressed. For instance, the concept of the shortness of life and the necessity to enjoy this short time is expressed both in Greek and Latin literature: Anthologia Graeca 11, 56: Πῖνε καὶ εὐφραίνου· τί γὰρ αὔριον ἢ τί τὸ μέλλον, / οὐδεὶς γινώσκει. (transl.: Drink and be happy. Nobody knows what tomorrow or the future will be.) Catullus, carmina, 5: Viuamus, mea Lesbia, atque amemus / ... / Nobis cum semel occidit breuis lux, / Nox est perpetua una dormienda. (transl.: Let us live and love, my Lesbia [...] when our short light has set, we have to sleep a never ending night.) Whereas other units are working on Greek, Latin, and Italian texts, the ILC-CNR unit of the project is currently in charge of the semantic annotation of a small part of the Greek texts and of all the Arabic texts, and it is developing computational tools to assist the manual annotation, in order to suggest the most suitable tags that identify themes and motifs. The semantic annotation of literary and historical texts in collaborative environments is a relevant topic in the age of the Semantic Web. At least two approaches are possible: a top-down approach, in which an ontology or a predefined taxonomy is used for the annotation, and a bottom-up approach, in which the text can be annotated with unstructured tags that will be organized in a second stage of the work. By combining these approaches, it is possible to collect more evidence to establish agreement on the annotated texts.
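One simple way in which such assisting tools could suggest candidate tags is to index each theme or motif with a small set of lemmas and rank tags by lexical overlap with the epigram. The sketch below is purely illustrative: the tag lexicon and the matching strategy are assumptions, not the Memorata Poetis tools themselves.

```python
# Illustrative sketch of tag suggestion for theme/motif annotation.
# The tag lexicon below is made up for the example, not the project's taxonomy.
TAG_LEXICON = {
    "shortness-of-life": {"short", "life", "tomorrow", "night", "die"},
    "wine-and-joy": {"drink", "wine", "happy", "enjoy"},
    "love": {"love", "kiss", "lesbia", "heart"},
}

def suggest_tags(text, top_n=2):
    """Rank candidate tags by how many of their lemmas occur in the text."""
    tokens = {t.strip(".,;:!?").lower() for t in text.split()}
    scored = [(len(lemmas & tokens), tag) for tag, lemmas in TAG_LEXICON.items()]
    scored = [(score, tag) for score, tag in scored if score > 0]
    return [tag for score, tag in sorted(scored, reverse=True)[:top_n]]

print(suggest_tags("Drink and be happy. Nobody knows how tomorrow will be."))
# -> ['wine-and-joy', 'shortness-of-life']
```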
|
In conclusion, we have presented a work in progress related to the lexico-semantic instruments under development at the ILC-CNR to assist the annotators collaborating in the Memorata Poetis Project.
| 5 |
Latin Resources
|
18_2014
| 2,014 |
Dominique Brunato, Felice Dell'orletta, Giulia Venturi, Simonetta Montemagni
|
Defining an annotation scheme with a view to automatic text simplification
|
ENG
| 4 | 3 | 1 |
CNR-ILC
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Pisa
|
This paper presents the preliminary steps of ongoing research in the field of automatic text simplification. In line with current approaches, we propose here a new annotation scheme specifically conceived to identify the typologies of changes an original sentence undergoes when it is manually simplified. Such a scheme has been tested on a parallel corpus available for Italian, which we have first aligned at sentence level and then annotated with simplification rules.
|
Automatic Text Simplification (ATS) as a field of research in NLP has been receiving growing attention over the last few years due to the implications it has for both machine- and human-oriented tasks. For what concerns the former, ATS has been employed as a pre-processing step, which provides an input that is easier to analyze by NLP modules, so as to improve the efficiency of, e.g., parsing, machine translation and information extraction. For what concerns the latter, ATS can also play a crucial role in educational and assistive technologies; e.g., it is used for the creation of texts adapted to the needs of particular readers, like children (De Belder and Moens, 2010), L2 learners (Petersen and Ostendorf, 2007), people with low literacy skills (Aluìsio et al., 2008), cognitive disabilities (Bott and Saggion, 2014) or language impairments, such as aphasia (Carroll et al., 1998) or deafness (Inui et al., 2003). From the methodological point of view, while the first attempts were mainly developed as a set of predefined rules based on linguistic intuitions (Chandrasekar et al., 1996; Siddharthan, 2002), current ones are much more prone to adopt data-driven approaches. Within the latter paradigm, the availability of monolingual parallel corpora (i.e. corpora of authentic texts and their manually simplified versions) turned out to be a necessary prerequisite, as they allow for investigating the actual editing operations human experts perform on a text in the attempt to make it more comprehensible for their target readership. This is the case of Brouwers et al. (2014) for French; Bott and Saggion (2014) for Spanish; Klerke and Søgaard (2012) for Danish and Caseli et al. (2009) for Brazilian Portuguese. To our knowledge, only one parallel corpus exists for Italian, which was developed within the EU project Terence, aimed at the creation of suitable reading materials for poor comprehenders (both hearing and deaf, aged 7-11)1. An excerpt of this corpus was used for testing purposes by Barlacchi and Tonelli (2013), who devised the first rule-based system for ATS in Italian focusing on a limited set of linguistic structures. The approach proposed in this paper is inspired by the recent work of Bott and Saggion (2014) for Spanish and differs from the work of Barlacchi and Tonelli (2013) since it aims at learning from a parallel corpus the variety of text adaptations that characterize manual simplification. In particular, we focus on the design and development of a new annotation scheme for the Italian language intended to cover a wide set of linguistic phenomena involved in text simplification.
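For concreteness, a sentence-aligned pair annotated with simplification operations could be represented as below; the operation labels and the Italian example pair are invented placeholders, not the rule inventory or data of the scheme proposed in the paper.

```python
# Illustrative container for a sentence-aligned parallel corpus annotated with
# simplification operations. Labels and the example pair are invented placeholders.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AlignedPair:
    original: str
    simplified: str
    operations: list = field(default_factory=list)   # e.g. ["sentence_split", ...]

corpus = [
    AlignedPair(
        original="Il documento, che era stato redatto in precedenza, fu approvato.",
        simplified="Il documento fu approvato. Era stato scritto prima.",
        operations=["sentence_split", "lexical_substitution", "reordering"],
    ),
]

# Productivity of each operation over the annotated corpus.
rule_counts = Counter(op for pair in corpus for op in pair.operations)
print(rule_counts.most_common())
```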
|
We have illustrated the first annotation scheme for Italian that includes a wide set of simplification rules spanning different levels of linguistic description. The scheme was used to annotate the only existing Italian parallel corpus. We believe such a resource will give valuable insights into human text simplification and create the prerequisites for automatic text simplification. Current developments are devoted to refining the annotation scheme, on the basis of a qualitative and quantitative analysis of the annotation results; we are also testing the suitability of the annotation scheme with respect to other corpora that we are gathering in a parallel fashion. Based on the statistical findings on the productivity of each rule, we will investigate whether and in which way certain combinations of rules affect the distribution of multi-leveled linguistic features between the original and the simplified texts. In addition, we intend to explore the relation between text simplification and a related task, i.e. readability assessment, with the aim of comparing the effects of such combinations of rules on readability scores.
| 11 |
Text Simplification
|
19_2014
| 2,014 |
Tommaso Caselli, Isabella Chiari, Aldo Gangemi, Elisabetta Jezek, Alessandro Oltramari, Guido Vetere, Laure Vieu, Fabio Massimo Zanzotto
|
Senso Comune as a Knowledge Base of Italian language: the Resource and its Development
|
ENG
| 8 | 3 | 0 |
VU Amsterdam, Sapienza Università di Roma, CNR-ISTC, Università di Pavia, Carnegie Mellon University, IBM Italia, CNRS, Università di Roma Tor Vergata
| 8 | 1 | 1 | 3 |
Tommaso Caselli, Alessandro Oltramari, Laure Vieu
| 1 |
Guido Vetere
|
Netherlands, Pennsylvania (USA)
|
Amsterdam, Pittsburgh
|
Senso Comune is a linguistic knowledge base for the Italian language, which accommodates the content of a legacy dictionary in a rich formal model. The model is implemented in a platform which allows a community of contributors to enrich the resource. We provide here an overview of the main project features, including the lexical-ontology model, the process of sense classification, and the annotation of meaning definitions (glosses) and lexicographic examples. We also illustrate the latest work on alignment with MultiWordNet, describing the methodologies that have been experimented with, sharing some preliminary results, and highlighting some remarkable findings about the semantic coverage of the two resources.
|
Senso Comune1 is an open, machine-readable knowledge base of the Italian language. The lexical content has been extracted from a monolingual Italian dictionary2, and is continuously enriched through a collaborative online platform. The knowledge base is freely distributed. Senso Comune’s linguistic knowledge consists in a structured lexicographic model, where senses can be qualified with respect to a small set of ontological categories. Senso Comune’s senses can be further enriched in many ways and mapped to other dictionaries, such as the Italian version of MultiWordnet, thus qualifying as a linguistic Linked Open Data resource. The Senso Comune initiative embraces a number of basic principles. First of all, in the era of user-generated content, lexicography should be able to build on the direct witness of native speakers. Thus, the project views linguistic knowledge acquisition in a way that goes beyond the exploitation of textual sources. Another important assumption is about the relationship between language and ontology (sec. 2.1). The correspondence between linguistic meanings, as they are listed in dictionaries, and ontological categories is not direct (if any), but rather tangential. Linguistic senses commit to the existence of various kinds of entities, but should not in general be confused with (and collapsed to) logical predicates directly interpretable on these entities. Finally, we believe that, like the language itself, linguistic knowledge should be owned by the entire community of speakers; thus the project is committed to keeping the resource open and fully available.
|
In this paper, we have introduced Senso Comune as an open, cooperative knowledge base of the Italian language, and discussed the issue of its alignment with other linguistic resources, such as WordNet. Experiments of automatic and manual alignment with the Italian MultiWordNet have shown that the gap between a native Italian dictionary and a WordNet-based linguistic resource may be relevant, both in terms of coverage and granularity. While this finding is in line with classic semiology (e.g. De Saussure’s principle of arbitrariness), it suggests that more attention should be paid to the semantic peculiarity of each language, i.e. the specific way each language constructs a conceptual view of the world. One of the major features of Senso Comune is the way linguistic senses and ontological concepts are put into relation. Instead of equating senses with concepts, a formal relation of ontological commitment is adopted, which weakens the ontological import of the lexicon. Part of our future research will be dedicated to leveraging this as an enabling feature for the integration of different lexical resources, both across and within national languages.
| 7 |
Lexical and Semantic Resources and Analysis
|
20_2014
| 2,014 |
Fabio Celli, Giuseppe Riccardi
|
CorEA: Italian News Corpus with Emotions and Agreement
|
ENG
| 2 | 0 | 0 |
Università di Trento
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Trento
|
In this paper, we describe an Italian corpus of news blogs, including bloggers’ emotion tags, and annotations of agreement relations amongst blogger-comment pairs. The main contributions of this work are: the formalization of the agreement relation, the design of guidelines for its annotation, the quantitative analysis of the annotators’ agreement.
|
Online news media, such as journals and blogs, allow people to comment on news articles, to express their own opinions and to debate a wide variety of different topics, from politics to gossip. In this scenario, commenters express approval and dislike about topics, other users and articles, either in linguistic form and/or using pre-coded actions (e.g. like buttons). Corriere is one of the most visited Italian news websites, attracting over 1.6 million readers every day1 (source: http://en.wikipedia.org/wiki/Corriere della Sera, retrieved in Jan 2014). The peculiarity of corriere.it with respect to most news websites is that it contains metadata on emotions expressed by the readers about the articles. The emotions (amused, satisfied, sad, preoccupied and indignated) are annotated directly by the readers on a voluntary basis. They can express one emotion per article. In this paper, we describe the collection of a corpus from corriere.it that combines emotions and agreement/disagreement. The paper is structured as follows: in section 2 we will provide an overview of related work, in sections 3 and 4 we will define the agreement/disagreement relation, describe the corpus, comparing it to related work, and provide the annotation guidelines. In section 5 we will draw some conclusions.
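A standard way to quantify inter-annotator agreement on binary agree/disagree labels of this kind is Cohen's kappa; the sketch below uses invented label sequences and is not necessarily the measure reported in the paper.

```python
# Cohen's kappa for two annotators over binary agreement/disagreement labels.
# The example annotations are invented; the paper's own analysis may differ.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement expected from each annotator's label distribution.
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["agree", "disagree", "agree", "agree", "disagree", "agree"]
ann2 = ["agree", "disagree", "disagree", "agree", "disagree", "agree"]
print(round(cohens_kappa(ann1, ann2), 3))   # 0.667
```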
|
We presented the CorEA corpus, a resource that combines agreement/disagreement at message level and emotions at participant level. We are not aware of any other resource of this type for Italian. We found that the best way to annotate agreement/disagreement is with binary classes, filtering out “NA” and neutral cases. In the future, we would like to annotate CorEA at topic level and develop classifiers for agreement/disagreement. We plan to make the corpus available at the end of the project.
| 6 |
Sentiment, Emotion, Irony, Hate
|
21_2014
| 2,014 |
Alessandra Cervone, Peter Bell, Silvia Pareti, Irina Prodanof, Tommaso Caselli
|
Detecting Attribution Relations in Speech: a Corpus Study
|
ENG
| 5 | 3 | 1 |
Università di Pavia, University of Edinburgh, Google Inc., Trento RISE
| 4 | 1 | 1 | 2 |
Peter Bell, Silvia Pareti
| 1 |
Silvia Pareti
|
United Kingdom, California (USA), Italy
|
Edinburgh, Mountain View, Pavia, Trento
|
In this work we present a methodology for the annotation of Attribution Relations (ARs) in speech, which we apply to create a pilot corpus of spoken informal dialogues. This represents the first step towards the creation of a resource for the analysis of ARs in speech and the development of automatic extraction systems. Despite its relevance for speech recognition systems and spoken language understanding, the relation holding between quotations and opinions and their source has been studied and extracted only in written corpora, characterized by a formal register (news, literature, scientific articles). The shift to the informal register and to a spoken corpus widens our view of this relation and poses new challenges. Our hypothesis is that the decreased reliability of the linguistic cues found for written corpora in the fragmented structure of speech could be overcome by including prosodic clues in the system. The analysis of SARC confirms the hypothesis, showing the crucial role played by the acoustic level in providing the missing lexical clues.
|
Our everyday conversations are populated by other people’s words, thoughts and opinions. Detecting quotations in speech represents the key to “one of the most widespread and fundamental topics of human speech” (Bakhtin, 1981, p. 337). A system able to automatically extract a quotation from speech and attribute it to its truthful author would be crucial for many applications. Besides Information Extraction systems aimed at processing spoken documents, it could be useful for Speaker Identification systems (e.g. the strategy of emulating the voice of the reported speaker in quotations could be misunderstood by the system as a change of speaker). Furthermore, attribution extraction could also improve the performance of Dialogue parsing, Named-Entity Recognition and Speech Synthesis tools. On a more basic level, recognizing citations in speech could be useful for automatic sentence boundary detection systems, where quotations, being sentences embedded in other sentences, could be a source of confusion. So far, however, attribution extraction systems have been developed only for written corpora. Extracting the text span corresponding to quotations and opinions and ascribing it to its proper source within a text means reconstructing the Attribution Relations (ARs, henceforth) holding between three constitutive elements (following Pareti (2012)): the Source, the Cue (i.e. the lexical anchor of the AR, e.g. say, announce, idea), and the Content. (1) This morning [Source John] [Cue told] me: [Content ”It’s important to support our leader. I trust him.”]. In the past few years AR extraction has attracted growing attention in NLP for its many potential applications (e.g. Information Extraction, Opinion Mining), while remaining an open challenge. Automatically identifying ARs in a text is a complex task, in particular due to the wide range of syntactic structures that the relation can assume and the lack of a dedicated encoding in the language. While the content boundaries of a direct quotation are explicitly marked by quotation markers, opinions and indirect quotations only partially have syntactic, albeit blurred, boundaries, as they can span intersententially. The subtask of identifying the presence of an AR can be tackled with more success by exploiting the presence of the cue as a lexical anchor establishing the links to source and content spans. For this reason, cues are the starting point or a fundamental feature of extraction systems (Pareti et al., 2013; Sarmento and Nunes, 2009; Krestel, 2007). In our previous work (Pareti and Prodanof, 2010; Pareti, 2012), starting from a flexible and comprehensive definition (Pareti and Prodanof, 2010, p. 3566) of AR, we created an annotation scheme which has been used to build the first large annotated resource for attribution, the Penn Attribution Relations Corpus (PARC)1, a corpus of news articles. In order to address the issue of detecting ARs in speech, we started from the theoretical and annotation framework of PARC to create a comparable resource. Section 2 explains the issues connected with extracting ARs from speech. Section 3 describes the Speech Attribution Relations Corpus (SARC, henceforth) and its annotation scheme. The analysis of the corpus is presented in Section 4. Section 5 reports an example of how prosodic cues can be crucial to identify ARs in speech. Finally, Section 6 draws the conclusions and discusses future work.
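Following the Source/Cue/Content decomposition in example (1), an AR can be stored as a triple of text spans. The class below is only an illustrative container for such a triple, not the PARC or SARC annotation format.

```python
# Illustrative container for an Attribution Relation (AR): Source, Cue, Content.
# It mirrors example (1) in the text; it is not the PARC/SARC file format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributionRelation:
    source: Optional[str]   # who the content is attributed to (may be missing in speech)
    cue: Optional[str]      # lexical anchor, e.g. a verb of saying
    content: str            # the quoted or reported span

ar = AttributionRelation(
    source="John",
    cue="told",
    content="It's important to support our leader. I trust him.",
)
print(f'{ar.source} --{ar.cue}--> "{ar.content}"')
```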
|
The analysis of SARC, the first resource developed to study ARs in speech, has helped to highlight a major problem of detecting attribution in a spoken corpus: the decreased reliability of the lexical cues crucial in previous approaches (completely useless in at least 10% of the cases) and the consequent need to find reliable prosodic clues to complement them. The example provided in Section 5 has shown how the integration of acoustic cues could be useful to improve the accuracy of attribution detection in speech. As a future project we are going to perform a large acoustic analysis of the ARs found in SARC, in order to see whether some reliable prosodic cues can in fact be found and used to develop software able to extract attribution from speech.
| 13 |
Multimodal
|
22_2014
| 2,014 |
Mauro Cettolo, Nicola Bertoldi, Marcello Federico
|
Adattamento al Progetto dei Modelli di Traduzione Automatica nella Traduzione Assistita
|
ITA
| 3 | 0 | 0 |
Fondazione Bruno Kessler
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Trento
|
The integration of machine translation into computer-assisted translation systems is a challenge for both academic and industrial research. Professional translators perceive the ability of automatic systems to adapt to their style and corrections as crucial. In this article we propose a scheme for adapting machine translation models to a specific document, based on a limited amount of manually corrected text, equal to that produced daily by a single translator.
|
Despite significant and continued progress, machine translation (MT) is not yet able to generate text suitable for publication without human intervention. On the other hand, many studies have confirmed that, in the context of computer-assisted translation, the correction of automatically translated texts increases the productivity of professional translators (see Section 2). This application of MT is more effective the greater the integration of the MT system into the entire translation process, which can be achieved by specializing the system both to the specific text to be translated and to the characteristics of the specific translator and his/her corrections. In the translation industry, the typical scenario is that of one or more translators who work for a few days on a given translation project, i.e. on a set of homogeneous documents. After a working day, the information contained in the newly translated texts and the corrections made by the translators can be fed into the automatic system with the aim of improving the quality of the automatic translations proposed the next day. We call this process project adaptation. Project adaptation can be repeated daily until the end of the work, so that all the information that translators implicitly put at the disposal of the system can be best exploited. This article presents one of the results of the European MateCat project,1 in which we have developed a web-based computer-assisted translation system that integrates an MT module that adapts itself to the specific project. The validation experiments we illustrate have been conducted on four language pairs, from English to Italian (IT), French (FR), Spanish (ES) and German (DE), and in two domains, Information and Communication Technologies (ICT) and Legal (LGL). Ideally, the proposed adaptation methods should be evaluated by measuring the gain in productivity on real translation projects. Therefore, as far as possible, we have conducted field evaluations in which professional translators corrected the translations hypothesized by adapted and non-adapted systems. The adaptation was carried out on the basis of a part of the project translated during a preliminary phase, in which the same translator was asked to correct the translations provided by a non-adapted starting system. Since field evaluations are extremely expensive, they cannot be performed frequently enough to compare all possible variants of algorithms and processes. We therefore also conducted laboratory evaluations, in which the corrections of translators were simulated by reference translations. In general, in the legal domain the improvements observed in the laboratory anticipated those measured in the field. On the contrary, the results in the ICT domain were mixed, due to the poor correspondence between the texts used for adaptation and those actually translated during the experiment.
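At a very high level, the daily project-adaptation cycle described above can be summarized by the loop below; the translate/post-edit/adapt functions are illustrative stubs standing in for the actual adaptation machinery, not MateCat APIs.

```python
# High-level sketch of the daily project-adaptation loop in a CAT workflow.
# The three functions are illustrative stubs, not MateCat components.

def translate(models, segments):
    # Stub: a real system would decode with the current (adapted) models.
    return [f"[{models}] {s}" for s in segments]

def collect_post_edits(drafts):
    # Stub: in production these are the translators' corrections of the drafts.
    return [d.replace("[", "").replace("]", "") for d in drafts]

def adapt_models(baseline, post_edited):
    # Stub: a real system would retrain/tune the baseline on the post-edited pairs.
    return f"{baseline}+adapted({len(post_edited)} segs)"

models, post_edited = "baseline", []
project_days = [["segment 1", "segment 2"], ["segment 3"]]   # segments per working day
for day_segments in project_days:
    drafts = translate(models, day_segments)          # today's suggestions
    post_edited += collect_post_edits(drafts)         # corrections made during the day
    models = adapt_models("baseline", post_edited)    # adapted models for tomorrow
print(models)   # baseline+adapted(3 segs)
```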
|
A current research topic for the computer-assisted translation industry is how to endow machine translation systems with the capability of self-adaptation. In this work we have presented a self-adaptation scheme and the results of its validation not only in the laboratory but also in the field, with the involvement of professional translators, thanks to the collaboration with the industrial partner of MateCat. The experimental results confirmed the impact of our proposal, with productivity gains of up to 43%. However, the method works only if the texts used as the basis for the selection of the specific data on which to perform the adaptation are representative of the document to be translated. In fact, if such a condition is not verified, as happened in our English-French/ICT experiments, the adapted models may be unable to improve on the starting ones; in any case, even in these critical conditions we did not notice any deterioration of performance, demonstrating the conservative behavior of our scheme.
| 10 |
Machine Translation
|
23_2014
| 2,014 |
Isabella Chiari, Tullio De Mauro
|
The New Basic Vocabulary of Italian as a linguistic resource
|
ENG
| 2 | 1 | 1 |
Sapienza Università di Roma
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Rome
|
The New Basic Vocabulary of Italian (NVdB) is a reference linguistic resource for contemporary Italian describing the most used and understood words of the language. The paper offers an overview of the objectives of the work, its main features and its most relevant linguistic and computational applications.
|
Core dictionaries are precious resources that represent the most widely known (in production and reception) lexemes of a language. Among the most significant features characterizing the basic vocabulary of a language are the high textual coverage of a small number of lexemes (ranging from 2,000 to 5,000 top-ranking words in frequency lists), their large polysemy, their relationship to the oldest lexical heritage of a language, and their relevance in first and second language learning and teaching and as reference tools for lexical analysis. Many recent corpus-based works have been produced to provide up-to-date core dictionaries for many European languages (e.g. the Routledge frequency dictionary series). The Italian language has a number of reference frequency lists, all of which are related to corpora and collections of texts dating to 1994 or earlier (among the most relevant Bortolini et al., 1971; Juilland and Traversa, 1973; De Mauro et al., 1993; Bertinetto et al. 2005). The Basic Vocabulary of Italian (VdB, De Mauro, 1980) first appeared as an annex to Guida all’uso delle parole and has subsequently been included in all lexicographic works directed by Tullio De Mauro, with some minor changes. VdB has benefited from a combination of statistical criteria for the selection of lemmas (both grammatical and content words), mainly based on a frequency list of written Italian, LIF (Bortolini et al., 1972), and later on a frequency list of spoken Italian, LIP (De Mauro et al., 1993), and independent evaluations further submitted to experimentation on primary school pupils. The last version of VdB was published in 2007 in an additional tome of GRADIT (De Mauro, 1999) and counts about 6,700 lemmas, organised in three vocabulary ranges. Fundamental vocabulary (FO) includes the highest frequency words that cover about 90% of all written and spoken text occurrences [appartamento ‘apartment’, commercio ‘commerce’, cosa ‘thing’, fiore ‘flower’, improvviso ‘sudden’, incontro ‘meeting’, malato ‘ill’, odiare ‘to hate’], while high usage vocabulary (AU) covers about a further 6% through the subsequent high frequency words [acciaio ‘steel’, concerto ‘concert’, fase ‘phase’, formica ‘ant’, inaugurazione ‘inauguration’, indovinare ‘to guess’, parroco ‘parish priest’, pettinare ‘to comb’]. On the contrary, high availability (AD) vocabulary is not based on textual statistical resources but is derived from experimentally verified psycholinguistic insight, and is to be understood in the tradition of the vocabulaire de haute disponibilité, first introduced in the Français fondamental project (Michéa, 1953; Gougenheim, 1964). VdB thus integrates high frequency vocabulary ranges with the so-called high availability vocabulary (haute disponibilité) and thus provides a full picture not only of written and spoken usages, but also of purely mental usages of words (commonly regarding words having a specific relationship with the concreteness of ordinary life) [abbaiare ‘to bark’, ago ‘needle’, forchetta ‘fork’, mancino ‘left-handed’, pala ‘shovel’, pescatore ‘fisherman’]. Since the first edition of VdB many things have changed in Italian society and language: Italian was then used by only 50% of the population, whereas today it is used by 95% of the population. Many things have changed in the conditions of use of the language for speakers, and the relationship between the Italian language and dialects has been deeply transformed.
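The coverage figures quoted above (about 90% for fundamental vocabulary, a further 6% for high usage vocabulary) correspond to a simple token-coverage computation over a lemmatized corpus, sketched here with an invented miniature corpus and vocabulary range.

```python
# Token coverage of a vocabulary range over a lemmatized corpus: the share of
# corpus tokens whose lemma belongs to the range. Corpus and range are toy data.
def coverage(lemmatized_tokens, vocabulary_range):
    in_range = sum(1 for lemma in lemmatized_tokens if lemma in vocabulary_range)
    return in_range / len(lemmatized_tokens)

corpus = ["il", "gatto", "mangiare", "il", "topo", "in", "cucina"]   # toy lemmas
fundamental = {"il", "mangiare", "in", "gatto"}
print(f"{coverage(corpus, fundamental):.0%}")   # 5/7 ≈ 71%
```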
The renovated version of VdB, NVdB (Chiari and De Mauro, in press), is presented and previewed in this paper. NVdB is a linguistic resource designed to meet three different purposes: a linguistic one, to be understood in both a theoretical and a descriptive sense, an educational-linguistic one, and a regulative one, for the development of guidelines in public communication. The educational objective is focused on providing a resource to develop tools for language teaching and learning, both for first and second language learners. The descriptive lexicological objective is to provide a lexical resource that can be used as a reference in evaluating the behaviour of lexemes belonging to different text typologies, taking into account the behaviour of different lexemes both from an empirical, corpus-based approach and from an experimental (intuition-based) approach, and to enable the description of the linguistic changes that have affected the most commonly known words in Italian from the Fifties up to today. The descriptive objective is tightly connected to the possible computational applications of the resource in tools able to process general language and take into account its peculiar behaviour. The regulative objective regards the use of VdB as a reference for the editing of administrative texts and, in general, of easy-to-read texts.
|
The NVdB of Italian is to be distributed as a frequency dictionary of lemmatized lexemes and multiword units, with data on coverage, frequency, dispersion, usage labels, and grammatical qualifications in all subcorpora. A linguistic analysis and comparison with previous data is also provided, with full methodological documentation. The core dictionary and data are also distributed electronically in various formats in order to be used as a reference tool for different applications. Future work will be to integrate data from the core dictionary with new lexicographic entries (glosses, examples, collocations) in order to provide a tool useful both for first and second language learners and for further computational applications.
| 7 |
Lexical and Semantic Resources and Analysis
|
24_2014
| 2,014 |
Francesca Chiusaroli
|
Sintassi e semantica dell'hashtag: studio preliminare di una forma di Scritture Brevi
|
ITA
| 1 | 1 | 1 |
Università di Macerata
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Macerata
|
The contribution presents a linguistic analysis of the hashtag category in Twitter, in particular with regard to Italian forms, in order to observe their morphosyntactic characteristics in the body of the text and also their semantic potential for a possible interpretation of the forms in a taxonomic key. The research path is articulated within the theoretical horizon defined by the concept of Scritture Brevi (Short Writings) as elaborated in Chiusaroli and Zanzotto 2012a and 2012b and now at www.scritturebrevi.it
|
In the definition of the so-called “gergo di Twitter” (Twitter jargon), the hashtag category rightfully finds its place, given the typical difficulty of immediately and practically reading a tweet posted in a form preceded by the hash sign. In particular, the presence of the hashtag, together with the mention of the account (the user address preceded by the at sign), makes the artificial structure of the Twitter text recognizable with respect to ordinary and conventional writing, since the sentence string is concretely altered by such devices, traditionally not contemplated in the orthographic rules of the standard language. This crypticness is confirmed by the easy experiment of transferring a tweet containing hashtags and accounts out of the Twitter environment, where the failure to integrate the forms is immediately perceived and the process of reading and understanding is substantially compromised. This difficulty, encountered by neophytes of the medium, appears in fact to be overcome with practice, while some special features of the hashtag cause problems with regard to the formal, or automatic, decoding of texts. This contribution aims to provide a description of the linguistic properties of the hashtag, which is the most peculiar textual element of Twitter (Chiusaroli, 2014), with particular regard to expressions in Italian. The consideration of grammatical values and semantic functions makes it possible to delineate the rules for reading the text, as well as to assess the relevance and necessity, for the analysis, of an interpretation in a taxonomic key useful for the systematic classification of this recent form of today’s web language, which stands out among the phenomena of the writing of the network (Pistolesi, 2014; Antonelli, 2007; Maraschio and De Martino, 2010; Tavosanis, 2011). The investigation path traces the hashtag back to the definition of Scritture Brevi (Short Writings) as elaborated in Chiusaroli and Zanzotto 2012a and 2012b and now at www.scritturebrevi.it: “The label Scritture Brevi is proposed as a conceptual and metalinguistic category for the classification of graphic forms such as abbreviations, acronyms, signs, icons, indices and symbols, figurative elements, text expressions and visual codes, for which the principle of ‘brevity’, connected to the criterion of ‘economy’, is decisive. In particular, it covers all the graphic manifestations that, in the syntagmatic dimension, subordinate to the principle of linearity of the signifier, alter the conventional morphosyntactic rules of the written language, and intervene in the construction of the message in terms of ‘reduction, containment, synthesis’ induced by the media and contexts. The category is applied in linguistic synchrony and diachrony, in standard and non-standard systems, in general and specialized domains.” The analysis will also take advantage of the Twitter experience gained with the account @FChiusaroli and the hashtag #scritturebrevi (since December 26, 2012) and other related hashtags (now elaborated and/or discussed at www.scritturebrevi.it).
|
The need to consider hashtag elements for both their formal and their semantic value proves indispensable for a proper assessment of linguistic products (Cann, 1993, and, for the foundations, Fillmore, 1976; Lyons, 1977; Chierchia and McConnell-Ginet, 1990), in particular, but not only, in order to be able to judge the real and concrete impact of the phenomenon of web writing on forms and uses, also in the wider perspective of diachronic change (Simone, 1993). Since the hashtag is an important isolated element and as such able to gather content and ideas, any analysis appears incomplete that does not take into account the fact that the hashtag belongs to multiple categories of the language, from the simple or compound common noun, to the simple or compound proper noun, to syntagmatic and phrasal connections, with natural consequences for morphosyntactic treatment (Grossmann, 2004; Doleschal and Thornton, 2000; Recanati, 2011). An appropriate analysis also cannot do without a semantic classification of the entries in a hierarchical and taxonomic sense (Cardona, 1980, 1985a, 1985b), i.e. one that takes into account the degrees of the relationships between the elements, the relations of synonymy or of hyperonymy and hyponymy (Basile, 2005 and Jezek, 2005), and also the purely formal, homographic and homonymic relations (Pazienza, 1999; Nakagawa and Mori, 2003; Pazienza and Pennacchiotti and Zanzotto, 2005), and finally the references in terms of correspondences in other languages (Smadja, McKeown and Hatzivassiloglou, 1996), in particular English, given its role as the driving language of the network (Crystal, 2003). If it is true that the network and knowledge in the network are formed according to procedures that are no longer linear or monodimensional, but proceed “in depth and by layers” (Eco, 2007), it seems indispensable to include in the horizon of analysis, in addition to the formal, numerical and quantitative element, also the assessment of the semantic and prototypical structure through the reconstruction of the minimal elements or “primes” of knowledge, a method well known in the history of linguistics under the term reductio (Chiusaroli, 1998 and 2001), which, among other things, lies at the origin of the search engine algorithm (Eco, 1993). The web structure and the internal organization of CMC enable us to use the Twitter hashtag as an emblematic case study to test the effectiveness of a method that combines the consideration of the functional power of the graphic string with the relevance of the semantic content plane: an intersection of different factors that must be mutually dependent for the correct verification of the data; an integrated theory of (web-)knowledge based on writing (Ong, 2002).
| 6 |
Sentiment, Emotion, Irony, Hate
|
25_2014
| 2,014 |
Morena Danieli, Giuseppe Riccardi, Firoj Alam
|
Annotation of Complex Emotions in Real-Life Dialogues: The Case of Empathy
|
ENG
| 3 | 1 | 1 |
Università di Trento
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Trento
|
In this paper we discuss the problem of annotating emotions in real-life spoken conversations by investigating the special case of empathy. We propose an annotation model based on the situated theories of emotions. The annotation scheme is directed to observe the natural unfolding of empathy during the conversations. The key component of the protocol is the identification of the annotation unit based both on linguistic and paralinguistic cues. In the last part of the paper we evaluate the reliability of the annotation model.
|
The work we present is part of a research project aiming to provide scientific evidence for the situated nature of emotional processes. In particular we investigate the case of complex social emotions, like empathy, by seeing them as relational events that are recognized by observers on the basis of their unfolding in human interactions. The ultimate goals of our research project are a) understanding the multidimensional signals of empathy in human conversations, and b) generating a computational model of basic and complex emotions. A fundamental requirement for building such computational systems is the reliability of the annotation model adopted for coding real-life conversations. Therefore, in this paper, we focus on the annotation scheme that we are using in our project by illustrating the case of empathy annotation. Empathy is often defined by metaphors that evoke the emotional or intellectual ability to identify another person’s emotional states, and/or to understand the states of mind of others. The word “empathy” was introduced into the psychological literature by Titchener in 1909 as a translation of the German term “Einfühlung”. Nowadays it is a commonly held opinion that empathy encompasses several human interaction abilities. The concept of empathy has been deeply investigated by cognitive scientists and neuroscientists, who proposed the hypothesis according to which empathy underpins the social competence of reconstructing the psychic processes of another person on the basis of a possible identification with his/her internal world and actions (Sperber & Wilson, 2002; Gallese, 2003). Despite the wide use of the notion of empathy in psychological research, the concept is still vague and difficult to measure. Among psychologists there is little consensus about which signals subjects rely on for recognizing and echoing empathic responses. The uses of the concept in computational attempts to reproduce empathic behavior in virtual agents also seem to suffer from the lack of operational definitions. Since the goal of our research is addressing the problem of automatic recognition of emotions in real-life situations, we need an operational model of complex emotions, including empathy, focused on the unfolding of emotional events. Our contribution to the design of such a model assumes that processing the discriminative characteristics of the acoustic, linguistic, and psycholinguistic levels of the signals can support the automatic recognition of empathy in situated human conversations. The paper is organized as follows: in the next Section we introduce the situated model of emotions underlying our approach, and its possible impact on emotion annotation tasks. In Section 3 we describe our annotation model, its empirical bases, and its reliability evaluation. Finally, we discuss the results of lexical feature analysis and ranking.
|
In this paper we propose a protocol for annotating complex social emotions in real-life conversations by illustrating the special case of empathy. The definition of our annotation scheme is empirically driven and compatible with the situated models of emotions. The difficult goal of annotating the unfolding of the emotional processes in conversations has been approached by capturing the transitions between neutral and emotionally connoted speech events, as those transitions manifest themselves in the melodic variations of the speech signals.
| 6 |
Sentiment, Emotion, Irony, Hate
|
26_2014
| 2,014 |
Irene De Felice, Roberto Bartolini, Irene Russo, Valeria Quochi, Monica Monachini
|
Evaluating ImagAct-WordNet mapping for English and Italian through videos
|
ENG
| 5 | 4 | 4 |
CNR-ILC
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Pisa
|
In this paper we present the results of the evaluation of an automatic mapping between two lexical resources, WordNet/ItalWordNet and ImagAct, a conceptual ontology of action types instantiated by video scenes. Results are compared with those obtained from a previous experiment performed only on Italian data. Differences between the two evaluation strategies, as well as between the quality of the mappings for the two languages considered in this paper, are discussed.
|
In lexicography, the meaning of words is represented through words: definitions in dictionaries try to make clear the denotation of lemmas, reporting examples of linguistic usage that are fundamental especially for function words like prepositions. Corpus linguistics derives definitions from a huge amount of data. This operation improves word meaning induction and refinement, but still supports the view that words can be defined by words. In the last 20 years dictionaries and lexicographic resources such as WordNet have been enriched with multimodal content (e.g. illustrations, pictures, animations, videos, audio files). Visual representations of denotative words like concrete nouns are effective: see for example the ImageNet project, which enriches WordNet's glosses with pictures taken from the web. Conveying the meaning of action verbs with static representations is not possible; for such cases the use of animations and videos has been proposed (Lew 2010). Short videos depicting basic actions can support the user's need (especially in second language acquisition) to understand the range of applicability of verbs. In this paper we describe the multimodal enrichment of ItalWordNet and WordNet 3.0 action verb entries by means of an automatic mapping with ImagAct (www.imagact.it), a conceptual ontology of action types instantiated by video scenes (Moneglia et al. 2012). Through the connection between synsets and videos we want to illustrate the meaning described by glosses, specifying when the video represents a more specific or a more generic action with respect to the one described by the gloss. We evaluate the mapping by watching videos and then finding out which, among the synsets related to the video, best describes the action performed.
|
Mutual enrichment of lexical resources is convenient, especially when different kinds of information are available. In this paper we describe the mapping between ImagAct videos representing action verbs' meanings and WordNet/ItalWordNet, in order to enrich the glosses multimodally. Two types of evaluation have been performed, one based on a gold standard that establishes correspondences between ImagAct's basic action types and ItalWordNet's synsets (Bartolini et al. 2014) and the other based on the suitability of a synset's gloss to describe the action watched in the videos. The second type of evaluation suggests that for Italian the automatic mapping is effective in projecting the videos onto ItalWordNet's glosses. As regards the mapping for English, as future work we plan to change the settings, in order to test whether the number of synonyms available in WordNet has a negative impact on the quality of the mapping.
| 7 |
Lexical and Semantic Resources and Analysis
|
27_2014
| 2,014 |
Irene De Felice, Margherita Donati, Giovanna Marotta
|
CLaSSES: a new digital resource for Latin epigraphy
|
ENG
| 3 | 3 | 1 |
Università di Pisa, CNR-ILC
| 2 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Pisa
|
CLaSSES (Corpus for Latin Sociolinguistic Studies on Epigraphic textS) is an annotated corpus for quantitative and qualitative sociolinguistic analyses of Latin inscriptions. It allows specific research on phonological and morphophonological phenomena of non-standard Latin forms with crucial reference to the typology of the text, its origin and chronological collocation. This paper presents the first macrosection of CLaSSES, focused on the inscriptions from the archaic-early period.
|
Available digital resources for Latin epigraphy include some important databases. The Clauss-Slaby database (http://www.manfredclauss.de/gb/index.html) records almost all Latin inscriptions (by now 696,313 sets of data for 463,566 inscriptions from over 2,480 publications), including also some pictures. It can be searched by records, province, place and specific terms, thus providing users with quantitative information. The Epigraphic Database Roma EDR (http://www.edr-edr.it/English/index_en.php) is part of the international federation of epigraphic databases called Electronic Archive of Greek and Latin Epigraphy (EAGLE). It is possible to look through EDR both as a single database or together with its partner databases by accessing EAGLE's portal (www.eagle-eagle.it). Although they collect a large amount of data, these resources cannot provide linguists with rich qualitative and quantitative linguistic information focused on specific phenomena. The need for a different kind of information automatically extracted from epigraphic texts is particularly pressing when dealing with sociolinguistic issues. There is a current debate on whether or not inscriptions can provide direct evidence of actual linguistic variation occurring in Latin society. As Herman (1985) points out, the debate on the linguistic representativity of inscriptions alternates between totally skeptical and too optimistic approaches. Following Herman (1970, 1978a, 1978b, 1982, 1985, 1987, 1990, 2000), we believe that epigraphic texts can be regarded as a fundamental source for studying variation phenomena, provided that one adopts a critical approach. Therefore, we cannot entirely agree with the skeptical view adopted by Adams (2013: 33-34), who denies the role of inscriptions as a source for sociolinguistic variation in the absence of evidence also from metalinguistic comments by grammarians and literary authors. That said, the current state-of-the-art digital resources for Latin epigraphic texts do not allow researchers to evaluate the relevance of inscriptions for a sociolinguistic study that would like to rely on direct evidence. Furthermore, it is worth noting that within the huge amount of epigraphic texts available for the Latin language not every inscription is equally significant for linguistic studies: e.g., many inscriptions are very short or fragmentary, others are manipulated or intentionally archaising. Obviously, a (socio)linguistic approach to epigraphic texts should take into account only linguistically significant texts.
|
CLaSSES is an epigraphic Latin corpus for quantitative and qualitative sociolinguistic analyses of Latin inscriptions, which can be useful for both historical linguists and philologists. It is annotated with linguistic and metalinguistic features which allow specific queries on different levels of non-standard Latin forms. We have presented here the first macrosection of CLaSSES, containing inscriptions from the archaic-early period. In the near future we will collect comparable sub-corpora for the Classical and the Imperial period. Moreover, the data will be organized in a database available on the web.
| 5 |
Latin Resources
|
28_2014
| 2,014 |
Jose' Guilherme Camargo De Souza, Marco Turchi, Matteo Negri, Antonios Anastasopoulos
|
Online and Multitask learning for Machine Translation Quality Estimation in Real-world scenarios
|
ENG
| 4 | 0 | 0 |
Fondazione Bruno Kessler, Università di Trento, University of Notre Dame
| 3 | 1 | 0 | 1 |
Antonios Anastasopoulos
| 0 |
0
|
Indiana (USA), Italy
|
Notre Dame, Trento
|
We investigate the application of different supervised learning approaches to machine translation quality estimation in realistic conditions where training data are not available or are heterogeneous with respect to the test data. Our experiments are carried out with two techniques: online and multitask learning. The former is capable of learning and self-adapting to user feedback, and is suitable for situations in which training data is not available. The latter is capable of learning from data coming from multiple domains, which might considerably differ from the actual testing domain. Two focused experiments in such challenging conditions indicate the good potential of the two approaches.
|
Quality Estimation (QE) for Machine Translation (MT) is the task of estimating the quality of a translated sentence at run-time and without access to reference translations (Specia et al., 2009; Soricut and Echihabi, 2010; Bach et al., 2011; Specia, 2011; Mehdad et al., 2012; C. de Souza et al., 2013; C. de Souza et al., 2014a). As a quality indicator, in a typical QE setting, automatic systems have to predict either the time or the number of editing operations (e.g. in terms of HTER) required by a human to transform the translation into a syntactically/semantically correct sentence. In recent years, QE has gained increasing interest in the MT community as a possible way to: i) decide whether a given translation is good enough for publishing as is, ii) inform readers of the target language only whether or not they can rely on a translation, iii) filter out sentences that are not good enough for post-editing by professional translators, or iv) select the best translation among options from multiple MT and/or translation memory systems. So far, despite its many possible applications, QE research has been mainly conducted in controlled lab testing scenarios that disregard some of the possible challenges posed by real working conditions. Indeed, the large body of research resulting from three editions of the shared QE task organized within the yearly Workshop on Machine Translation (WMT – Callison-Burch et al., 2012; Bojar et al., 2013; Bojar et al., 2014) has relied on simplistic assumptions that do not always hold in real life. These assumptions include the idea that the data available to train QE models is: i) large (WMT systems are usually trained over datasets of 800/1000 instances) and ii) representative (WMT training and test sets are always drawn from the same domain and are uniformly distributed). In order to investigate the difficulties of training a QE model in realistic scenarios where such conditions might not hold, in this paper we approach the task in situations where: i) training data is not available at all (Section 2), and ii) training instances come from different domains (Section 3). In these two situations, particularly challenging from the machine learning perspective, we investigate the potential of online and multitask learning methods (the former for dealing with the lack of data, and the latter to cope with data heterogeneity), comparing them with the batch methods currently used.
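To make the online setting above concrete, here is a minimal sketch of how a QE model could update its weights sentence by sentence from post-editor feedback, using a generic passive-aggressive regression rule; the update rule, the epsilon tolerance and the three toy features are illustrative assumptions, not the exact algorithm or feature set used in the paper.

```python
# Illustrative online regression for HTER prediction (passive-aggressive style update).
# Feature choice and hyper-parameters are hypothetical placeholders.
import numpy as np

class OnlineQE:
    def __init__(self, n_features, epsilon=0.05, C=1.0):
        self.w = np.zeros(n_features)   # weight vector
        self.epsilon = epsilon          # tolerated prediction error
        self.C = C                      # aggressiveness bound (PA-II style)

    def predict(self, x):
        return float(np.dot(self.w, x))

    def update(self, x, y_true):
        """Update weights after receiving the true HTER for one sentence."""
        loss = max(0.0, abs(self.predict(x) - y_true) - self.epsilon)
        if loss > 0.0:
            tau = loss / (np.dot(x, x) + 1.0 / (2.0 * self.C))
            self.w += tau * np.sign(y_true - self.predict(x)) * x

# toy usage: 3 hypothetical features (e.g. source length, target length, LM score)
model = OnlineQE(n_features=3)
stream = [(np.array([20.0, 22.0, -35.0]), 0.30),
          (np.array([10.0, 11.0, -12.0]), 0.10)]
for features, hter in stream:
    print("predicted:", round(model.predict(features), 3), "observed:", hter)
    model.update(features, hter)
```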
|
We investigated the problem of training reliable QE models in particularly challenging conditions from the learning perspective. Two focused experiments have been carried out by applying: i) online learning to cope with the lack of training data, and ii) multitask learning to cope with heterogeneous training data. The positive results of our experiments suggest that the two paradigms should be further explored (and possibly combined) to overcome the limitations of current methods and make QE applicable in real-world scenarios.
| 10 |
Machine Translation
|
29_2014
| 2,014 |
Rodolfo Delmonte
|
A Computational Approach to Poetic Structure, Rhythm and Rhyme
|
ENG
| 1 | 0 | 0 |
Università Ca' Foscari Venezia
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Venice
|
In this paper we present SPARSAR, a system for the automatic analysis of English and Italian poetry. The system can work on any type of poem and produces a set of parameters that are then used to compare poems with one another, of the same author or of different authors. In this paper, we will concentrate on the second module, which is a rule-based system to represent and analyze poetic devices. Evaluation of the system on the basis of a manually created dataset - including poets from Shakespeare's time down to T.S. Eliot and Sylvia Plath - has shown its high precision and accuracy, approximating 90%.
|
In this paper we present SPARSAR, a system for the automatic analysis of English and Italian poetry. The system can work on any type of poem and produces a set of parameters that are then used to compare poems with one another, of the same author or of different authors. The output can be visualized as a set of coloured boxes of different length and width and allows a direct comparison between poems and poets. In addition, the parameters produced can be used to find the most similar candidate poems by different authors by means of Pearson's correlation coefficient. The system uses a modified version of VENSES, a semantically oriented NLP pipeline (Delmonte et al., 2005). It is accompanied by a module that works at sentence level and produces a whole set of analyses at the quantitative, syntactic and semantic levels. The second module is a rule-based system that converts each poem into phonetic characters, divides words into stressed/unstressed syllables and computes rhyming schemes at line and stanza level. To this end it uses grapheme-to-phoneme translations made available by different sources, amounting to some 500K entries, including the CMU dictionary, the MRC Psycholinguistic Database, the Celex Database, plus our own database made of some 20,000 entries. Out-of-vocabulary words are computed by means of a prosodic parser we implemented in a previous project (Bacalu & Delmonte, 1999a,b). The system has no limitation on the type of poetic and rhetoric devices, however it is language-dependent: Italian line verse requires a certain number of beats and metric accents which are different from the ones contained in an English iambic pentameter. The rules implemented can demote or promote word stress on a certain syllable depending on the selected language, line-level syllable length and contextual information. This includes knowledge about a word being part of a dependency structure either as dependent or as head. A peculiar feature of the system is the use of prosodic measures of syllable durations in msec, taken from a database created in a previous project (Bacalu & Delmonte, 1999a,b). We produce a theoretic prosodic measure for each line and stanza using mean durational values associated with stressed/unstressed syllables. We call this index the "prosodic-phonetic density index", because it combines the count of phones with the count of theoretic durations: the index is intended to characterize the real speakable and audible consistency of each line of the poem. Statistics are computed at different levels to evaluate distributional properties in terms of standard deviation, skewness and kurtosis. The final output of the system is a parameterized version of the poem which is then read aloud by a TTS system: parameters are generated taking into account all previous analyses, including sentiment or affective analysis and discourse structure, with the aim of producing an expressive reading. This paper extends previous conference and demo work (SLATE, Essem, EACL) and concentrates on the second module, which focuses on poetic rhythm. The paper is organized as follows: Section 2 presents the main features of the prosodic-phonetic system with some examples; we then present a conclusion and future work.
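One possible reading of the prosodic-phonetic density index described above (count of phones plus theoretic syllable durations) is sketched below; the mean durations and the pre-syllabified toy line are invented placeholders, not the databases actually used by SPARSAR.

```python
# Sketch of a "prosodic-phonetic density" score for a verse line, read as:
# number of phones plus the sum of theoretic mean syllable durations (msec).
# Duration values and the toy syllabified input are invented placeholders.
MEAN_DURATION_MS = {"stressed": 220, "unstressed": 140}   # hypothetical means

def density_index(syllables):
    """syllables: list of (phone string, is_stressed) tuples for one line."""
    phone_count = sum(len(phones) for phones, _ in syllables)   # naive: one char = one phone
    duration = sum(MEAN_DURATION_MS["stressed" if stressed else "unstressed"]
                   for _, stressed in syllables)
    return phone_count + duration

line = [("S@", False), ("lONG", True), ("az", False), ("mEn", True)]  # toy SAMPA-like syllables
print(density_index(line))
```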
|
We have carried out a manual evaluation by analysing a randomly chosen sample of 50 poems out of the 500 analysed by the system. The evaluation has been made by a secondary school teacher of English literature, expert in poetry. We asked the teacher to verify the following four levels of analysis: 1. phonetic translation; 2. syllable division; 3. feet grouping; 4. metrical rhyming structure. Results show an overall error rate of around 5% across the four levels of analysis, subdivided as follows: 1.8% for parameter 1; 2.1% for parameter 2; 0.3% for parameter 3; 0.7% for parameter 4.
| 6 |
Sentiment, Emotion, Irony, Hate
|
30_2014
| 2,014 |
Rodolfo Delmonte
|
A Reevaluation of Dependency Structure Evaluation
|
ENG
| 1 | 0 | 0 |
Università Ca' Foscari Venezia
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Venice
|
In this paper we develop the argument indirectly raised by the organizers of the 2014 Dependency Parsing for Information Extraction task when they classified 19 relations out of 45 as semantically relevant for the evaluation and excluded the others. This confirms our stance, which regards the new paradigm of dependency parsing evaluation as favoured in comparison with the previous parsing evaluation scheme, based mainly on constituent or phrase structure. We also argue in favour of rule-based dependency parsing and against statistical dependency parsers, for reasons related to the role played by the SUBJect relation in Italian.
|
In this paper I will question the currently widespread assumption that Dependency Structures (hence DS) are the most convenient syntactic representation, when compared to phrase or constituent structure. I will also claim that the evaluation metrics applied to DS are somehow "boosting" its performance with respect to phrase structure (hence PS) representation, without a real advantage, or at least it has not yet been proven that there is one. In fact, a first verification has been achieved by this year's Evalita campaign, which has introduced a new way of evaluating Dependency Structures, called DS for Information Extraction - and we will comment on that below. In the paper I will also argue that some features of current statistical dependency parsers speak against the use of such an approach for the parsing of languages like Italian, which have a high percentage of non-canonical structures (hence NC). In particular I will focus on problems raised by the way in which SUBJect arguments are encoded. State-of-the-art systems are using more and more dependency representations, which have lately shown great resiliency, robustness, scalability and great adaptability for semantic enrichment and processing. However, by far the majority of systems available off the shelf don't support a fully semantically consistent representation and lack Empty or Null Elements (see Cai et al. 2001). O. Rambow (2010), in his opinion paper on the relations between dependency and phrase structure representation, has omitted to mention the most important feature that differentiates them. PS evaluation is done on the basis of brackets, where each bracket contains at least one HEAD, but it may contain other heads nested inside. Of course, it may also contain a certain number of minor categories, which however don't count for evaluation purposes. On the contrary, DS evaluation is done on the basis of the head-dependent relations intervening between pairs of TOKENs. So on the one side, the F-measure evaluates the number of brackets, which coincides with the number of heads; on the other side it evaluates the number of TOKENs. Now, the difference in performance is clearly shown by the percent accuracy obtained with PS evaluation, which for Italian was contained in a range between 70% and 75% in Evalita 2007, and between 75% and 80% in Evalita 2009 - I don't take into account the 2011 results, which refer to only one participant. DS evaluation reached peaks of 95% for UAS and between 84% and 91% for LAS. Since the data were the same for the two campaigns, one wonders what makes one representation more successful than the other. Typically, constituent parsing is evaluated on the basis of constituents, which are made up of a head and an internal sequence of minor constituents dependent on the head. What is really important in the evaluation is the head of each constituent and the way in which PS are organized, and this corresponds to bracketing. On the contrary, DS are organized on the basis of a "word level grammar", so that each TOKEN contributes to the overall evaluation, including punctuation (not always). Since minor categories are by far the great majority of the tokens making up a sentence - in Western languages, but not so in Chinese, for instance (see Yang & Xue, 2010) - the evaluation is basically made on the ability of the parser to connect minor categories to their heads.
What speaks in favour of adopting DS is the clear advantage gained in the much richer number of labeled relations which intervene at word level, when compared to the number of constituent labels used to annotate PS relations. It is worth noting that DS is not only a much richer representation than PS, but it encompasses different levels of linguistic knowledge. For instance, punctuation may be used to indicate appositions, parentheticals, coordinated sets, elliptical material, or the subdivision of complex sentences into main and subordinate clauses. The same applies to discourse markers, which may be the ROOT of a sentence. These have all to be taken into account when computing DS but not with PS parsing.
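For readers unfamiliar with the token-level metrics mentioned above, the following sketch computes UAS and LAS on a toy sentence: UAS counts tokens whose head is correct, LAS additionally requires the correct relation label (the sentence, heads and labels are invented).

```python
# Worked example of the token-level dependency metrics discussed above.
def uas_las(gold, pred):
    """gold, pred: one (head, label) pair per token."""
    assert len(gold) == len(pred)
    head_ok = sum(1 for g, p in zip(gold, pred) if g[0] == p[0])
    label_ok = sum(1 for g, p in zip(gold, pred) if g == p)
    n = len(gold)
    return head_ok / n, label_ok / n

# toy 5-token sentence: heads are token indices (0 = root)
gold = [(2, "SUBJ"), (0, "ROOT"), (2, "OBJ"), (3, "MOD"), (2, "PUNCT")]
pred = [(2, "OBJ"),  (0, "ROOT"), (2, "OBJ"), (2, "MOD"), (2, "PUNCT")]
uas, las = uas_las(gold, pred)
print(f"UAS = {uas:.2f}, LAS = {las:.2f}")  # UAS = 0.80, LAS = 0.60
```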
|
In this paper I tried to highlight critical issues in the current way of evaluating DS, which indirectly "boosts" the performance of the parsers when compared to phrase structure evaluation. I assume this is due to the inherent shortcoming of DS evaluation of not considering semantically relevant grammatical relations as more important than minor categories. Statistical dependency parsers may have more problems in encoding the features of the Italian Subject because of its multiple free representations. For these reasons, I argued in favour of rule-based dependency parsers and presented, in particular, one example from TULETUT, a deep parser of Italian.
| 4 |
Syntax and Dependency Treebanks
|
31_2014
| 2,014 |
Rodolfo Delmonte
|
Analisi Linguistica e Stilostatistica - Uno Studio Predittivo sul Campo
|
ITA
| 1 | 0 | 0 |
Università Ca' Foscari Venezia
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Venice
|
In this work we present a field study aimed at defining a precise evaluation scheme for text style, which has been used to establish a ranking of various documents based on their persuasiveness and readability. The study concerns the political programs published on a public forum by the candidates in an election at Ca' Foscari University of Venice. The documents were analyzed by our system and a ranking was created on the basis of the scores associated with eleven parameters. After the vote, we compiled the ranking and discovered that the system had predicted the name of the actual winner in advance. The results were published in a local newspaper.
|
The analysis starts from the idea that the style of a programmatic document is composed of quantitative elements at the word level, of elements derived from the frequent use of certain syntactic structures, and of purely semantic and pragmatic characteristics, such as the use of words and concepts that inspire positivity. I conducted the analysis starting from the texts available on the web or received from the candidates, using a series of parameters that I created for the analysis of political speech in Italian newspapers during the last and the previous government crisis. The results are published in national and international works listed in a brief bibliography. The analysis uses classical quantitative data, such as the type/token ratio, and then introduces information derived from the GETARUNS system, which performs a complete parsing of the texts from a syntactic, semantic and pragmatic point of view. The data listed in the tables are derived from the system output files. The system produces a file for each sentence, a comprehensive file for the semantic analysis of the text, and a file with the verticalized version of the analyzed text, where each word is accompanied by a syntactic-semantic-pragmatic classification. The system consists of a transition-network parser augmented with subcategorization information, which first builds the chunks and then, in cascade, the higher complex constituents up to the sentence level. This representation is passed to another module that works on islands, starting from each verbal complex, corresponding to the verbal constituent. The island parser identifies the predicate-argument structure, including adjuncts, on the basis of the information contained in a subcategorization lexicon for Italian built in previous projects, containing approximately 50,000 verbal and adjectival entries at different levels of depth. A list of selection preferences for verbs, nouns and adjectives, derived from the available Italian treebanks and containing approximately 30,000 entries, is also used. In Table 1, we report the data in absolute numbers.
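A minimal sketch of two of the quantitative measures mentioned above, the type/token ratio and the count of rare words (hapax/dis/tris legomena); the naive tokenization and the sample sentence are illustrative only, not the GETARUNS pipeline.

```python
# Two simple quantitative profile measures: type/token ratio and rare-word count.
from collections import Counter
import re

def quantitative_profile(text):
    tokens = re.findall(r"\w+", text.lower())   # deliberately naive tokenization
    counts = Counter(tokens)
    types = len(counts)
    rare = sum(1 for c in counts.values() if c <= 3)   # hapax/dis/tris legomena
    return {"tokens": len(tokens),
            "types": types,
            "type_token_ratio": types / len(tokens),
            "rare_words": rare}

sample = "Il programma propone idee concrete e idee nuove per la didattica."
print(quantitative_profile(sample))
```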
|
To produce an overall ranking, each parameter can be considered as having a positive or a negative polarity. If it is positive, the candidate with the highest value is rewarded with 5 points and the others, in decreasing order, with one point less each, down to 1. If the parameter has a negative polarity, the candidate with the highest value receives the lowest score, 1, and the others, in increasing order, one point more each, up to 5. The overall ranking is then obtained by summing all the individual scores. The attribution of polarity to each parameter follows linguistic and stylistic criteria, as follows: 1. NullSubject - positive: a higher number of null subjects indicates the will to create a very coherent text and not to overload the reference to the same entity with repeated or coreferential forms. 2. Subjective Props - negative: a majority of propositions expressing subjective content indicates the author's tendency to expose his own ideas in a non-objective way. 3. Negative Props - negative: extensive use of negative propositions, i.e. containing negations or negative adverbs, is a stylistic trait that is not propositive but tends to contradict what has been said or done by others. 4. Non-factive Props - negative: the use of non-factive propositions indicates the stylistic tendency to expose one's ideas using unrealistic verbal tenses and moods - subjunctive, conditional, future and indefinite tenses. 5. Props/Sents - negative: the ratio of propositions per sentence is considered negative, in the sense that the higher it is, the greater the complexity of the style. 6. Negative Ws - negative: the number of negative words used, in proportion to the total number of words, has a negative value. 7. Positive Ws - positive: the number of positive words used, in proportion to the total number of words, has a positive value. 8. Passive Diath - negative: the number of passive forms used is considered negative, as they obscure the agent of the described action. 9. Token/Sents - negative: the number of tokens per sentence is treated as a negative factor, again in relation to the induced complexity. 10. Vr - Rw - negative: this measure considers vocabulary richness based on the so-called Rare Words, i.e. the total number of Hapax/Dis/Tris Legomena in the rank list; the more unique or infrequent words there are, the more complex the style. 11. Vr - Tt - negative: as above, but considering the total number of Types. Assigning scores on the basis of these criteria produces the following final ranking: Bugliesi 47, LiCalzi 36, Brugiavini 28, Cardinaletti 27, Bertinetti 27 (Table 2: final ranking based on the 11 parameters; see Tab. 2.1 in Appendix 2). If we also include the points relating to the use of PERSONALE and of names, the overall result is: Bugliesi 53, LiCalzi 44, Brugiavini 37, Cardinaletti 31, Bertinetti 30 (Table 3: final ranking based on the 13 parameters; see Tab. 3.1 in Appendix 2). Using the parameters as elements of judgment to classify the style of the candidates, and turning them into a verbal assessment, the two following judgments are obtained. 1. Bugliesi won because he used a more coherent style, with a simpler vocabulary and simple and direct syntactic structures, expressing the contents in a concrete and factual way, and addressing stakeholders at all levels, teachers and non-teachers.
He also used fewer negative expressions and sentences and more positive expressions. The data also tell us that Bugliesi's program is strongly correlated with LiCalzi's, but not with those of the other candidates. 2. Cardinaletti wrote a program that uses a less coherent style, with a somewhat elaborate vocabulary and rather more complex syntactic structures, expressing the contents in a much less concrete and much less factual way, while addressing stakeholders at all levels, teachers and non-teachers. Cardinaletti also used few negative expressions and sentences and relatively few positive expressions. Finally, Cardinaletti's program correlates well with Brugiavini's.
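The rank-based scoring described above can be sketched as follows; the candidate values below are invented, and the point scale simply equals the number of candidates (5 when there are five candidates, as in the paper's setting).

```python
# Sketch of the rank-based scoring: for a positive parameter the candidate with the
# highest value gets the top score down to 1; for a negative parameter the scale is
# reversed. Candidate values are invented, not the paper's data.
def rank_scores(values, positive=True):
    """values: dict candidate -> raw value for one parameter."""
    ordered = sorted(values, key=values.get, reverse=positive)
    top = len(ordered)                      # 5 when there are five candidates
    return {cand: top - i for i, cand in enumerate(ordered)}

def overall_ranking(parameters):
    """parameters: list of (candidate_values, is_positive) pairs."""
    totals = {}
    for values, positive in parameters:
        for cand, pts in rank_scores(values, positive).items():
            totals[cand] = totals.get(cand, 0) + pts
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

params = [
    ({"A": 12, "B": 9, "C": 4}, True),    # e.g. NullSubject (positive polarity)
    ({"A": 3,  "B": 7, "C": 5}, False),   # e.g. Negative Props (negative polarity)
]
print(overall_ranking(params))            # e.g. [('A', 6), ('B', 3), ('C', 3)]
```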
| 11 |
Text Simplification
|
32_2014
| 2,014 |
Marina Ermolaeva
|
An adaptable morphological parser for agglutinative languages
|
ENG
| 1 | 1 | 1 |
Moscow State University
| 1 | 1 | 1 | 1 |
Marina Ermolaeva
| 0 |
0
|
Russia
|
Moscow
|
The paper reports the state of ongoing work on creating an adaptable morphological parser for various agglutinative languages. A hybrid approach involving methods typically used for non-agglutinative languages is proposed. We explain the design of a working prototype for inflectional nominal morphology and demonstrate its operation with an implementation for Turkish. An additional experiment of adapting the parser to Buryat (Mongolic family) is discussed.
|
The most obvious way to perform morphological parsing is to make a list of all possible morphological variants of each word. This method has been successfully used for non-agglutinative languages, e.g. (Segalovich 2003) for Russian, Polish and English. Agglutinative languages pose a much more complex task, since the number of possible forms of a single word is theoretically infinite (Jurafsky and Martin 2000). Parsing languages like Turkish often involves designing complicated finite-state machines where each transition corresponds to a single affix (Hankamer 1986; Eryiğit and Adalı 2004; Çöltekin 2010; Sak et al. 2009; Sahin et al. 2013). While these systems can perform extremely well, considerable redesigning of the whole system is required in order to implement a new language or to take care of a few more affixes. The proposed approach combines both methods mentioned above. A simple finite-state machine makes it possible to split up the set of possible affixes, producing a finite and relatively small set of sequences that can be easily stored in a dictionary. Most systems created for parsing agglutinative languages, starting with (Hankamer 1986) and (Oflazer 1994), process words from left to right: first stem candidates are found in a lexicon, then the remaining part is analyzed. The system presented in this paper applies the right-to-left method (cf. (Eryiğit and Adalı 2004)): affixes are found in the first place. It can ultimately work without a lexicon, in which case the remaining part of the word is assumed to be the stem; to improve the precision of parsing, it is possible to compare it to stems contained in a lexicon. A major advantage of right-to-left parsing is the ability to process words with unknown stems without additional computations. Multi-language systems (Akın and Akın 2007; Arkhangelskiy 2012) are a relatively new tendency. With the hybrid approach mentioned above, the proposed system fits within this trend. As the research is still in progress, the working prototype of the parser (written in Python) is currently restricted to nominal inflectional morphology. Within this scope, it has been implemented for Turkish; an additional experiment with Buryat is discussed in Section 5.
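A minimal sketch of the right-to-left idea described above: affix sequences are stored in a dictionary and matched at the end of the word, so that the remainder is taken as a (possibly unknown) stem. The tiny affix table covers only a couple of Turkish nominal suffix sequences and is a simplified assumption, not the prototype's actual slot system.

```python
# Right-to-left affix stripping with a dictionary of known affix sequences.
AFFIX_SEQUENCES = {
    "lerde": ["PL", "LOC"],    # -ler (plural) + -de (locative)
    "larda": ["PL", "LOC"],
    "ler":   ["PL"],
    "lar":   ["PL"],
    "de":    ["LOC"],
    "da":    ["LOC"],
}

def parse_right_to_left(word, lexicon=None):
    analyses = []
    for suffix, gloss in AFFIX_SEQUENCES.items():
        if word.endswith(suffix):
            stem = word[:-len(suffix)]
            known = lexicon is None or stem in lexicon   # lexicon check is optional
            if stem and known:
                analyses.append((stem, gloss))
    analyses.append((word, []))   # the bare word as a stem is always a possible analysis
    return analyses

print(parse_right_to_left("evlerde"))
# e.g. [('ev', ['PL', 'LOC']), ('evler', ['LOC']), ('evlerde', [])]
```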
|
At the moment, the most important task is lifting the temporary limitations of the parser by implementing other parts of speech (finite and non-finite verb forms, pronouns, postpositions etc.) and derivational suffixes. Although the slot system described in 3.1 has been sufficient for both Turkish and Buryat, other agglutinative languages may require more flexibility. This can be achieved either by adding more slots (thus making the slot system nearly universal) or by providing a way to derive the slot system automatically, from plain text or a corpus of tagged texts; the latter solution would also considerably reduce the amount of work that has to be done manually. Another direction of future work involves integrating the parser into a more complex system. DIRETRA, an engine for Turkish-to-English direct translation, is being developed on the basis of the parser. The primary goal is to provide a word-for-word translation of a given text, reflecting the morphological phenomena of the source language as precisely as possible. The gloss lines output by the parser are processed by the other modules of the system and ultimately transformed into text representations in the target language. Though the system is being designed for Turkish, the next step planned is to implement other Turkic languages as well.
| 7 |
Lexical and Semantic Resources and Analysis
|
33_2014
| 2,014 |
Lorenzo Ferrone, Fabio Massimo Zanzotto
|
Distributed Smoothed Tree Kernel
|
ENG
| 2 | 0 | 0 |
Università di Roma Tor Vergata
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Rome
|
In this paper we explore the possibility of merging the world of Compositional Distributional Semantic Models (CDSM) with Tree Kernels (TK). In particular, we introduce a specific tree kernel (smoothed tree kernel, or STK) and then show that it is possible to approximate such a kernel with the dot product of two vectors obtained compositionally from the sentences, creating in such a way a new CDSM.
|
Compositional distributional semantics is a flourishing research area that leverages distributional semantics (see Baroni and Lenci (2010)) to produce the meaning of simple phrases and full sentences (hereafter called text fragments). The aim is to scale up the success of word-level relatedness detection to longer fragments of text. Determining similarity or relatedness among sentences is useful for many applications, such as multi-document summarization, recognizing textual entailment (Dagan et al., 2013), and semantic textual similarity detection (Agirre et al., 2013; Jurgens et al., 2014). Compositional distributional semantics models (CDSMs) are functions mapping text fragments to vectors (or higher-order tensors). Functions for simple phrases directly map distributional vectors of words to distributional vectors for the phrases (Mitchell and Lapata, 2008; Baroni and Zamparelli, 2010; Zanzotto et al., 2010). Functions for full sentences are generally defined as recursive functions over the ones for phrases (Socher et al., 2011). Distributional vectors for text fragments are then used as inner layers in neural networks, or to compute similarity among text fragments via dot product. CDSMs generally exploit structured representations tx of text fragments x to derive their meaning f(tx), but the structural information, although extremely important, is obfuscated in the final vectors. Structure and meaning can interact in unexpected ways when computing cosine similarity (or dot product) between vectors of two text fragments, as shown for full additive models in (Ferrone and Zanzotto, 2013). Smoothed tree kernels (STK) (Croce et al., 2011) instead realize a clearer interaction between structural information and distributional meaning. STKs are specific realizations of convolution kernels (Haussler, 1999) where the similarity function is recursively (and, thus, compositionally) computed. Distributional vectors are used to represent word meaning in computing the similarity among nodes. STKs, however, are not considered part of the CDSM family. As usual in kernel machines (Cristianini and Shawe-Taylor, 2000), STKs directly compute the similarity between two text fragments x and y over their tree representations tx and ty, that is, STK(tx, ty). The function f that maps trees into vectors is only implicitly used, and, thus, STK(tx, ty) is not explicitly expressed as the dot product or the cosine between f(tx) and f(ty). Such a function f, which is the underlying reproducing function of the kernel (Aronszajn, 1950), is a CDSM, since it maps trees to vectors by using distributional meaning. However, the huge dimensionality of Rn (since it has to represent the set of all possible subtrees) prevents one from actually computing the function f(t), which thus can only remain implicit. Distributed tree kernels (DTK) (Zanzotto and Dell'Arciprete, 2012) partially solve the last problem. DTKs approximate standard tree kernels (such as (Collins and Duffy, 2002)) by defining an explicit function DT that maps trees to vectors in Rm, where m ≪ n and Rn is the explicit space for tree kernels. DTKs approximate standard tree kernels (TK), that is, <DT(tx), DT(ty)> ≈ TK(tx, ty), by approximating the corresponding reproducing function. Thus, these distributed trees are small vectors that encode structural information. In DTKs tree nodes u and v are represented by nearly orthonormal vectors, that is, vectors u and v such that <u, v> ≈ δ(u, v), where δ is the Kronecker delta.
This is in contrast with distributional semantics vectors, where <u, v> is allowed to be any value in [0, 1] according to the similarity between the words v and u. In this paper, leveraging on distributed trees, we present a novel class of CDSMs that encode both structure and distributional meaning: the distributed smoothed trees (DST). DSTs carry structure and distributional meaning on a rank-2 tensor (a matrix): one dimension encodes the structure and one dimension encodes the meaning. By using DSTs to compute the similarity among sentences with a generalized dot product (or cosine), we implicitly define the distributed smoothed tree kernels (DSTK), which approximate the corresponding STKs. We present two DSTs along with the two smoothed tree kernels (STKs) that they approximate. We experiment with our DSTs to show that their generalized dot products approximate STKs, by directly comparing the produced similarities and by comparing their performances on two tasks: recognizing textual entailment (RTE) and semantic textual similarity detection (STS). Both experiments show that the dot product on DSTs approximates STKs and, thus, DSTs encode both structural and distributional semantics of text fragments in tractable rank-2 tensors. Experiments on STS and RTE show that the distributional semantics encoded in DSTs increases performance over structure-only kernels. DSTs are the first positive way of taking into account both structure and distributional meaning in CDSMs. The rest of the paper is organized as follows. Section 2.1 introduces the basic notation used in the paper. Section 2 describes our distributed smoothed trees as compositional distributional semantic models that can represent both structural and semantic information. Section 4 reports on the experiments. Finally, Section 5 draws some conclusions.
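A quick numeric illustration of the "nearly orthonormal" property invoked above: random high-dimensional unit vectors have pairwise dot products that approximate the Kronecker delta, which is what allows small distributed vectors to approximate tree-kernel counts. The dimensionality and number of node labels below are arbitrary choices, not the paper's settings.

```python
# Random high-dimensional unit vectors are nearly orthonormal:
# <u, u> = 1 and <u, v> ~ 0 for u != v.
import numpy as np

rng = np.random.default_rng(0)
m, nodes = 4096, 50                              # vector size, number of node labels
V = rng.standard_normal((nodes, m))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # unit-length node vectors

gram = V @ V.T
diag_mean = np.diag(gram).mean()                 # ~1.0
off_max = np.abs(gram - np.eye(nodes)).max()     # small
print(f"mean <u,u> = {diag_mean:.3f}, max |<u,v>| for u != v = {off_max:.3f}")
```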
|
Distributed Smoothed Trees (DST) are a novel class of Compositional Distributional Semantics Models (CDSM) that effectively encode structural information and distributional semantics in tractable rank-2 tensors, as the experiments show. The paper shows that DSTs contribute to closing the gap between two apparently different approaches: CDSMs and convolution kernels. This contributes to starting a discussion on a deeper understanding of the representational power of the structural information in existing CDSMs.
| 22 |
Distributional Semantics
|
34_2014
| 2,014 |
Francesca Frontini, Valeria Quochi, Monica Monachini
|
Polysemy alternations extraction using the PAROLE SIMPLE CLIPS Italian lexicon
|
ENG
| 3 | 3 | 1 |
CNR-ILC
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Pisa
|
This paper presents the results of an experiment on the induction of polysemy alternations from a lexicon (Utt and Padó, 2011; Frontini et al., 2014), discussing the results and proposing an amendment to the original algorithm.
|
The various senses of polysemic words do not always stand in the same relation to each other. Some senses group together along certain dimensions of meaning while others stand clearly apart. Machine-readable dictionaries have in the past used coarse-grained sense distinctions, but often without any explicit indication as to whether these senses are related or not. Most significantly, few machine-readable dictionaries explicitly encode systematic alternations. In Utt and Padó (2011) a methodology is described for deriving systematic alternations of senses from WordNet. In Frontini et al. (2014) the work was carried out for Italian using the PAROLE SIMPLE CLIPS lexicon (PSC) (Lenci et al., 2000), a lexical resource that contains a rich set of explicit lexical and semantic relations. The purpose of the latter work was to test the methodology of the former against the inventory of regular polysemy relations already encoded in the PSC semantic layer. It is important to notice that this was not possible in the original experiment, as WordNet does not contain such information. The results of the work done on PSC show how the original methodology can be useful in testing the consistency of encoded polysemies and in finding gaps in individual lexical entries. At the same time, the methodology is not infallible, especially in distinguishing type alternations that frequently occur in the lexicon due to systematic polysemy from other alternations that are produced by metaphoric extension, derivation or other non-systematic sense shifting phenomena. In this paper we shall briefly outline the problem of lexical ambiguity; then describe the procedure of type induction carried out in the previous experiments, discussing the most problematic results; finally, we will propose a change to the original methodology that seems more promising in capturing the essence of systematic polysemy.
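As a generic illustration of measuring association strength between semantic types that co-occur among the senses of the same lemma, the sketch below uses pointwise mutual information over a toy lexicon; both the choice of PMI and the data are assumptions for illustration, not the exact measure or resource used in the paper.

```python
# PMI between pairs of semantic types that co-occur across the senses of a lemma.
from collections import Counter
from itertools import combinations
from math import log2

lexicon = {                      # lemma -> set of semantic types of its senses (toy data)
    "giornale": {"Institution", "Artifact"},
    "scuola":   {"Institution", "Building"},
    "banca":    {"Institution", "Building"},
    "libro":    {"Artifact", "Information"},
}

pair_counts, type_counts, n_pairs = Counter(), Counter(), 0
for types in lexicon.values():
    for a, b in combinations(sorted(types), 2):
        pair_counts[(a, b)] += 1
        n_pairs += 1
    type_counts.update(types)

n_types = sum(type_counts.values())
for (a, b), c in pair_counts.items():
    pmi = log2((c / n_pairs) / ((type_counts[a] / n_types) * (type_counts[b] / n_types)))
    print(f"{a} ~ {b}: count={c}, PMI={pmi:.2f}")
```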
|
To conclude, these preliminary results seem to confirm the hypothesis that measuring the association strength between types, rather than the frequency of their co-occurrence, is useful to capture the systematicity of an alternation. In future work it may be interesting to test ranking by other association measures (such as log-likelihood) and with different filterings. Finally, the original experiment may be repeated on both the Italian and English WordNets in order to evaluate the new method on the original lexical resource.
| 7 |
Lexical and Semantic Resources and Analysis
|
35_2014
| 2,014 |
Gloria Gagliardi
|
Rappresentazione dei concetti azionali attraverso prototipi e accordo nella categorizzazione dei verbi generali. Una validazione statistica
|
ITA
| 1 | 1 | 1 |
Università di Firenze
| 1 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Florence
|
The article presents the results of a study aimed at assessing the consistency of the categorization of the action space performed by native-speaker annotators for a semantically cohesive set of verbs of the IMAGACT database (the semantic area of Ogirare). The statistical validation, articulated into three tests, is based on the computation of the inter-tagger agreement in tasks of disambiguation of concepts represented by means of image prototypes.
|
IMAGACT is an interlinguistic ontology that makes explicit the spectrum of pragmatic variation associated with medium- and high-frequency action predicates in Italian and English (Moneglia et al., 2014). The action classes that identify the reference entities of the linguistic concepts, represented in this lexical resource in the form of prototypical scenes (Rosch, 1978), were induced from spoken corpora by native-speaker linguists through a bottom-up procedure: the linguistic materials were subjected to an articulated annotation procedure, described extensively in previous works (Moneglia et al., 2012; Frontini et al., 2012). The article illustrates the results of three tests aimed at assessing the consistency of the categorization of the action space proposed by the annotators for a small but semantically consistent set of verbs of the resource: this choice was determined by the will to study in great detail the problems related to the characterization of the variation of the predicates over events. This case study also paves the way for the definition of a standard procedure, to be extended later to statistically significant portions of the ontology for its complete validation. Section 2 presents the statistical coefficients adopted, and Section 3 describes the methodology and the results of the tests carried out.
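One of the standard coefficients such a validation typically relies on is Cohen's kappa; the sketch below computes it for two invented annotation sequences (the actual coefficients adopted in the paper are presented in its Section 2 and may differ).

```python
# Cohen's kappa: chance-corrected agreement between two annotators.
from collections import Counter

def cohen_kappa(ann1, ann2):
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

ann1 = ["T1", "T1", "T2", "T3", "T2", "T1"]   # annotator A's action types (toy data)
ann2 = ["T1", "T2", "T2", "T3", "T2", "T1"]   # annotator B's action types (toy data)
print(f"kappa = {cohen_kappa(ann1, ann2):.2f}")
```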
|
It is well known that semantic annotation tasks, and in particular those dedicated to the verbal lexicon (Fellbaum, 1998; Fellbaum et al., 2001), record low levels of inter-tagger agreement. In this case the possibility of obtaining high values, even with non-expert annotators, is likely due to the exclusively actional and physical nature of the classes used for the categorization. Following the validation it has been possible to use the data in psycholinguistic applications (Gagliardi, 2014): the sample of verbs of the ontology, broad and at the same time formally controlled, could represent, if fully validated, an unprecedented source of semantic data for the cognitive sciences. For this purpose, as well as for a full educational and computational exploitation of the resource, in the near future the illustrated methodology will be extended to a quantitatively and statistically significant portion of the database.
| 7 |
Lexical and Semantic Resources and Analysis
|
36_2014
| 2,014 |
Michel Généreux, Egon W. Stemle, Lionel Nicolas, Verena Lyding
|
Correcting OCR errors for German in Fraktur font
|
ENG
| 4 | 1 | 0 |
EURAC
| 1 | 0 | 0 | 0 |
0
| 0 |
Michel Généreux, Egon W. Stemle, Lionel Nicolas, Verena Lyding
|
Italy
|
Bolzano
|
In this paper, we present ongoing experiments for correcting OCR errors on German newspapers in Fraktur font. Our approach borrows from techniques for spelling correction in context using a probabilistic edit-operation error model and lexical resources. We highlight conditions in which high error reduction rates can be obtained and where the approach currently stands with real data.
|
The OPATCH project (Open Platform for Access to and Analysis of Textual Documents of Cultural Heritage) aims at creating an advanced online search infrastructure for research in a historical newspaper archive. The search experience is enhanced by allowing for dedicated searches on person and place names as well as in defined subsections of the newspapers. For implementing this, OPATCH builds on computational linguistic (CL) methods for structural parsing, word class tagging and named entity recognition (Poesio et al., 2011). The newspaper archive contains ten newspapers in German language from the South Tyrolean region for the time period around the First World War. Dating between 1910 and 1920, the newspapers are typed in the blackletter Fraktur font and the paper quality is degraded due to age. Unfortunately, such material is challenging for optical character recognition (OCR), the process of transcribing printed text into computer-readable text, which is the first necessary pre-processing step for any further CL processing. Hence, in OPATCH we are starting from majorly error-prone OCR-ed text, in quantities that cannot realistically be corrected manually. In this paper we present attempts to automate the procedure for correcting faulty OCR-ed text.
|
The approach we presented to correct OCR errors considered four features of two types: edit distance and n-gram frequencies. Results show that a simple scoring system can correct OCR-ed texts with very high accuracy under idealized conditions: no more than two edit operations and a perfect dictionary. Obviously, these conditions do not always hold in practice, and the observed error reduction rate thus drops to 10%. Nevertheless, we can expect to improve our dictionary coverage so that very noisy OCR-ed texts (i.e. 48% errors with a distance of at least three to the target) can be corrected with accuracies up to 20%. OCR-ed texts with less challenging error patterns can be corrected with accuracies up to 61% (distance 2) and 86% (distance 1).
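A minimal sketch of the kind of scoring described above: correction candidates within a small edit distance of the OCR token are ranked by combining edit cost and corpus frequency. The toy lexicon, frequencies and weighting are illustrative assumptions, not the system's actual features.

```python
# Rank correction candidates by edit distance plus a frequency bonus.
LEXICON_FREQ = {"Zeitung": 120, "Zeitungen": 45, "Leitung": 30}   # toy frequencies

def edit_distance(a, b):
    # simple dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def best_correction(token, max_dist=2):
    candidates = []
    for word, freq in LEXICON_FREQ.items():
        d = edit_distance(token, word)
        if d <= max_dist:
            candidates.append((d - 0.01 * freq, word))   # lower score is better
    return min(candidates)[1] if candidates else token

print(best_correction("Zeitnng"))   # -> "Zeitung"
```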
| 7 |
Lexical and Semantic Resources and Analysis
|
37_2014
| 2,014 |
Carlo Geraci, Alessandro Mazzei, Marco Angster
|
Some issues on Italian to LIS automatic translation. The case of train announcements
|
ENG
| 3 | 0 | 0 |
CNRS, Università di Torino, Libera Università di Bolzano
| 3 | 1 | 0 | 1 |
Carlo Geraci
| 0 |
0
|
France, Italy
|
Paris, Turin, Bolzano
|
In this paper we present some linguistic issues of an automatic translator from Italian to Italian Sign Language (LIS) and how we addressed them.
|
The computational linguistics community has shown a growing interest in sign languages. Several projects on automatic translation into signed languages (SLs) have recently started, and avatar technology is becoming more and more popular as a tool for implementing automatic translation into SLs (Bangham et al. 2000, Zhao et al. 2000, Huenerfauth 2006, Morrissey et al. 2007, Su and Wu 2009). Current projects investigate relatively small domains in which avatars may perform decently, like post office announcements (Cox et al., 2002), weather forecasting (Verlinden et al., 2002), the jurisprudence of prayer (Almasoud and Al-Khalifa, 2011), driver's license renewal (San-Segundo et al., 2012), and train announcements (e.g. Braffort et al. 2010, Ebling/Volk 2013). LIS4ALL is a project on automatic translation into LIS in which we face the domain of public transportation announcements. Specifically, we are developing a system for the automatic translation of train station announcements from spoken Italian into LIS. The project is the continuation of ATLAS, a project on automatic translation into LIS of weather forecasts (http://www.atlas.polito.it/index.php/en). In ATLAS two distinct approaches to automatic translation were adopted, interlingua rule-based translation and statistical translation (Mazzei et al. 2013, Tiotto et al., 2010, Hutchins and Somer 1992). Both approaches have advantages and drawbacks in the specific context of automatic translation into SL. The statistical approach provides greater robustness, while the symbolic approach is more precise in the final results. A preliminary evaluation of the systems developed for ATLAS showed that both approaches have similar results. However, the symbolic approach we implemented produces the structure of the sentence in the target language. This information is used for the automatic allocation of the signs in the signing space for LIS (Mazzei et al. 2013), an aspect not yet implemented in current statistical approaches. LIS4ALL only uses the symbolic (rule-based) translation architecture to process the Italian input and generate the final LIS string. With respect to ATLAS, two main innovations characterize this project: new linguistic issues are addressed, and the translation architecture is partially modified. As for the linguistic issues: we are enlarging the types of syntactic constructions covered by the avatar and we are increasing the electronic lexicon built for ATLAS (around 2,350 signs) by adding new signs (around 120) specific to the railway domain. Indeed, the latter was one of the most challenging aspects of the project, especially when the domain of train stations is addressed. Prima facie this issue would look like a special case of proper names, something that should be easily addressed by generating specific signs (basically one for every station). However, the solution is not as simple as it seems. Indeed, several problematic aspects are hidden when looking at the linguistic situation of names in LIS (and more generally in SLs). As for the translation architecture, while in ATLAS a real interlingua translation with a deep parser and a FoL meaning representation was used, in LIS4ALL we decided to employ a regular-expression-based analyzer that produces a simple (non-recursive) filler/slot based semantics to parse the Italian input.
This is because, in the train announcement domain, input sentences have a large number of complex noun phrases with several prepositional phrases, resulting in degraded parser performance (due to multiple attachment options). Moreover, the domain of application is extremely regular, since the announcements are generated from predefined patterns (RFI, 2011). The rest of the paper is organized as follows: Section 2 discusses the linguistic issues, Section 3 discusses the technical issues, and Section 4 concludes the paper.
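A minimal sketch of a regular-expression analyzer producing a flat filler/slot semantics for a train announcement; the announcement template and slot names are simplified assumptions, not the actual RFI patterns handled by LIS4ALL.

```python
# Regex-based slot filling for a (simplified) Italian train departure announcement.
import re

PATTERN = re.compile(
    r"Il treno (?P<categoria>\w+) (?P<numero>\d+) per (?P<destinazione>[\w' ]+?) "
    r"delle ore (?P<ora>\d{1,2}[:.]\d{2}) è in partenza dal binario (?P<binario>\d+)"
)

def analyze(announcement):
    """Return a flat filler/slot dictionary, or None if the pattern does not match."""
    m = PATTERN.search(announcement)
    return m.groupdict() if m else None

print(analyze("Il treno Regionale 10233 per Torino Porta Nuova delle ore 8.45 "
              "è in partenza dal binario 3"))
# {'categoria': 'Regionale', 'numero': '10233', 'destinazione': 'Torino Porta Nuova',
#  'ora': '8.45', 'binario': '3'}
```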
|
In this paper we considered two issues related to the development of an automatic translator from Italian to LIS in the railway domain: 1) some syntactic mismatches between the input and target languages, and 2) how to deal with lexical gaps due to unknown train station names. The first issue emerged in the creation of a parallel Italian-LIS corpus: the specificity of the domain allowed us to use a naive parser based on regular expressions, a semantic interpreter based on filler/slot semantics, and a small CCG for generation. The second issue has been addressed by blending written text into a special "sign". In the near future we plan to quantitatively evaluate our translator.
| 7 |
Lexical and Semantic Resources and Analysis
|
38_2014
| 2,014 |
Andrea Gobbi, Stefania Spina
|
ConParoleTue: crowdsourcing al servizio di un Dizionario delle Collocazioni Italiane per Apprendenti (Dici-A)
|
ITA
| 2 | 1 | 0 |
Università di Salerno, Università per Stranieri di Perugia
| 2 | 0 | 0 | 0 |
0
| 0 |
0
|
Italy
|
Salerno, Perugia
|
ConParoleTue is a crowdsourcing experiment in L2 lexicography. Starting from the compilation of a dictionary of collocations for learners of Italian as a second language, ConParoleTue represents an attempt to rethink problems typical of lexicographic work (the quality and register of definitions) with greater attention to the communicative needs of learners. To this end, a crowdsourcing-based methodology is used for the drafting of definitions. This article describes the methodology and presents a first assessment of its results: the definitions obtained through crowdsourcing are quantitatively relevant and qualitatively suitable for non-native speakers of Italian.
|
ConParoleTue (2012) is an experimental application of crowdsourcing to L2 lexicography, developed within the APRIL project (Spina, 2010b) of the University for Foreigners of Perugia during the creation of a dictionary of collocations for learners of Italian as a second language. Collocations have occupied a leading place for several decades in studies on the learning of a second language (Meunier and Granger, 2008). Collocational competence is recognized as a key competence for learners, because it plays a key role in both production (for example, it provides pre-built and ready-to-use lexical blocks, improving fluency; Schmitt, 2004) and comprehension (Lewis, 2000). Within Italian lexicography, too, research on collocations has been productive and has led, in the last five years, to the publication of at least three paper dictionaries of Italian collocations: Urzì (2009), born in the field of translation; Tiberii (2012); and Lo Cascio (2013). The DICI-A (Dizionario delle Collocazioni Italiane per Apprendenti; Spina, 2010a; 2010b) consists of the 11,400 Italian collocations extracted from the Perugia Corpus, a reference corpus of contemporary written and spoken Italian. Among the many proposals, the definition underlying the construction of the DICI-A is that of Evert (2005), according to which a collocation is "a word combination whose semantic and/or syntactic properties cannot be fully predicted from those of its components, and which therefore has to be listed in a lexicon". The collocations of the DICI-A belong to 9 different categories, selected on the basis of the most productive sequences of grammatical categories that compose them: adjective-noun (tragic error), noun-adjective (the next year), noun-noun (weight form), verb-(article)-noun (to make a request / to make a penalty), noun-preposition-noun (credit card), adjective-like-noun (fresh as a rose), adjective-conjunction-adjective (healthy and safe), noun-conjunction-noun (card and pen), verb-adjective (to cost dear). For each collocation, the Juilland index of dispersion and usage (Bortolini et al., 1971) was calculated and used to select the final list of collocations. The question is how to define them. In this context the idea of using crowdsourcing, the core of ConParoleTue, was born.
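For reference, here is a sketch of Juilland's dispersion (D) and usage (U) coefficients mentioned above, computed from the sub-frequencies of a collocation in the n sections of a corpus; the sub-frequencies in the example are invented, and the exact variant used for the DICI-A may differ.

```python
# Juilland's D (dispersion) and U (usage): D = 1 - V / sqrt(n - 1), U = D * total frequency,
# where V is the coefficient of variation of the sub-frequencies across n corpus sections.
from math import sqrt

def juilland(subfreqs):
    n = len(subfreqs)
    total = sum(subfreqs)
    mean = total / n
    sd = sqrt(sum((f - mean) ** 2 for f in subfreqs) / n)
    v = sd / mean if mean else 0.0          # coefficient of variation
    d = 1 - v / sqrt(n - 1)                 # D = 1 means perfectly even dispersion
    u = d * total                           # usage coefficient
    return d, u

d, u = juilland([12, 9, 15, 10, 8])         # frequencies in 5 corpus sections (toy data)
print(f"D = {d:.3f}, U = {u:.1f}")
```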
|
The experiment described here, which concerns the use of crowdsourcing for the acquisition of definitions of Italian collocations, has proved effective both from the quantitative point of view (more than 3,200 definitions in five months) and from that of their appropriateness for an audience of learners. The comparison with definitions written by a team of lexicographers has highlighted the more intuitive and natural character of the definitions produced by non-specialists, as opposed to the greater abstractness of the definitions of professionals. These results encourage continuing the compilation of the dictionary through this crowdsourcing-based methodology.
| 8 |
Learner Corpora and Language Acquisition
|
39_2014
| 2,014 |
Iryna Haponchyk, Alessandro Moschitti
|
Making Latent SVMstruct Practical for Coreference Resolution
|
ENG
| 2 | 1 | 1 |
Università di Trento, Qatar Computing Research Institute
| 2 | 1 | 0 | 1 |
Alessandro Moschitti
| 0 |
0
|
Italy, Qatar
|
Trento, Rome, Ar-Rayyan
|
The recent work on coreference resolution has shown a renewed interest in the structured perceptron model, which seems to achieve the state of the art in this field. Interestingly, while SVMs are known to generally provide higher accuracy than a perceptron, according to previous work and theoretical findings, no recent paper currently describes the use of SVMstruct for coreference resolution. In this paper, we address this question by solving some technical problems at both the theoretical and algorithmic level, enabling the use of SVMs for coreference resolution and other similar structured output tasks (e.g., based on clustering).
|
Coreference resolution (CR) is a complex task, in which document phrases (mentions) are partitioned into equivalence sets. It has recently been approached by applying learning algorithms operating in structured output spaces (Tsochantaridis et al., 2004). Considering the nature of the problem, i.e., the NP-hardness of finding optimal mention clusters, the task has been reformulated as a spanning graph problem. First, Yu and Joachims (2009) proposed to (i) represent all possible mention clusters with fully connected undirected graphs and (ii) infer document mention cluster sets by applying Kruskal’s spanning algorithm (Kruskal, 1956). Since the same clustering can be obtained from multiple spanning forests (there is no one-to-one correspondence), these latter are treated as hidden or latent variables. Therefore, an extension of the structural SVM – Latent SVMstruct (LSVM) – was designed to include these structures in the learning procedure. Later, Fernandes et al. (2012) presented their CR system having a resembling architecture. They do inference on a directed candidate graph using the algorithm of Edmonds (1967). This modeling, coupled with the latent structured perceptron, delivered state-of-the-art results in the CoNLL-2012 Shared Task (Pradhan et al., 2012). To the best of our knowledge, there is no previous work comparing the two methods, and the LSVM approach of Yu and Joachims has not been applied to the CoNLL data. In our work, we aim, firstly, at evaluating LSVM with respect to the recent benchmark standards (corpus and evaluation metrics defined by the CoNLL shared task) and, secondly, at understanding the differences and advantages of the two structured learning models. Taking a closer look at the LSVM implementation, we found that it is restricted to inference on a fully connected graph. Thus, we provide an extension of the algorithm enabling it to operate on an arbitrary graph: this is very important as all the best CR models exploit heuristics to prefilter edges of the CR graph. Therefore, our modification of LSVM allows us to use it with powerful heuristics, which greatly contribute to the achievement of the state of the art. Regarding the comparison with the latent perceptron of Fernandes et al. (2012), the results of our experiments provide evidence that the latent trees derived by Edmonds’ spanning tree algorithm better capture the nature of CR. Therefore, we speculate that the use of this spanning tree algorithm within LSVM may produce better results than those of the current perceptron algorithm.
|
We have performed a comparative analysis of the structured prediction frameworks for coreference resolution. Our experiments reveal that the graph modelling of Fernandes et al. and Edmonds’ spanning algorithm seem to tackle the task more specifically. As a short-term future work, we intend to verify if LSVM benefits from using Edmonds’ algorithm. We have also enabled the LSVM implementation to operate on partial graphs, which allows the framework to be combined with different filtering strategies and facilitates its comparison with other systems.
| 7 |
Lexical and Semantic Resources and Analysis
|
CLiC-it Corpus
This repository contains the CLiC-it Corpus, a structured dataset of all the papers presented at the CLiC-it conferences from 2014 to 2024. The dataset is part of the paper Charting a Decade of Computational Linguistics in Italy: The CLiC-it Corpus. If you use this dataset in your work, we kindly ask you to cite our paper:
@article{alzetta2025charting,
title={Charting a Decade of Computational Linguistics in Italy: The CLiC-it Corpus},
author={Alzetta, Chiara and Auriemma, Serena and Bondielli, Alessandro and Dini, Luca and Fazzone, Chiara and Miaschi, Alessio and Miliani, Martina and Sartor, Marta},
journal={arXiv preprint arXiv:2509.19033},
year={2025}
}
Abstract
Over the past decade, Computational Linguistics (CL) and Natural Language Processing (NLP) have evolved rapidly, especially with the advent of Transformer-based Large Language Models (LLMs). This shift has transformed research goals and priorities, from Lexical and Semantic Resources to Language Modelling and Multimodality. In this study, we track the research trends of the Italian CL and NLP community through an analysis of the contributions to CLiC-it, arguably the leading Italian conference in the field. We compile the proceedings from the first 10 editions of the CLiC-it conference (from 2014 to 2024) into the CLiC-it Corpus, providing a comprehensive analysis of both its metadata, including author provenance, gender, affiliations, and more, as well as the content of the papers themselves, which address various topics. Our goal is to provide the Italian and international research communities with valuable insights into emerging trends and key developments over time, supporting informed decisions and future directions in the field.
Dataset Overview
Each entry in the corpus corresponds to a paper presented at CLiC-it and includes metadata on authorship, affiliations, gender indicators, and textual features.
Fields
The dataset includes the following fields:
- `id`: A unique identifier assigned to each paper.
- `year`: The year in which the paper was presented at the CLiC-it conference.
- `authors`: The names of all contributing authors.
- `title`: The title of the paper.
- `language`: The language in which the paper was written (`ENG` for English, `ITA` for Italian).
- `num_authors`: The total number of authors.
- `num_women_authors`: The number of female authors.
- `woman_as_first_author`: A binary indicator (`1` or `0`) reflecting whether the first author is a woman.
- `affiliations`: The institutional affiliations of the authors.
- `num_affiliations`: The number of distinct affiliations.
- `at_least_one_international_affiliation`: A binary value (`1` or `0`) denoting whether at least one author is affiliated with a non-Italian institution.
- `international_affiliation_only`: A binary value (`1` or `0`) indicating whether all authors are affiliated exclusively with international (non-Italian) institutions.
- `international_authors`: The number of authors with non-Italian affiliations.
- `names_international_authors`: The names of authors with non-Italian affiliations.
- `num_company_authors`: The number of authors affiliated with corporate (non-academic) institutions.
- `names_company_authors`: The names of authors working in corporate settings.
- `countries_affiliations`: The countries represented by the affiliations.
- `cities_affiliations`: The cities of the affiliations.
- `abstract`: The paper abstract.
- `introduction`: The Introduction section of the paper.
- `conclusion`: The Conclusions section of the paper.
- `num_topics`: The number of topics extracted with BERTopic.
- `topics`: The topics extracted with BERTopic.
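If the corpus is distributed through the Hugging Face Hub, it can be loaded with the `datasets` library. The sketch below is illustrative only: the repository id passed to `load_dataset` is a placeholder (replace it with the actual id of this repository, or with a path to the local data files), and a single `train` split is assumed.

```python
# Minimal sketch of loading and exploring the CLiC-it Corpus with the
# Hugging Face `datasets` library. The repository id is a placeholder.
from collections import Counter

from datasets import load_dataset

corpus = load_dataset("your-org/clic-it-corpus", split="train")  # placeholder id

# Each row is one paper, with its metadata and text fields.
paper = corpus[0]
print(paper["id"], paper["year"], paper["title"], paper["language"])

# Example: number of papers per conference edition.
for year, count in sorted(Counter(corpus["year"]).items()):
    print(year, count)

# Example: share of papers with at least one woman author.
with_women = sum(1 for n in corpus["num_women_authors"] if n > 0)
print(f"{with_women / len(corpus):.1%} of papers have at least one woman author")
```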