Dataset schema (one row per query):
- query_id: string, 32 characters
- query: string, 5 to 5.38k characters
- positive_passages: list of passages ({docid, text, title}), 1 to 23 entries
- negative_passages: list of passages ({docid, text, title}), 7 to 25 entries
- subset: string, 5 distinct values (the rows shown here use "scidocsrr")
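Rows of this shape are usually consumed programmatically. Below is a minimal sketch of loading and inspecting such a dump, assuming it originates from a Hugging Face dataset repository; the repository path `your/dataset-name` and the `train` split are placeholders, not taken from this page.

```python
from datasets import load_dataset  # Hugging Face `datasets` library (assumed)

# Placeholder repository path and split; substitute the real ones.
ds = load_dataset("your/dataset-name", split="train")

row = ds[0]
print(row["query_id"])                # 32-character identifier
print(row["query"])                   # free-text query
print(len(row["positive_passages"]))  # between 1 and 23 passages
print(len(row["negative_passages"]))  # between 7 and 25 passages
print(row["subset"])                  # e.g. "scidocsrr"

# Each passage is a dict with "docid", "text", and "title" keys.
first_positive = row["positive_passages"][0]
print(first_positive["docid"], first_positive["text"][:80])
```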
query_id: 539b8778fa5e2573c9d6a1c3627ba881
query: The development of reading in children who speak English as a second language.
positive_passages:
[
{
"docid": "4272b4a73ecd9d2b60e0c60de0469f17",
"text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predictors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.",
"title": ""
}
]
negative_passages:
[
{
"docid": "ed06666ec688b6a57b2f3eaa57853dcd",
"text": "Sensor fusion is indispensable to improve accuracy and robustness in an autonomous navigation setting. However, in the space of end-to-end sensorimotor control, this multimodal outlook has received limited attention. In this work, we propose a novel stochastic regularization technique, called Sensor Dropout, to robustify multimodal sensor policy learning outcomes. We also introduce an auxiliary loss on policy network along with the standard DRL loss in order to reduce variance in actions of the multimodal sensor policy. Through extensive empirical testing, we demonstrate that our proposed policy can 1) operate with minimal performance drop in noisy environments and 2) remain functional even in the face of a sensor subset failure. Finally, through the visualization of gradients, we show that the learned policies are conditioned on the same latent input distribution despite having multiple and diverse observations spaces a hallmark of true sensorfusion. This efficacy of a multimodal sensor policy is shown through simulations on TORCS, a popular open-source racing car game. A demo video can be seen here: https://youtu.be/HC3TcJjXf3Q.",
"title": ""
},
{
"docid": "5325138fcbb52c61903e7bb9bd1c890b",
"text": "To simulate an efficient Intrusion Detection System (IDS) model, enormous amount of data are required to train and test the model. To improve the accuracy and efficiency of the model, it is essential to infer the statistical properties from the observable elements of the dataset. In this work, we have proposed some data preprocessing techniques such as filling the missing values, removing redundant samples, reduce the dimension, selecting most relevant features and finally, normalize the samples. After data preprocessing, we have simulated and tested the dataset by applying various data mining algorithms such as Support Vector Machine (SVM), Decision Tree, K nearest neighbor, K-Mean and Fuzzy C-Mean Clustering which provides better result in less computational time.",
"title": ""
},
{
"docid": "51a9180623be4ddaf514377074edc379",
"text": "Breast region measurements are important for research, but they may also become significant in the legal field as a quantitative tool for preoperative and postoperative evaluation. Direct anthropometric measurements can be taken in clinical practice. The aim of this study was to compare direct breast anthropometric measurements taken with a tape measure and a compass. Forty women, aged 18–60 years, were evaluated. They had 14 anatomical landmarks marked on the breast region and arms. The union of these points formed eight linear segments and one angle for each side of the body. The volunteers were evaluated by direct anthropometry in a standardized way, using a tape measure and a compass. Differences were found between the tape measure and the compass measurements for all segments analyzed (p > 0.05). Measurements obtained by tape measure and compass are not identical. Therefore, once the measurement tool is chosen, it should be used for the pre- and postoperative measurements in a standardized way. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "39f5413a937587b3afc9bbd9ee4b735f",
"text": "examples in learning math. Science, 320(5875), 454–455. doi: 10.1126/science.1154659 Kaminski, J. A., Sloutsky, V. M., & Heckler, A. (2009). Transfer of mathematical knowledge: The portability of generic instantiations. Child Development Perspectives, 3(3), 151–155. doi:10.1111/j.1750-8606",
"title": ""
},
{
"docid": "5f42f43bf4f46b821dac3b0d0be2f63a",
"text": "The autonomous overtaking maneuver is a valuable technology in unmanned vehicle field. However, overtaking is always perplexed by its security and time cost. Now, an autonomous overtaking decision making method based on deep Q-learning network is proposed in this paper, which employs a deep neural network(DNN) to learn Q function from action chosen to state transition. Based on the trained DNN, appropriate action is adopted in different environments for higher reward state. A series of experiments are performed to verify the effectiveness and robustness of our proposed approach for overtaking decision making based on deep Q-learning method. The results support that our approach achieves better security and lower time cost compared with traditional reinforcement learning methods.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "a30c2a8d3db81ae121e62af5994d3128",
"text": "Recent advances in the fields of robotics, cyborg development, moral psychology, trust, multi agent-based systems and socionics have raised the need for a better understanding of ethics, moral reasoning, judgment and decision-making within the system of man and machines. Here we seek to understand key research questions concerning the interplay of ethical trust at the individual level and the social moral norms at the collective end. We review salient works in the fields of trust and machine ethics research, underscore the importance and the need for a deeper understanding of ethical trust at the individual level and the development of collective social moral norms. Drawing upon the recent findings from neural sciences on mirror-neuron system (MNS) and social cognition, we present a bio-inspired Computational Model of Ethical Trust (CMET) to allow investigations of the interplay of ethical trust and social moral norms.",
"title": ""
},
{
"docid": "7aa07ba3e04a79cf51dfc9c42b415628",
"text": "A model is presented that permits the calculation of densities of 60-Hz magnetic fields throughout a residence from only a few measurements. We assume that residential magnetic fields are produced by sources external to the house and by the residential grounding circuit. The field from external sources is measured with a single probe. The field produced by the grounding circuit is calculated from the current flowing in the circuit and its geometry. The two fields are combined to give a prediction of the total field at any point in the house. A data-acquisition system was built to record the magnitude and phase of the grounding current and the field from external sources. The model's predictions were compared with measurements of the total magnetic field at a single location in 23 houses; a correlation coefficient of .87 was obtained, indicating that the model has good predictive capability. A more detailed study that was carried out in one house permitted comparisons of measurements with the model's predictions at locations throughout the house. Again, quite reasonable agreement was found. We also investigated the temporal variability of field readings in this house. Daily magnetic field averages were found to be considerably more stable than hourly averages. Finally, we demonstrate the use of the model in creating a profile of the magnetic fields in a home.",
"title": ""
},
{
"docid": "4a9913930e2e07b867cc701b07e88eaa",
"text": "There is little doubt that the incidence of depression in Britain is increasing. According to research at the Universities of London and Warwick, the incidence of depression among young people has doubled in the past 12 years. However, whether young or old, the question is why and what can be done? There are those who argue that the increasingly common phenomenon of depression is primarily psychological, and best dealt with by counselling. There are others who consider depression as a biochemical phenomenon, best dealt with by antidepressant medication. However, there is a third aspect to the onset and treatment of depression that is given little heed: nutrition. Why would nutrition have anything to do with depression? Firstly, we have seen a significant decline in fruit and vegetable intake (rich in folic acid), in fish intake (rich in essential fats) and an increase in sugar consumption, from 2 lb a year in the 1940s to 150 lb a year in many of today’s teenagers. Each of these nutrients is strongly linked to depression and could, theoretically, contribute to increasing rates of depression. Secondly, if depression is a biochemical imbalance it makes sense to explore how the brain normalises its own biochemistry, using nutrients as the precursors for key neurotransmitters such as serotonin. Thirdly, if 21st century living is extra-stressful, it would be logical to assume that increasing psychological demands would also increase nutritional requirements since the brain is structurally and functionally completely dependent on nutrients. So, what evidence is there to support suboptimal nutrition as a potential contributor to depression? These are the common imbalances connected to nutrition that are known to worsen your mood and motivation:",
"title": ""
},
{
"docid": "d42bbb6fe8d99239993ed01aa44c32ef",
"text": "Chemical communication plays a very important role in the lives of many social insects. Several different types of pheromones (species-specific chemical messengers) of ants have been described, particularly those involved in recruitment, recognition, territorial and alarm behaviours. Properties of pheromones include activity in minute quantities (thus requiring sensitive methods for chemical analysis) and specificity (which can have chemotaxonomic uses). Ants produce pheromones in various exocrine glands, such as the Dufour, poison, pygidial and mandibular glands. A wide range of substances have been identified from these glands.",
"title": ""
},
{
"docid": "82ef80d6257c5787dcf9201183735497",
"text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.",
"title": ""
},
{
"docid": "2528b23554f934a67b3ed66f7df9d79e",
"text": "In this paper, we implemented an approach to predict final exam scores from early course assessments of the students during the semester. We used a linear regression model to check which part of the evaluation of the course assessment affects final exam score the most. In addition, we explained the origins of data mining and data mining in education. After preprocessing and preparing data for the task in hand, we implemented the linear regression model. The results of our work show that quizzes are most accurate predictors of final exam scores compared to other kinds of assessments.",
"title": ""
},
{
"docid": "6d4cd80341c429ecaaccc164b1bde5f9",
"text": "One hundred and two olive RAPD profiles were sampled from all around the Mediterranean Basin. Twenty four clusters of RAPD profiles were shown in the dendrogram based on the Ward’s minimum variance algorithm using chi-square distances. Factorial discriminant analyses showed that RAPD profiles were correlated with the use of the fruits and the country or region of origin of the cultivars. This suggests that cultivar selection has occurred in different genetic pools and in different areas. Mitochondrial DNA RFLP analyses were also performed. These mitotypes supported the conclusion also that multilocal olive selection has occurred. This prediction for the use of cultivars will help olive growers to choose new foreign cultivars for testing them before an eventual introduction if they are well adapted to local conditions.",
"title": ""
},
{
"docid": "e910310c5cc8357c570c6c4110c4e94f",
"text": "Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.",
"title": ""
},
{
"docid": "eaae33cb97b799eff093a7a527143346",
"text": "RGB Video now is one of the major data sources of traffic surveillance applications. In order to detect the possible traffic events in the video, traffic-related objects, such as vehicles and pedestrians, should be first detected and recognized. However, due to the 2D nature of the RGB videos, there are technical difficulties in efficiently detecting and recognizing traffic-related objects from them. For instance, the traffic-related objects cannot be efficiently detected in separation while parts of them overlap, and complex background will influence the accuracy of the object detection. In this paper, we propose a robust RGB-D data based traffic scene understanding algorithm. By integrating depth information, we can calculate more discriminative object features and spatial information can be used to separate the objects in the scene efficiently. Experimental results show that integrating depth data can improve the accuracy of object detection and recognition. We also show that the analyzed object information plus depth data facilitate two important traffic event detection applications: overtaking warning and collision",
"title": ""
},
{
"docid": "c57d4b7ea0e5f7126329626408f1da2d",
"text": "Educational Data Mining (EDM) is an interdisciplinary ingenuous research area that handles the development of methods to explore data arising in a scholastic fields. Computational approaches used by EDM is to examine scholastic data in order to study educational questions. As a result, it provides intrinsic knowledge of teaching and learning process for effective education planning. This paper conducts a comprehensive study on the recent and relevant studies put through in this field to date. The study focuses on methods of analysing educational data to develop models for improving academic performances and improving institutional effectiveness. This paper accumulates and relegates literature, identifies consequential work and mediates it to computing educators and professional bodies. We identify research that gives well-fortified advice to amend edifying and invigorate the more impuissant segment students in the institution. The results of these studies give insight into techniques for ameliorating pedagogical process, presaging student performance, compare the precision of data mining algorithms, and demonstrate the maturity of open source implements.",
"title": ""
},
{
"docid": "5f5c78b74e1e576dd48690b903bf4de4",
"text": "Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.",
"title": ""
},
{
"docid": "7fc10687c97d2219ce8555dd92baf57c",
"text": "The wind-induced response of tall buildings is inherently sensitive to structural dynamic properties like frequency and damping ratio. The latter parameter in particular is fraught with uncertainty in the design stage and may result in a built structure whose acceleration levels exceed design predictions. This reality has motivated the need to monitor tall buildings in full-scale. This paper chronicles the authors’ experiences in the analysis of full-scale dynamic response data from tall buildings around the world, including full-scale datasets from high rises in Boston, Chicago, and Seoul. In particular, this study focuses on the effects of coupling, beat phenomenon, amplitude dependence, and structural system type on dynamic properties, as well as correlating observed periods of vibration against finite element predictions. The findings suggest the need for time–frequency analyses to identify coalescing modes and the mechanisms spurring them. The study also highlighted the effect of this phenomenon on damping values, the overestimates that can result due to amplitude dependence, as well as the comparatively larger degree of energy dissipation experienced by buildings dominated by frame action. Copyright © 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "f6b4ab40746d0c8c7e2b0113402667a9",
"text": "This paper presents a method for measuring the semantic similarity between concepts in Knowledge Graphs (KGs) such as WordNet and DBpedia. Previous work on semantic similarity methods have focused on either the structure of the semantic network between concepts (e.g., path length and depth), or only on the Information Content (IC) of concepts. We propose a semantic similarity method, namely wpath, to combine these two approaches, using IC to weight the shortest path length between concepts. Conventional corpus-based IC is computed from the distributions of concepts over textual corpus, which is required to prepare a domain corpus containing annotated concepts and has high computational cost. As instances are already extracted from textual corpus and annotated by concepts in KGs, graph-based IC is proposed to compute IC based on the distributions of concepts over instances. Through experiments performed on well known word similarity datasets, we show that the wpath semantic similarity method has produced a statistically significant improvement over other semantic similarity methods. Moreover, in a real category classification evaluation, the wpath method has shown the best performance in terms of accuracy and F score.",
"title": ""
},
{
"docid": "72c79181572c836cb92aac8fe7a14c5d",
"text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).",
"title": ""
}
]
subset: scidocsrr
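Given the positive and negative passage lists, a natural (though assumed, not stated on this page) use of a record like the one above is building contrastive training triples for a retrieval or reranking model. A small sketch follows; the triple layout and the cap on negatives per positive are illustrative choices, not prescribed by the dataset.

```python
from itertools import product

def row_to_triples(row, max_negatives=4):
    """Expand one row into (query, positive_text, negative_text) triples.

    The triple format and the per-positive cap on negatives are illustrative
    conventions for contrastive training, not part of the dataset itself.
    """
    query = row["query"]
    positives = [p["text"] for p in row["positive_passages"]]
    negatives = [n["text"] for n in row["negative_passages"]]
    return [(query, pos, neg)
            for pos, neg in product(positives, negatives[:max_negatives])]
```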
query_id: 7c91f8804822ca77c4c1a48f78bfdd61
query: A Simple Model for Classifying Web Queries by User Intent
positive_passages:
[
{
"docid": "28ea3d754c1a28ccfeb8a6e884898f96",
"text": "Understanding users'search intent expressed through their search queries is crucial to Web search and online advertisement. Web query classification (QC) has been widely studied for this purpose. Most previous QC algorithms classify individual queries without considering their context information. However, as exemplified by the well-known example on query \"jaguar\", many Web queries are short and ambiguous, whose real meanings are uncertain without the context information. In this paper, we incorporate context information into the problem of query classification by using conditional random field (CRF) models. In our approach, we use neighboring queries and their corresponding clicked URLs (Web pages) in search sessions as the context information. We perform extensive experiments on real world search logs and validate the effectiveness and effciency of our approach. We show that we can improve the F1 score by 52% as compared to other state-of-the-art baselines.",
"title": ""
}
]
negative_passages:
[
{
"docid": "5ebdf5b9986df77e6b10bcf820b41a6c",
"text": "Many neural networks can be regarded as attempting to approximate a multivariate function in terms of one-input one-output units. This note considers the problem of an exact representation of nonlinear mappings in terms of simpler functions of fewer variables. We review Kolmogorov's theorem on the representation of functions of several variables in terms of functions of one variable and show that it is irrelevant in the context of networks for learning.",
"title": ""
},
{
"docid": "42303331bf6713c1809468532c153693",
"text": "................................................................................................................................................ V Table of",
"title": ""
},
{
"docid": "36c26d1be5d9ef1ffaf457246bbc3c90",
"text": "In knowledge grounded conversation, domain knowledge plays an important role in a special domain such as Music. The response of knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either have at most one entity in a response or cannot deal with out-ofvocabulary entities. We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses based on input message and related knowledge base (KB). To generate arbitrary number of answer entities even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response, according to different local context. It does not rely on the representations of entities, enabling our model deal with out-ofvocabulary entities. We collect a human-human conversation data (ConversMusic) with knowledge annotations. The proposed method is evaluated on CoversMusic and a public question answering dataset. Our proposed GenDS system outperforms baseline methods significantly in terms of the BLEU, entity accuracy, entity recall and human evaluation. Moreover,the experiments also demonstrate that GenDS works better even on small datasets.",
"title": ""
},
{
"docid": "aeb12453020541d2465438e0868f6402",
"text": "Location-based Services are emerging as popular applications in pervasive computing. Spatial k-anonymity is used in Locationbased Services to protect privacy, by hiding the association of a specific query with a specific user. Unfortunately, this approach fails in many practical cases such as: (i) personalized services, where the user identity is required, or (ii) applications involving groups of users (e.g., employees of the same company); in this case, associating a query to any member of the group, violates privacy. In this paper, we introduce the concept of Location Diversity, which solves the above-mentioned problems. Location Diversity improves Spatial k-anonymity by ensuring that each query can be associated with at least ` different semantic locations (e.g., school, shop, hospital, etc). We present an attack model that maps each observed query to a linear equation involving semantic locations, and we show that a necessary condition to preserve privacy is the existence of infinite solutions in the resulting system of linear equations. Based on this observation, we develop algorithms that generate groups of semantic locations, which preserve privacy and minimize the expected query processing and communication cost. The experimental evaluation demonstrates that our approach reduces significantly the privacy threats, while incurring minimal overhead.",
"title": ""
},
{
"docid": "6e893839d1d4698698d38eb18073251a",
"text": "Sequence-to-sequence (seq2seq) approach for low-resource ASR is a relatively new direction in speech research. The approach benefits by performing model training without using lexicon and alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port them towards 4 other BABEL languages using transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that the transfer learning approach from the multilingual model shows substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to the models trained with twice more training data.",
"title": ""
},
{
"docid": "1dc7b9dc4f135625e2680dcde8c9e506",
"text": "This paper empirically analyzes different effects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both \\informative\" effects of advertising and \\prestige\" or \\image\" effects of advertising. This learning model is estimated using consumer level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary effect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classifications: D12, M37, D83 ' Economics Dept., Boston University, Boston, MA 02115 (ackerber@bu.edu). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997 Winter IO meetings, for advice and comments. I thank the Yale School of Management for gratefully providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.",
"title": ""
},
{
"docid": "d4954bab5fc4988141c509a6d6ab79db",
"text": "Recent advances in neural autoregressive models have improve the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE). This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.",
"title": ""
},
{
"docid": "2e3cee13657129d26ec236f9d2641e6c",
"text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "91e93ebb9503a83f20d349d87d8f74dd",
"text": "Data stream mining is an active research area that has recently emerged to discover knowledge from large amounts of continuously generated data. In this context, several data stream clustering algorithms have been proposed to perform unsupervised learning. Nevertheless, data stream clustering imposes several challenges to be addressed, such as dealing with nonstationary, unbounded data that arrive in an online fashion. The intrinsic nature of stream data requires the development of algorithms capable of performing fast and incremental processing of data objects, suitably addressing time and memory limitations. In this article, we present a survey of data stream clustering algorithms, providing a thorough discussion of the main design components of state-of-the-art algorithms. In addition, this work addresses the temporal aspects involved in data stream clustering, and presents an overview of the usually employed experimental methodologies. A number of references are provided that describe applications of data stream clustering in different domains, such as network intrusion detection, sensor networks, and stock market analysis. Information regarding software packages and data repositories are also available for helping researchers and practitioners. Finally, some important issues and open questions that can be subject of future research are discussed.",
"title": ""
},
{
"docid": "21d9828d0851b4ded34e13f8552f3e24",
"text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.",
"title": ""
},
{
"docid": "89d4143e7845d191433882f3fa5aaa26",
"text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. Keywords— Robotics and Learning, Crowd-sourcing, Manipulation",
"title": ""
},
{
"docid": "b26882cddec1690e3099757e835275d2",
"text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.",
"title": ""
},
{
"docid": "9af37841feed808345c39ee96ddff914",
"text": "Wake-up receivers (WuRXs) are low-power radios that continuously monitor the RF environment to wake up a higher-power radio upon detection of a predetermined RF signature. Prior-art WuRXs have 100s of kHz of bandwidth [1] with low signature-to-wake-up-signal latency to help synchronize communication amongst nominally asynchronous wireless devices. However, applications such as unattended ground sensors and smart home appliances wake-up infrequently in an event-driven manner, and thus WuRX bandwidth and latency are less critical; instead, the most important metrics are power consumption and sensitivity. Unfortunately, current state-of-the-art WuRXs utilizing direct envelope-detecting [2] and IF/uncertain-IF [1,3] architectures (Fig. 24.5.1) achieve only modest sensitivity at low-power (e.g., −39dBm at 104nW [2]), or achieve excellent sensitivity at higher-power (e.g., −97dBm at 99µW [3]) via active IF gain elements. Neither approach meets the needs of next-generation event-driven sensing networks.",
"title": ""
},
{
"docid": "af004fad4aa8b4ce414c0d36250f20b5",
"text": "Software developers often face steep learning curves in using a new framework, library, or new versions of frameworks for developing their piece of software. In large organizations, developers learn and explore use of frameworks, rarely realizing, several peers may have already explored the same. A tool that helps locate samples of code, demonstrating use of frameworks or libraries would provide benefits of reuse, improved code quality and faster development. This paper describes an approach for locating common samples of source code from a repository by providing extensions to an information retrieval system. The approach improves the existing approaches in two ways. First, it provides the scalability of an information retrieval system, supporting search over thousands of source code files of an organization. Second, it provides more specific search on source code by preprocessing source code files and understanding elements of the code as opposed to considering code as plain text.",
"title": ""
},
{
"docid": "ea4da468a0e7f84266340ba5566f4bdb",
"text": "We present a novel realtime algorithm to compute the trajectory of each pedestrian in a crowded scene. Our formulation is based on an adaptive scheme that uses a combination of deterministic and probabilistic trackers to achieve high accuracy and efficiency simultaneously. Furthermore, we integrate it with a multi-agent motion model and local interaction scheme to accurately compute the trajectory of each pedestrian. We highlight the performance and benefits of our algorithm on well-known datasets with tens of pedestrians.",
"title": ""
},
{
"docid": "285587e0e608d8bafa0962b5cf561205",
"text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.",
"title": ""
},
{
"docid": "e7bbef4600048504c8019ff7fdb4758c",
"text": "Convenient assays for superoxide dismutase have necessarily been of the indirect type. It was observed that among the different methods used for the assay of superoxide dismutase in rat liver homogenate, namely the xanthine-xanthine oxidase ferricytochromec, xanthine-xanthine oxidase nitroblue tetrazolium, and pyrogallol autoxidation methods, a modified pyrogallol autoxidation method appeared to be simple, rapid and reproducible. The xanthine-xanthine oxidase ferricytochromec method was applicable only to dialysed crude tissue homogenates. The xanthine-xanthine oxidase nitroblue tetrazolium method, either with sodium carbonate solution, pH 10.2, or potassium phosphate buffer, pH 7·8, was not applicable to rat liver homogenate even after extensive dialysis. Using the modified pyrogallol autoxidation method, data have been obtained for superoxide dismutase activity in different tissues of rat. The effect of age, including neonatal and postnatal development on the activity, as well as activity in normal and cancerous human tissues were also studied. The pyrogallol method has also been used for the assay of iron-containing superoxide dismutase inEscherichia coli and for the identification of superoxide dismutase on polyacrylamide gels after electrophoresis.",
"title": ""
},
{
"docid": "a28917b48a9107b1d06885d7151f393b",
"text": "Logistic regression is an increasingly popular statistical technique used to model the probability of discrete (i.e., binary or multinomial) outcomes. When properly applied, logistic regression analyses yield very powerful insights in to what attributes (i.e., variables) are more or less likely to predict event outcome in a population of interest. These models also show the extent to which changes in the values of the attributes may increase or decrease the predicted probability of event outcome.",
"title": ""
},
{
"docid": "36867b8478a8bd6be79902efd5e9d929",
"text": "Most state-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization system called Stonehenge, which is able to virtualize a cluster-based physical storage system along multiple dimensions, including bandwidth, capacity, and latency. As a result, Stonehenge is able to multiplex multiple virtual disks, each with a distinct bandwidth, capacity, and latency attribute, on a single physical storage system as if they are separate physical disks. A key enabling technology for Stonehenge is an efficiency-aware real-time disk scheduling algorithm called dual-queue disk scheduling, which maximizes disk utilization efficiency while providing Quality of Service (QoS) guarantees. To optimize disk utilization efficiency, Stonehenge exploits run-time measurements extensively, for admission control, computing latency-derived bandwidth requirement, and predicting disk service time.",
"title": ""
}
]
subset: scidocsrr
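Records that pool relevant and non-relevant passages per query also lend themselves to reranking evaluation. Below is a rough sketch of computing mean reciprocal rank over such rows; `score(query, text)` stands in for whatever model is being evaluated and is not part of the dataset.

```python
def mean_reciprocal_rank(rows, score):
    """Average 1/rank of the best-ranked positive passage per row.

    `rows` are records shaped like the ones above; `score` is a user-supplied
    function mapping (query, passage_text) to a relevance score.
    """
    total = 0.0
    for row in rows:
        pool = [(p["text"], 1) for p in row["positive_passages"]]
        pool += [(n["text"], 0) for n in row["negative_passages"]]
        ranked = sorted(pool, key=lambda item: score(row["query"], item[0]),
                        reverse=True)
        best = next(i for i, (_, label) in enumerate(ranked, start=1) if label == 1)
        total += 1.0 / best
    return total / len(rows)
```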
query_id: 8913aeaeb31812ab614555aa4dc52714
query: Sleep timing is more important than sleep length or quality for medical school performance.
positive_passages:
[
{
"docid": "5a1b5f961bf6ed78cff2df6e2ed2d212",
"text": "The transition from wakefulness to sleep is marked by pronounced changes in brain activity. The brain rhythms that characterize the two main types of mammalian sleep, slow-wave sleep (SWS) and rapid eye movement (REM) sleep, are thought to be involved in the functions of sleep. In particular, recent theories suggest that the synchronous slow-oscillation of neocortical neuronal membrane potentials, the defining feature of SWS, is involved in processing information acquired during wakefulness. According to the Standard Model of memory consolidation, during wakefulness the hippocampus receives input from neocortical regions involved in the initial encoding of an experience and binds this information into a coherent memory trace that is then transferred to the neocortex during SWS where it is stored and integrated within preexisting memory traces. Evidence suggests that this process selectively involves direct connections from the hippocampus to the prefrontal cortex (PFC), a multimodal, high-order association region implicated in coordinating the storage and recall of remote memories in the neocortex. The slow-oscillation is thought to orchestrate the transfer of information from the hippocampus by temporally coupling hippocampal sharp-wave/ripples (SWRs) and thalamocortical spindles. SWRs are synchronous bursts of hippocampal activity, during which waking neuronal firing patterns are reactivated in the hippocampus and neocortex in a coordinated manner. Thalamocortical spindles are brief 7-14 Hz oscillations that may facilitate the encoding of information reactivated during SWRs. By temporally coupling the readout of information from the hippocampus with conditions conducive to encoding in the neocortex, the slow-oscillation is thought to mediate the transfer of information from the hippocampus to the neocortex. Although several lines of evidence are consistent with this function for mammalian SWS, it is unclear whether SWS serves a similar function in birds, the only taxonomic group other than mammals to exhibit SWS and REM sleep. Based on our review of research on avian sleep, neuroanatomy, and memory, although involved in some forms of memory consolidation, avian sleep does not appear to be involved in transferring hippocampal memories to other brain regions. Despite exhibiting the slow-oscillation, SWRs and spindles have not been found in birds. Moreover, although birds independently evolved a brain region--the caudolateral nidopallium (NCL)--involved in performing high-order cognitive functions similar to those performed by the PFC, direct connections between the NCL and hippocampus have not been found in birds, and evidence for the transfer of information from the hippocampus to the NCL or other extra-hippocampal regions is lacking. Although based on the absence of evidence for various traits, collectively, these findings suggest that unlike mammalian SWS, avian SWS may not be involved in transferring memories from the hippocampus. Furthermore, it suggests that the slow-oscillation, the defining feature of mammalian and avian SWS, may serve a more general function independent of that related to coordinating the transfer of information from the hippocampus to the PFC in mammals. Given that SWS is homeostatically regulated (a process intimately related to the slow-oscillation) in mammals and birds, functional hypotheses linked to this process may apply to both taxonomic groups.",
"title": ""
},
{
"docid": "06e74a431b45aec75fb21066065e1353",
"text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.",
"title": ""
},
{
"docid": "ec36f7ad0a916ab4040b0fddbf7b1172",
"text": "To review the state of research on the association between sleep among school-aged children and academic outcomes, the authors reviewed published studies investigating sleep, school performance, and cognitive and achievement tests. Tables with brief descriptions of each study's research methods and outcomes are included. Research reveals a high prevalence among school-aged children of suboptimal amounts of sleep and poor sleep quality. Research demonstrates that suboptimal sleep affects how well students are able to learn and how it may adversely affect school performance. Recommendations for further research are discussed.",
"title": ""
}
]
negative_passages:
[
{
"docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa",
"text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking",
"title": ""
},
{
"docid": "9680944f9e6b4724bdba752981845b68",
"text": "A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results. Hence, although abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables to reason about program variants rather than feature combinations. In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing.",
"title": ""
},
{
"docid": "62c4ad2cdd38d8ab8e08bd6636cb3e09",
"text": "When modeling resonant inverters considering the harmonic balance method, the order of the obtained transfer functions is twice the state variables number. This is explained because two components are considered for each state variable. In order to obtain a simpler transfer function model of a halfbridge series resonant inverter, different techniques of model order reduction have been considered in this work. Thus, a reduced-order model has been obtained by residualization providing much simpler analytical expressions than the original model. The proposed model has been validated by simulation and experimentally. The validity range of the proposed model is extended up to a tenth of the switching frequency. Taking into account the great load variability of induction heating applications, the proposed reduced-order model will allow the design of advanced controllers such as Gain-Scheduling.",
"title": ""
},
{
"docid": "3f9a46f472ab276c39fb96b78df132ee",
"text": "In this paper, we present a novel technique that enables capturing of detailed 3D models from flash photographs integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.",
"title": ""
},
{
"docid": "998f2515ea7ceb02f867b709d4a987f9",
"text": "Crop pest and disease diagnosis are amongst important issues arising in the agriculture sector since it has significant impacts on the production of agriculture for a nation. The applying of expert system technology for crop pest and disease diagnosis has the potential to quicken and improve advisory matters. However, the development of an expert system in relation to diagnosing pest and disease problems of a certain crop as well as other identical research works remains limited. Therefore, this study investigated the use of expert systems in managing crop pest and disease of selected published works. This article aims to identify and explain the trends of methodologies used by those works. As a result, a conceptual framework for managing crop pest and disease was proposed on basis of the selected previous works. This article is hoped to relatively benefit the growth of research works pertaining to the development of an expert system especially for managing crop pest and disease in the agriculture domain.",
"title": ""
},
{
"docid": "42f3032626b2a002a855476a718a2b1b",
"text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. We learn a neural network policy which is a part of a more structured controller. While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. In this way, we propose a way of using neural networks to improve expert designed controllers, while maintaining ease of understanding.",
"title": ""
},
{
"docid": "7faed0b112a15a3b53c94df44a1bcb26",
"text": "Since the stability of the method of fundamental solutions (MFS) is a severe issue, the estimation on the bounds of condition number Cond is important to real application. In this paper, we propose the new approaches for deriving the asymptotes of Cond, and apply them for the Dirichlet problem of Laplace’s equation, to provide the sharp bound of Cond for disk domains. Then the new bound of Cond is derived for bounded simply connected domains with mixed types of boundary conditions. Numerical results are reported for Motz’s problem by adding singular functions. The values of Cond grow exponentially with respect to the number of fundamental solutions used. Note that there seems to exist no stability analysis for the MFS on non-disk (or non-elliptic) domains. Moreover, the expansion coefficients obtained by the MFS are oscillatingly large, to cause the other kind of instability: subtraction cancelation errors in the final harmonic solutions.",
"title": ""
},
{
"docid": "4e8d7e1fdb48da4198e21ae1ef2cd406",
"text": "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95% compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pretrain action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics [14], UCF-101 [30] and HMDB-51 [15], models pre-trained on SLAC outperform baselines trained from scratch, by 2.0%, 20.1% and 35.4% in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models. On THUMOS14 [12] and ActivityNet-v1.3[2], our localization model improves the mAP of baseline model by 8.6% and 2.5%, respectively.",
"title": ""
},
{
"docid": "c5113ff741d9e656689786db10484a07",
"text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.",
"title": ""
},
{
"docid": "0ee09adae30459337f8e7261165df121",
"text": "Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.",
"title": ""
},
{
"docid": "9b94a383b2a6e778513a925cc88802ad",
"text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was largely ignored in literature. In this paper, a novel model is proposed for pedestrian behavior modeling by including stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality classification, and abnormal event detection. To evaluate our model, a large pedestrian walking route dataset1 is built. The walking routes of 12, 684 pedestrians from a one-hour crowd surveillance video are manually annotated. It will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.",
"title": ""
},
{
"docid": "4f8a233a8de165f2aeafbad9c93a767a",
"text": "Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging. It is the object of a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. This new-proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.",
"title": ""
},
{
"docid": "35dacb4b15e5c8fbd91cee6da807799a",
"text": "Stochastic gradient algorithms have been the main focus of large-scale learning problems and led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.",
"title": ""
},
{
"docid": "5b1c38fccbd591e6ab00a66ef636eb5d",
"text": "There is a great thrust in industry toward the development of more feasible and viable tools for storing fast-growing volume, velocity, and diversity of data, termed ‘big data’. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology is due to the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient to provide consistent, scalable, and available solutions for continuously growing heterogeneous data. Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of storage technologies available for big data, investigating the performance and magnitude gains of these technologies. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer’s CAP theorem. The significance and applications of storage technologies and support to other categories are discussed. Several future research challenges are highlighted with the intention to expedite the deployment of a reliable and scalable storage system.",
"title": ""
},
{
"docid": "68f0bdda44beba9203a785b8be1035bb",
"text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.",
"title": ""
},
{
"docid": "b2ad81e0c7e352dac4caea559ac675bb",
"text": "A linearly polarized miniaturized printed dipole antenna with novel half bowtie radiating arm is presented for wireless applications including the 2.4 GHz ISM band. This design is approximately 0.363 λ in length at central frequency of 2.97 GHz. An integrated balun with inductive transitions is employed for wideband impedance matching without changing the geometry of radiating arms. This half bowtie dipole antenna displays 47% bandwidth, and a simulated efficiency of over 90% with miniature size. The radiation patterns are largely omnidirectional and display a useful level of measured gain across the impedance bandwidth. The size and performance of the miniaturized half bowtie dipole antenna is compared with similar reduced size antennas with respect to their overall footprint, substrate dielectric constant, frequency of operation and impedance bandwidth. This half bowtie design in this communication outperforms the reference antennas in virtually all categories.",
"title": ""
},
{
"docid": "86a3a5f09181567c5b66d926b0f9d240",
"text": "Indigenous \"First Nations\" communities have consistently associated their disproportionate rates of psychiatric distress with historical experiences of European colonization. This emphasis on the socio-psychological legacy of colonization within tribal communities has occasioned increasingly widespread consideration of what has been termed historical trauma within First Nations contexts. In contrast to personal experiences of a traumatic nature, the concept of historical trauma calls attention to the complex, collective, cumulative, and intergenerational psychosocial impacts that resulted from the depredations of past colonial subjugation. One oft-cited exemplar of this subjugation--particularly in Canada--is the Indian residential school. Such schools were overtly designed to \"kill the Indian and save the man.\" This was institutionally achieved by sequestering First Nations children from family and community while forbidding participation in Native cultural practices in order to assimilate them into the lower strata of mainstream society. The case of a residential school \"survivor\" from an indigenous community treatment program on a Manitoba First Nations reserve is presented to illustrate the significance of participation in traditional cultural practices for therapeutic recovery from historical trauma. An indigenous rationale for the postulated efficacy of \"culture as treatment\" is explored with attention to plausible therapeutic mechanisms that might account for such recovery. To the degree that a return to indigenous tradition might benefit distressed First Nations clients, redressing the socio-psychological ravages of colonization in this manner seems a promising approach worthy of further research investigation.",
"title": ""
},
{
"docid": "ef925e9d448cf4ca9a889b5634b685cf",
"text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.",
"title": ""
},
{
"docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc",
"text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting",
"title": ""
}
] |
scidocsrr
|
80d7567d1d8943c76e6a979ffd1cfa0c
|
Real fuzzy PID control of the UAV AR.Drone 2.0 for hovering under disturbances in known environments
|
[
{
"docid": "7e884438ee8459a441cbe1500f1bac88",
"text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.",
"title": ""
},
{
"docid": "c12d534d219e3d249ba3da1c0956c540",
"text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.",
"title": ""
}
] |
[
{
"docid": "c78ef06693d0b8ae37989b5574938c90",
"text": "Relational databases have been around for many decades and are the database technology of choice for most traditional data-intensive storage and retrieval applications. Retrievals are usually accomplished using SQL, a declarative query language. Relational database systems are generally efficient unless the data contains many relationships requiring joins of large tables. Recently there has been much interest in data stores that do not use SQL exclusively, the so-called NoSQL movement. Examples are Google's BigTable and Facebook's Cassandra. This paper reports on a comparison of one such NoSQL graph database called Neo4j with a common relational database system, MySQL, for use as the underlying technology in the development of a software system to record and query data provenance information.",
"title": ""
},
{
"docid": "b2b4e5162b3d7d99a482f9b82820d59e",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37",
"text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.",
"title": ""
},
{
"docid": "c71ada1231703f2ecb2c2872ef7d5632",
"text": "We present a spatial multiplex optical transmission system named the “Smart Light” (See Figure 1), which provides multiple data streams to multiple points simultaneously. This system consists of a projector and some devices along with a photo-detector. The projector projects images with invisible information to the devices, and devices receive some data. In this system, the data stream is expandable to a positionbased audio or video stream by using DMDs (Digital Micro-mirror Device) or LEDs (Light Emitting Diode) with unperceivable space-time modulation. First, in a preliminary experiment, we confirmed with a commercially produced XGA grade projector transmitting a million points that the data rate of its path is a few bits per second. Detached devices can receive relative position data and other properties from the projector. Second, we made an LED type high-speed projector to transmit audio streams using modulated light on an object and confirmed the transmission of positionbased audio stream data.",
"title": ""
},
{
"docid": "b9c0ccebb8f7339830daccb235338d4a",
"text": "ÐA problem gaining interest in pattern recognition applied to data mining is that of selecting a small representative subset from a very large data set. In this article, a nonparametric data reduction scheme is suggested. It attempts to represent the density underlying the data. The algorithm selects representative points in a multiscale fashion which is novel from existing density-based approaches. The accuracy of representation by the condensed set is measured in terms of the error in density estimates of the original and reduced sets. Experimental studies on several real life data sets show that the multiscale approach is superior to several related condensation methods both in terms of condensation ratio and estimation error. The condensed set obtained was also experimentally shown to be effective for some important data mining tasks like classification, clustering, and rule generation on large data sets. Moreover, it is empirically found that the algorithm is efficient in terms of sample complexity. Index TermsÐData mining, multiscale condensation, scalability, density estimation, convergence in probability, instance learning.",
"title": ""
},
{
"docid": "888e8f68486c08ffe538c46ba76de85c",
"text": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.",
"title": ""
},
{
"docid": "b2d334cc7d79d2e3ebd573bbeaa2dfbe",
"text": "Objectives\nTo measure the occurrence and levels of depression, anxiety and stress in undergraduate dental students using the Depression, Anxiety and Stress Scale (DASS-21).\n\n\nMethods\nThis cross-sectional study was conducted in November and December of 2014. A total of 289 dental students were invited to participate, and 277 responded, resulting in a response rate of 96%. The final sample included 247 participants. Eligible participants were surveyed via a self-reported questionnaire that included the validated DASS-21 scale as the assessment tool and questions about demographic characteristics and methods for managing stress.\n\n\nResults\nAbnormal levels of depression, anxiety and stress were identified in 55.9%, 66.8% and 54.7% of the study participants, respectively. A multiple linear regression analysis revealed multiple predictors: gender (for anxiety b=-3.589, p=.016 and stress b=-4.099, p=.008), satisfaction with faculty relationships (for depression b=-2.318, p=.007; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), satisfaction with peer relationships (for depression b=-3.527, p<.001; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), and dentistry as the first choice for field of study (for stress b=-2.648, p=.045). The standardized coefficients demonstrated the relationship and strength of the predictors for each subscale. To cope with stress, students engaged in various activities such as reading, watching television and seeking emotional support from others.\n\n\nConclusions\nThe high occurrence of depression, anxiety and stress among dental students highlights the importance of providing support programs and implementing preventive measures to help students, particularly those who are most susceptible to higher levels of these psychological conditions.",
"title": ""
},
{
"docid": "8cd52cdc44c18214c471716745e3c00f",
"text": "The design of electric vehicles require a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.",
"title": ""
},
{
"docid": "9df5329fcf5e5dd6394f76040d8d8402",
"text": "Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets.",
"title": ""
},
{
"docid": "962ab9e871dc06c3cd290787dc7e71aa",
"text": "The conventional digital hardware computational blocks with different structures are designed to compute the precise results of the assigned calculations. The main contribution of our proposed Bio-inspired Imprecise Computational blocks (BICs) is that they are designed to provide an applicable estimation of the result instead of its precise value at a lower cost. These novel structures are more efficient in terms of area, speed, and power consumption with respect to their precise rivals. Complete descriptions of sample BIC adder and multiplier structures as well as their error behaviors and synthesis results are introduced in this paper. It is then shown that these BIC structures can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.",
"title": ""
},
{
"docid": "7208a2b257c7ba7122fd2e278dd1bf4a",
"text": "Abstract—This paper shows in detail the mathematical model of direct and inverse kinematics for a robot manipulator (welding type) with four degrees of freedom. Using the D-H parameters, screw theory, numerical, geometric and interpolation methods, the theoretical and practical values of the position of robot were determined using an optimized algorithm for inverse kinematics obtaining the values of the particular joints in order to determine the virtual paths in a relatively short time.",
"title": ""
},
{
"docid": "02fd763f6e15b07187e3cbe0fd3d0e18",
"text": "The Batcher`s bitonic sorting algorithm is a parallel sorting algorithm, which is used for sorting the numbers in modern parallel machines. There are various parallel sorting algorithms such as radix sort, bitonic sort, etc. It is one of the efficient parallel sorting algorithm because of load balancing property. It is widely used in various scientific and engineering applications. However, Various researches have worked on a bitonic sorting algorithm in order to improve up the performance of original batcher`s bitonic sorting algorithm. In this paper, tried to review the contribution made by these researchers.",
"title": ""
},
{
"docid": "1203f22bfdfc9ecd211dbd79a2043a6a",
"text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. 
The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the orresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.",
"title": ""
},
{
"docid": "4a6c7b68ea23f910f0edc35f4542e5cb",
"text": "Microgrids have been proposed in order to handle the impacts of Distributed Generators (DGs) and make conventional grids suitable for large scale deployments of distributed generation. However, the introduction of microgrids brings some challenges. Protection of a microgrid and its entities is one of them. Due to the existence of generators at all levels of the distribution system and two distinct operating modes, i.e. Grid Connected and Islanded modes, the fault currents in a system vary substantially. Consequently, the traditional fixed current relay protection schemes need to be improved. This paper presents a conceptual design of a microgrid protection system which utilizes extensive communication to monitor the microgrid and update relay fault currents according to the variations in the system. The proposed system is designed so that it can respond to dynamic changes in the system such as connection/disconnection of DGs.",
"title": ""
},
{
"docid": "9afdd51ba034e9580c52f0aba50dfa4b",
"text": "Advances in field programmable gate arrays (FPGAs), which are the platform of choice for reconfigurable computing, have made it possible to use FPGAs in increasingly ma ny areas of computing, including complex scientific applicati ons. These applications demand high performance and high-preci s on, floating-point arithmetic. Until now, most of the research has not focussed on compliance with IEEE standard 754, focusing ins tead upon custom formats and bitwidths. In this paper, we present double-precision floating-point cores that are parameteri zed by their degree of pipelining and the features of IEEE standard754 that they implement. We then analyze the effects of supporti ng the standard when these cores are used in an FPGA-based accelerator for Lennard-Jones force and potential calculations that are part of molecular dynamics (MD) simulations.",
"title": ""
},
{
"docid": "2431ee8fb0dcfd84c61e60ee41a95edb",
"text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.",
"title": ""
},
{
"docid": "cc6895789b42f7ae779c2236cde4636a",
"text": "Modern day social media search and recommender systems require complex query formulation that incorporates both user context and their explicit search queries. Users expect these systems to be fast and provide relevant results to their query and context. With millions of documents to choose from, these systems utilize a multi-pass scoring function to narrow the results and provide the most relevant ones to users. Candidate selection is required to sift through all the documents in the index and select a relevant few to be ranked by subsequent scoring functions. It becomes crucial to narrow down the document set while maintaining relevant ones in resulting set. In this tutorial we survey various candidate selection techniques and deep dive into case studies on a large scale social media platform. In the later half we provide hands-on tutorial where we explore building these candidate selection models on a real world dataset and see how to balance the tradeoff between relevance and latency.",
"title": ""
},
{
"docid": "18b0f6712396476dc4171128ff08a355",
"text": "Heterogeneous multicore architectures have the potential for high performance and energy efficiency. These architectures may be composed of small power-efficient cores, large high-performance cores, and/or specialized cores that accelerate the performance of a particular class of computation. Architects have explored multiple dimensions of heterogeneity, both in terms of micro-architecture and specialization. While early work constrained the cores to share a single ISA, this work shows that allowing heterogeneous ISAs further extends the effectiveness of such architectures\n This work exploits the diversity offered by three modern ISAs: Thumb, x86-64, and Alpha. This architecture has the potential to outperform the best single-ISA heterogeneous architecture by as much as 21%, with 23% energy savings and a reduction of 32% in Energy Delay Product.",
"title": ""
},
{
"docid": "033b05d21f5b8fb5ce05db33f1cedcde",
"text": "Seasonal occurrence of the common cutworm Spodoptera litura (Fab.) (Lepidoptera: Noctuidae) moths captured in synthetic sex pheromone traps and associated field population of eggs and larvae in soybean were examined in India from 2009 to 2011. Male moths of S. litura first appeared in late July or early August and continued through October. Peak male trap catches occurred during the second fortnight of September, which was within soybean reproductive stages. Similarly, the first appearance of S. litura egg masses and larval populations were observed after the first appearance of male moths in early to mid-August, and were present in the growing season up to late September to mid-October. The peak appearance of egg masses and larval populations always corresponded with the peak activity of male moths recorded during mid-September in all years. Correlation studies showed that weekly mean trap catches were linearly and positively correlated with egg masses and larval populations during the entire growing season of soybean. Seasonal means of male moth catches in pheromone traps during the 2010 and 2011 seasons were significantly lower than the catches during the 2009 season. However, seasonal means of the egg masses and larval populations were not significantly different between years. Pheromone traps may be useful indicators of the onset of numbers of S. litura eggs and larvae in soybean fields.",
"title": ""
},
{
"docid": "20c6da8e705ba063d139d4adba7bcde2",
"text": "Copyright © 2010 American Heart Association. All rights reserved. Print ISSN: 0009-7322. Online 72514 Circulation is published by the American Heart Association. 7272 Greenville Avenue, Dallas, TX DOI: 10.1161/CIR.0b013e3181f9a223 published online Oct 11, 2010; Circulation Care, Perioperative and Resuscitation Critical Association Council on Clinical Cardiology and Council on Cardiopulmonary, Parshall, Gary S. Francis, Mihai Gheorghiade and on behalf of the American Heart Anderson, Cynthia Arslanian-Engoren, W. Brian Gibler, James K. McCord, Mark B. Neal L. Weintraub, Sean P. Collins, Peter S. Pang, Phillip D. Levy, Allen S. Statement From the American Heart Association Treatment, and Disposition: Current Approaches and Future Aims. A Scientific Acute Heart Failure Syndromes: Emergency Department Presentation, http://circ.ahajournals.org located on the World Wide Web at: The online version of this article, along with updated information and services, is",
"title": ""
}
] |
scidocsrr
|
89ec42167ac8e1243fca82dc5a7df1ae
|
RGBD-camera based get-up event detection for hospital fall prevention
|
[
{
"docid": "b9a893fb526955b5131860a1402e2f7c",
"text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"title": ""
}
] |
[
{
"docid": "d90954eaae0c9d84e261c6d0794bbf76",
"text": "The index case of the Ebola virus disease epidemic in West Africa is believed to have originated in Guinea. By June 2014, Guinea, Liberia, and Sierra Leone were in the midst of a full-blown and complex global health emergency. The devastating effects of this Ebola epidemic in West Africa put the global health response in acute focus for urgent international interventions. Accordingly, in October 2014, a World Health Organization high-level meeting endorsed the concept of a phase 2/3 clinical trial in Liberia to study Ebola vaccines. As a follow-up to the global response, in November 2014, the Government of Liberia and the US Government signed an agreement to form a research partnership to investigate Ebola and to assess intervention strategies for treating, controlling, and preventing the disease in Liberia. This agreement led to the establishment of the Joint Liberia-US Partnership for Research on Ebola Virus in Liberia as the beginning of a long-term collaborative partnership in clinical research between the two countries. In this article, we discuss the methodology and related challenges associated with the implementation of the Ebola vaccines clinical trial, based on a double-blinded randomized controlled trial, in Liberia.",
"title": ""
},
{
"docid": "3f8ed9f5b015f50989ebde22329e6e7c",
"text": "In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up to date bibliography on the maximum clique and related problems is also provided.",
"title": ""
},
{
"docid": "af598c452d9a6589e45abe702c7cab58",
"text": "This paper proposes the concept of “liveaction virtual reality games” as a new genre of digital games based on an innovative combination of live-action, mixed-reality, context-awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions. Live-action virtual reality games are “live-action games” because a player physically acts out (using his/her real body and senses) his/her “avatar” (his/her virtual representation) in the game stage – the mixed-reality environment where the game happens. The game stage is a kind of “augmented virtuality” – a mixedreality where the virtual world is augmented with real-world information. In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world architecture as the basic geometry and context information. Physical objects that reside in the physical world are also mapped to virtual elements. Liveaction virtual reality games keeps the virtual and real-worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities. This setup enables the players to touch physical architectural elements (such as walls) and other objects, “feeling” the game stage. Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly. Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.",
"title": ""
},
{
"docid": "c1bfef951e9775f6ffc949c5110e1bd1",
"text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.",
"title": ""
},
{
"docid": "80c7a60035f08fcefc6f5e0ba1c82405",
"text": "This paper deals with word length in twenty of Jane Austen's letters and is part of a research project performed in Göttingen. Word length in English has so far only been studied in the context of contemporary texts (Hasse & Weinbrenner, 1995; Riedemann, 1994) and in the English dictionary (Rothschild, 1986). It has been ascertained that word length in texts abides by a law having the form of the mixed Poisson distribution -an assumption which in a language like English can easily be justified. However, in special texts other regularities can arise. Individual or genre-like factors can induce a systematic deviation in one or more frequency classes. We say that the phenomenon is on the way to another attractor. The first remedy in such cases is a local modification of the given frequency classes; the last remedy is the search for another model. THE DATA Letters were examined because it can be assumed that they are written down without interruption, and hence revised versions or the conscious use of stylistic means are the exception. The assumed natural rhythm governing word length in writing is thus believed to have remained mostly uninfluenced and constant. The length of the selected letters is between 126 and 494 words. They date from 1796 to 1817 and are partly businesslike and partly private. The letters to Jane Austen's sister Cassandra above all are written in an 'informal' style. In general, however, the letters are on a high stylistic level, which is not only characteristic of the use of language at that time, but also a main feature of Jane Austen's personal style. Thus contractions such as don't, can't, wouldn 't etc. do not occur. word depends on the number of vowels or diphthongs. Diphthongs and triphthongs can also be differentiated, both of these would count as one syllable. This paper only deals with diphthongs. The number of syllables of abbreviations is counted according to its fully spoken form. Thus addresses and titles such as 'Mrs', 'Mr', 'Md' and 'Capt' consist of two syllables; 'Lieut' consists of three syllables. The same holds for figures and for the abbreviations of months. MS is the common short form for 'Manuscript'; 'comps' (complements), 'G.Mama' (Grandmama), 'morn' (morning), 'c ' (could), 'w ' (would) or 'rec' (received) seem to be the writer's idiosyncratic abbreviations. In all cases length is determined by the spoken form. The analysis is based on the 'received pronunciation' of British English. Only the running text without address, date, or place has been considered. ANALYSING THE DATA General Criteria Length is determined by the number of syllables in each word. \"Word\" is defined as an orthographic unit. The number of syllables in a Findings As ascertained by the software tool 'AltmannFitter' (1994) the best model was found to be the positive Singh-Poisson distribution (= inflated zero truncated Poisson distribution), which has the following formula: *Address correspondence to: J. Frischen, Brüder-Grimm-Allee 2, 37075 Göttingen, Germany. D ow nl oa de d by [ K or ea U ni ve rs ity ] at 0 4: 53 1 0 Ja nu ar y 20 15 WORD LENGTH JANE AUSTEN'S LETTERS 81 aae' Table 3. Letter 16, Austen, 1798, to Cassandra Austen. fx NPx aae~ x\\(l-e-)' x=2,3,... Distributions modified in this way indicate that the author tends to leave the basic model (in the case of English, the Poisson distribution) by local modification of the shortest class (here x 188 57 15 4 1 187.79 56.53 16.38 3.561 0.74J",
"title": ""
},
{
"docid": "34f6603912c9775fc48329e596467107",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "91d0f12e9303b93521146d4d650a63df",
"text": "We utilize the state-of-the-art in deep learning to show that we can learn by example what constitutes humor in the context of a Yelp review. To the best of the authors knowledge, no systematic study of deep learning for humor exists – thus, we construct a scaffolded study. First, we use “shallow” methods such as Random Forests and Linear Discriminants built on top of bag-of-words and word vector features. Then, we build deep feedforward networks on top of these features – in some sense, measuring how much of an effect basic feedforward nets help. Then, we use recurrent neural networks and convolutional neural networks to more accurately model the sequential nature of a review.",
"title": ""
},
{
"docid": "402bf66ab180944e8f3068bef64fbc77",
"text": "EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.",
"title": ""
},
{
"docid": "0c67bd1867014053a5bec3869f3b4f8c",
"text": "BACKGROUND AND PURPOSE\nConstraint-induced movement therapy (CI therapy) has previously been shown to produce large improvements in actual amount of use of a more affected upper extremity in the \"real-world\" environment in patients with chronic stroke (ie, >1 year after the event). This work was carried out in an American laboratory. Our aim was to determine whether these results could be replicated in another laboratory located in Germany, operating within the context of a healthcare system in which administration of conventional types of physical therapy is generally more extensive than in the United States.\n\n\nMETHODS\nFifteen chronic stroke patients were given CI therapy, involving restriction of movement of the intact upper extremity by placing it in a sling for 90% of waking hours for 12 days and training (by shaping) of the more affected extremity for 7 hours on the 8 weekdays during that period.\n\n\nRESULTS\nPatients showed a significant and very large degree of improvement from before to after treatment on a laboratory motor test and on a test assessing amount of use of the affected extremity in activities of daily living in the life setting (effect sizes, 0.9 and 2.2, respectively), with no decrement in performance at 6-month follow-up. During a pretreatment control test-retest interval, there were no significant changes on these tests.\n\n\nCONCLUSIONS\nResults replicate in Germany the findings with CI therapy in an American laboratory, suggesting that the intervention has general applicability.",
"title": ""
},
{
"docid": "077162116799dffe986cb488dda2ee56",
"text": "We present hybrid concolic testing, an algorithm that interleaves random testing with concolic execution to obtain both a deep and a wide exploration of program state space. Our algorithm generates test inputs automatically by interleaving random testing until saturation with bounded exhaustive symbolic exploration of program points. It thus combines the ability of random search to reach deep program states quickly together with the ability of concolic testing to explore states in a neighborhood exhaustively. We have implemented our algorithm on top of CUTE and applied it to obtain better branch coverage for an editor implementation (VIM 5.7, 150K lines of code) as well as a data structure implementation in C. Our experiments suggest that hybrid concolic testing can handle large programs and provide, for the same testing budget, almost 4× the branch coverage than random testing and almost 2× that of concolic testing.",
"title": ""
},
{
"docid": "01e53610e746555afadfc9387a66ce05",
"text": "This paper presents a survey of the autopilot systems for small or micro unmanned aerial vehicles (UAVs). The objective is to provide a summary of the current commercial, open source and research autopilot systems for convenience of potential small UAV users. The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both the hardware and software viewpoints. Several typical off-the-shelf autopilot packages are compared in terms of sensor packages, observation approaches and controller strengths. Afterwards some open source autopilot systems are introduced. Conclusion is made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "d7e7cdc9ac55d5af199395becfe02d73",
"text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.",
"title": ""
},
{
"docid": "8f570416ceecf87310b7780ec935d814",
"text": "BACKGROUND\nInguinal lymph node involvement is an important prognostic factor in penile cancer. Inguinal lymph node dissection allows staging and treatment of inguinal nodal disease. However, it causes morbidity and is associated with complications, such as lymphocele, skin loss and infection. Video Endoscopic Inguinal Lymphadenectomy (VEIL) is an endoscopic procedure, and it seems to be a new and attractive approach duplicating the standard open procedure with less morbidity. We present here a critical perioperative assessment with points of technique.\n\n\nMETHODS\nTen patients with moderate to high grade penile carcinoma with clinically negative inguinal lymph nodes were subjected to elective VEIL. VEIL was done in standard surgical steps. Perioperative parameters were assessed that is - duration of the surgery, lymph-related complications, time until drain removal, lymph node yield, surgical emphysema and histopathological positivity of lymph nodes.\n\n\nRESULTS\nOperative time for VEIL was 120 to 180 minutes. Lymph node yield was 7 to 12 lymph nodes. No skin related complications were seen with VEIL. Lymph related complications, that is, lymphocele, were seen in only two patients. The suction drain was removed after four to eight days (mean 5.1). Overall morbidity was 20% with VEIL.\n\n\nCONCLUSION\nIn our early experience, VEIL was a safe and feasible technique in patients with penile carcinoma with non palpable inguinal lymph nodes. It allows the removal of inguinal lymph nodes within the same limits as in conventional surgical dissection and potentially reduces surgical morbidity.",
"title": ""
},
{
"docid": "72ddcb7a55918a328576a811a89d245b",
"text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research has been established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNAmolecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.",
"title": ""
},
{
"docid": "9b0ed9c60666c36f8cf33631f791687d",
"text": "The central notion of Role-Based Access Control (RBAC) is that users do not have discretionary access to enterprise objects. Instead, access permissions are administratively associated with roles, and users are administratively made members of appropriate roles. This idea greatly simplifies management of authorization while providing an opportunity for great flexibility in specifying and enforcing enterprisespecific protection policies. Users can be made members of roles as determined by their responsibilities and qualifications and can be easily reassigned from one role to another without modifying the underlying access structure. Roles can be granted new permissions as new applications and actions are incorporated, and permissions can be revoked from roles as needed. Some users and vendors have recognized the potential benefits of RBAC without a precise definition of what RBAC constitutes. Some RBAC features have been implemented in commercial products without a frame of reference as to the functional makeup and virtues of RBAC [1]. This lack of definition makes it difficult for consumers to compare products and for vendors to get credit for the effectiveness of their products in addressing known security problems. To correct these deficiencies, a number of government sponsored research efforts are underway to define RBAC precisely in terms of its features and the benefits it affords. This research includes: surveys to better understand the security needs of commercial and government users [2], the development of a formal RBAC model, architecture, prototype, and demonstrations to validate its use and feasibility. As a result of these efforts, RBAC systems are now beginning to emerge. The purpose of this paper is to provide additional insight as to the motivations and functionality that might go behind the official RBAC name.",
"title": ""
},
{
"docid": "644f61bc267d3dcb915f8c36c1584605",
"text": "This paper discusses the design and development of an experimental tabletop robot called \"Haru\" based on design thinking methodology. Right from the very beginning of the design process, we have brought an interdisciplinary team that includes animators, performers and sketch artists to help create the first iteration of a distinctive anthropomorphic robot design based on a concept that leverages form factor with functionality. Its unassuming physical affordance is intended to keep human expectation grounded while its actual interactive potential stokes human interest. The meticulous combination of both subtle and pronounced mechanical movements together with its stunning visual displays, highlight its affective affordance. As a result, we have developed the first iteration of our tabletop robot rich in affective potential for use in different research fields involving long-term human-robot interaction.",
"title": ""
},
{
"docid": "86820c43e63066930120fa5725b5b56d",
"text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.",
"title": ""
}
] |
scidocsrr
|
266bd9346ae3016067c36dcb68031cca
|
Image encryption using chaotic logistic map
|
[
{
"docid": "fc9eae18a5a44ee7df22d6c7bdb5a164",
"text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.",
"title": ""
}
] |
[
{
"docid": "d8a68a9e769f137e06ab05e4d4075dce",
"text": "The inelastic response of existing reinforced concrete (RC) buildings without seismic details is investigated, presenting the results from more than 1000 nonlinear analyses. The seismic performance is investigated for two buildings, a typical building form of the 60s and a typical form of the 80s. Both structures are designed according to the old Greek codes. These building forms are typical for that period for many Southern European countries. Buildings of the 60s do not have seismic details, while buildings of the 80s have elementary seismic details. The influence of masonry infill walls is also investigated for the building of the 60s. Static pushover and incremental dynamic analyses (IDA) for a set of 15 strong motion records are carried out for the three buildings, two bare and one infilled. The IDA predictions are compared with the results of pushover analysis and the seismic demand according to Capacity Spectrum Method (CSM) and N2 Method. The results from IDA show large dispersion on the response, available ductility capacity, behaviour factor and failure displacement, depending on the strong motion record. CSM and N2 predictions are enveloped by the nonlinear dynamic predictions, but have significant differences from the mean values. The better behaviour of the building of the 80s compared to buildings of the 60s is validated with both pushover and nonlinear dynamic analyses. Finally, both types of analysis show that fully infilled frames exhibit an improved behaviour compared to bare frames.",
"title": ""
},
{
"docid": "9150005965c893e6c2efa15c469fdffb",
"text": "Low power has emerged as a principal theme in today's electronics industry. The need for low power has caused a major paradigm shift in which power dissipation is as important as performance and area. This article presents an in-depth survey of CAD methodologies and techniques for designing low power digital CMOS circuits and systems and describes the many issues facing designers at architectural, logical, and physical levels of design abstraction. It reviews some of the techniques and tools that have been proposed to overcome these difficulties and outlines the future challenges that must be met to design low power, high performance systems.",
"title": ""
},
{
"docid": "6558b2a3c43e11d58f3bb829425d6a8d",
"text": "While end-to-end neural conversation models have led to promising advances in reducing hand-crafted features and errors induced by the traditional complex system architecture, they typically require an enormous amount of data due to the lack of modularity. Previous studies adopted a hybrid approach with knowledge-based components either to abstract out domainspecific information or to augment data to cover more diverse patterns. On the contrary, we propose to directly address the problem using recent developments in the space of continual learning for neural models. Specifically, we adopt a domainindependent neural conversational model and introduce a novel neural continual learning algorithm that allows a conversational agent to accumulate skills across different tasks in a data-efficient way. To the best of our knowledge, this is the first work that applies continual learning to conversation systems. We verified the efficacy of our method through a conversational skill transfer from either synthetic dialogs or human-human dialogs to human-computer conversations in a customer support domain.",
"title": ""
},
{
"docid": "435200b067ebd77f69a04cc490d73fa6",
"text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.",
"title": ""
},
{
"docid": "c2891abf8297b5dcf0e21dfa9779a017",
"text": "The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large \"knowledge repositories\" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed.\n In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study.",
"title": ""
},
{
"docid": "4d4de3ff3c99779c7fd5bd60fc006189",
"text": "With the fast growing information technologies, high efficiency AC-DC front-end power supplies are becoming more and more desired in all kinds of distributed power system applications due to the energy conservation consideration. For the power factor correction (PFC) stage, the conventional constant frequency average current mode control has very low efficiency at light load due to high switching frequency related loss. The constant on-time control for PFC features the automatic reduction of switching frequency at light load, resulting improved light load efficiency. However, lower heavy load efficiency of the constant on-time control is observed because of very high frequency at Continuous Conduction Mode (CCM). By carefully comparing the on-time and frequency profiles between constant on-time and constant frequency control, a novel adaptive on-time control is proposed to improve the light load efficiency without sacrificing the heavy load efficiency. The performance of the adaptive on-time control is verified by experiment.",
"title": ""
},
{
"docid": "aba4e6baa69a2ca7d029ebc33931fd4d",
"text": "Along with the improvement of radar technologies Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) has come to be an active research area. SAR/ISAR are radar techniques to generate a two-dimensional high-resolution image of a target. Unlike other similar experiments using Convolutional Neural Networks (CNN) to solve this problem, we utilize an unusual approach that leads to better performance and faster training times. Our CNN uses complex values generated by a simulation to train the network; additionally, we utilize a multi-radar approach to increase the accuracy of the training and testing processes, thus resulting in higher accuracies than the other papers working on SAR/ISAR ATR. We generated our dataset with 7 different aircraft models with a radar simulator we developed called RadarPixel; it is a Windows GUI program implemented using Matlab and Java programing, the simulator is capable of accurately replicating a real SAR/ISAR configurations. Our objective is utilize our multiradar technique and determine the optimal number of radars needed to detect and classify targets.",
"title": ""
},
{
"docid": "74f8127bc620fa1c9797d43dedea4d45",
"text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.",
"title": ""
},
{
"docid": "63ed24b818f83ab04160b5c690075aac",
"text": "In this paper, we discuss the impact of digital control in high-frequency switched-mode power supplies (SMPS), including point-of-load and isolated DC-DC converters, microprocessor power supplies, power-factor-correction rectifiers, electronic ballasts, etc., where switching frequencies are typically in the hundreds of kHz to MHz range, and where high efficiency, static and dynamic regulation, low size and weight, as well as low controller complexity and cost are very important. To meet these application requirements, a digital SMPS controller may include fast, small analog-to-digital converters, hardware-accelerated programmable compensators, programmable digital modulators with very fine time resolution, and a standard microcontroller core to perform programming, monitoring and other system interface tasks. Based on recent advances in circuit and control techniques, together with rapid advances in digital VLSI technology, we conclude that high-performance digital controller solutions are both feasible and practical, leading to much enhanced system integration and performance gains. Examples of experimentally demonstrated results are presented, together with pointers to areas of current and future research and development.",
"title": ""
},
{
"docid": "08f49b003a3a5323e38e4423ba6503a4",
"text": "Neurofeedback (NF), a type of neurobehavioral training, has gained increasing attention in recent years, especially concerning the treatment of children with ADHD. Promising results have emerged from recent randomized controlled studies, and thus, NF is on its way to becoming a valuable addition to the multimodal treatment of ADHD. In this review, we summarize the randomized controlled trials in children with ADHD that have been published within the last 5 years and discuss issues such as the efficacy and specificity of effects, treatment fidelity and problems inherent in placebo-controlled trials of NF. Directions for future NF research are outlined, which should further address specificity and help to determine moderators and mediators to optimize and individualize NF training. Furthermore, we describe methodological (tomographic NF) and technical ('tele-NF') developments that may also contribute to further improvements in treatment outcome.",
"title": ""
},
{
"docid": "6ea4ecb12ca077c07f4706b6d11130db",
"text": "We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions. In particular, we study the number of linear regions, i.e. pieces, that a PWL function represented by a DNN can attain, both theoretically and empirically. We present (i) tighter upper and lower bounds for the maximum number of linear regions on rectifier networks, which are exact for inputs of dimension one; (ii) a first upper bound for multi-layer maxout networks; and (iii) a first method to perform exact enumeration or counting of the number of regions by modeling the DNN with a mixed-integer linear formulation. These bounds come from leveraging the dimension of the space defining each linear region. The results also indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with same number of neurons if that number exceeds the dimension of the input.",
"title": ""
},
{
"docid": "cc2e24cd04212647f1c29482aa12910d",
"text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.",
"title": ""
},
{
"docid": "7b1a6768cc6bb975925a754343dc093c",
"text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.",
"title": ""
},
{
"docid": "53ebcdf1dfb5b850228ac422fdd50490",
"text": "A frequent goal of flow cytometric analysis is to classify cells as positive or negative for a given marker, or to determine the precise ratio of positive to negative cells. This requires good and reproducible instrument setup, and careful use of controls for analyzing and interpreting the data. The type of controls to include in various kinds of flow cytometry experiments is a matter of some debate and discussion. In this tutorial, we classify controls in various categories, describe the options within each category, and discuss the merits of each option.",
"title": ""
},
{
"docid": "e28f2a2d5f3a0729943dca52da5d45b6",
"text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframebased, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.",
"title": ""
},
{
"docid": "dbcdcd2cdf8894f853339b5fef876dde",
"text": "Genicular nerve radiofrequency ablation (RFA) has recently gained popularity as an intervention for chronic knee pain in patients who have failed other conservative or surgical treatments. Long-term efficacy and adverse events are still largely unknown. Under fluoroscopic guidance, thermal RFA targets the lateral superior, medial superior, and medial inferior genicular nerves, which run in close proximity to the genicular arteries that play a crucial role in supplying the distal femur, knee joint, meniscus, and patella. RFA targets nerves by relying on bony landmarks, but fails to provide visualization of vascular structures. Although vascular injuries after genicular nerve RFA have not been reported, genicular vascular complications are well documented in the surgical literature. This article describes the anatomy, including detailed cadaveric dissections and schematic drawings, of the genicular neurovascular bundle. The present investigation also included a comprehensive literature review of genicular vascular injuries involving those arteries which lie near the targets of genicular nerve RFA. These adverse vascular events are documented in the literature as case reports. Of the 27 cases analyzed, 25.9% (7/27) involved the lateral superior genicular artery, 40.7% (11/27) involved the medial superior genicular artery, and 33.3% (9/27) involved the medial inferior genicular artery. Most often, these vascular injuries result in the formation of pseudoaneurysm, arteriovenous fistula (AVF), hemarthrosis, and/or osteonecrosis of the patella. Although rare, these complications carry significant morbidities. Based on the detailed dissections and review of the literature, our investigation suggests that vascular injury is a possible risk of genicular RFA. Lastly, recommendations are offered to minimize potential iatrogenic complications.",
"title": ""
},
{
"docid": "c9d95b3656c703f4ce49c591a3f0a00f",
"text": "Due to cellular heterogeneity, cell nuclei classification, segmentation, and detection from pathological images are challenging tasks. In the last few years, Deep Convolutional Neural Networks (DCNN) approaches have been shown state-of-the-art (SOTA) performance on histopathological imaging in different studies. In this work, we have proposed different advanced DCNN models and evaluated for nuclei classification, segmentation, and detection. First, the Densely Connected Recurrent Convolutional Network (DCRN) model is used for nuclei classification. Second, Recurrent Residual U-Net (R2U-Net) is applied for nuclei segmentation. Third, the R2U-Net regression model which is named UD-Net is used for nuclei detection from pathological images. The experiments are conducted with different datasets including Routine Colon Cancer(RCC) classification and detection dataset, and Nuclei Segmentation Challenge 2018 dataset. The experimental results show that the proposed DCNN models provide superior performance compared to the existing approaches for nuclei classification, segmentation, and detection tasks. The results are evaluated with different performance metrics including precision, recall, Dice Coefficient (DC), Means Squared Errors (MSE), F1-score, and overall accuracy. We have achieved around 3.4% and 4.5% better F-1 score for nuclei classification and detection tasks compared to recently published DCNN based method. In addition, R2U-Net shows around 92.15% testing accuracy in term of DC. These improved methods will help for pathological practices for better quantitative analysis of nuclei in Whole Slide Images(WSI) which ultimately will help for better understanding of different types of cancer in clinical workflow.",
"title": ""
},
{
"docid": "165fbade7d495ce47a379520697f0d75",
"text": "Neutral-point-clamped (NPC) inverters are the most widely used topology of multilevel inverters in high-power applications (several megawatts). This paper presents in a very simple way the basic operation and the most used modulation and control techniques developed to date. Special attention is paid to the loss distribution in semiconductors, and an active NPC inverter is presented to overcome this problem. This paper discusses the main fields of application and presents some technological problems such as capacitor balance and losses.",
"title": ""
}
] |
scidocsrr
|
d5d8cb033291263ffeb48f31e72cde1b
|
Rekindling network protocol innovation with user-level stacks
|
[
{
"docid": "f9c938a98621f901c404d69a402647c7",
"text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.",
"title": ""
}
] |
[
{
"docid": "bf5f08174c55ed69e454a87ff7fbe6e2",
"text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "bb201a87b4f81c9c4d2c8889d4bd3a6a",
"text": "Computers have difficulty learning how to play Texas Hold’em Poker. The game contains a high degree of stochasticity, hidden information, and opponents that are deliberately trying to mis-represent their current state. Poker has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good results in large state spaces, and neural networks have been shown to be able to find solutions to non-linear search problems. In this paper, we present several algorithms for teaching agents how to play No-Limit Texas Hold’em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to be able to handle populations of Poker agents, which can sometimes contain several hundred opponents, instead of a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold’em Poker agents.",
"title": ""
},
{
"docid": "cf1d8589fb42bd2af21e488e3ea79765",
"text": "This paper presents ProRace, a dynamic data race detector practical for production runs. It is lightweight, but still offers high race detection capability. To track memory accesses, ProRace leverages instruction sampling using the performance monitoring unit (PMU) in commodity processors. Our PMU driver enables ProRace to sample more memory accesses at a lower cost compared to the state-of-the-art Linux driver. Moreover, ProRace uses PMU-provided execution contexts including register states and program path, and reconstructs unsampled memory accesses offline. This technique allows \\ProRace to overcome inherent limitations of sampling and improve the detection coverage by performing data race detection on the trace with not only sampled but also reconstructed memory accesses. Experiments using racy production software including apache and mysql shows that, with a reasonable offline cost, ProRace incurs only 2.6% overhead at runtime with 27.5% detection probability with a sampling period of 10,000.",
"title": ""
},
{
"docid": "86e4fa3a9cc7dd6298785f40dae556b6",
"text": "Stochastic block model (SBM) and its variants are popular models used in community detection for network data. In this paper, we propose a feature adjusted stochastic block model (FASBM) to capture the impact of node features on the network links as well as to detect the residual community structure beyond that explained by the node features. The proposed model can accommodate multiple node features and estimate the form of feature impacts from the data. Moreover, unlike many existing algorithms that are limited to binary-valued interactions, the proposed FASBM model and inference approaches are easily applied to relational data that generates from any exponential family distribution. We illustrate the methods on simulated networks and on two real world networks: a brain network and an US air-transportation network.",
"title": ""
},
{
"docid": "49a6de5759f4e760f68939e9292928d8",
"text": "An ongoing controversy exists in the prototyping community about how closely in form and function a user-interface prototype should represent the final product. This dispute is referred to as the \" Low-versus High-Fidelity Prototyping Debate.'' In this article, we discuss arguments for and against low-and high-fidelity prototypes , guidelines for the use of rapid user-interface proto-typing, and the implications for user-interface designers.",
"title": ""
},
{
"docid": "d44bc13e5dd794a70211aac7ba44103b",
"text": "Endowing artificial agents with the ability to empathize is believed to enhance their social behavior and to make them more likable, trustworthy, and caring. Neuropsychological findings substantiate that empathy occurs to different degrees depending on several factors including, among others, a person’s mood, personality, and social relationships with others. Although there is increasing interest in endowing artificial agents with affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such factors in influencing their empathic behavior. In this paper, we present a computational model of empathy which allows a virtual human to exhibit different degrees of empathy. The presented model is based on psychological models of empathy and is applied and evaluated in the context of a conversational agent scenario.",
"title": ""
},
{
"docid": "dc330168eb4ca331c8fbfa40b6abdd66",
"text": "For multimedia communications, the low computational complexity of coder is required to integrate services of several media sources due to the limited computing capability of the personal information machine. The Multi-pulse Maximum Likelihood Quantization (MP-MLQ) algorithm with high computational complexity and high quality has been used in the G.723.1 standard codec. To reduce the computational complexity of the MP-MLQ method, this paper presents an efficient pre-selection scheme to simplify the excitation codebook search procedure which is computationally the most demand-ing. We propose a fast search algorithm which uses an energy function to predict the candidate pulses, and the codebook is redesigned to become the multi-track position structure. Simulation results show that the average of the perceptual evaluation of speech quality (PESQ) is degraded slightly, by only 0.056, and our proposed method can reduce computational complexity by about 52.8% relative to the original G.723.1 MP-MLQ computation load with perceptually negligible degradation. Our objective evaluations verify that the proposed method can provide speech quality comparable to that of the original MP-MLQ approach.",
"title": ""
},
{
"docid": "ccddd7df2b5246c44d349bfb0aae499a",
"text": "We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. The reward of the complex action is some function of the basic arms’ rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.",
"title": ""
},
{
"docid": "2a3273a7308273887b49f2d6cc99fe68",
"text": "The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not \";mined\"; to discover hidden information for effective decision making. Discovery of hidden patterns and relationships often goes unexploited. Advanced data mining techniques can help remedy this situation. This research has developed a prototype Intelligent Heart Disease Prediction System (IHDPS) using data mining techniques, namely, Decision Trees, Naive Bayes and Neural Network. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. IHDPS can answer complex \";what if\"; queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established. IHDPS is Web-based, user-friendly, scalable, reliable and expandable. It is implemented on the .NET platform.",
"title": ""
},
{
"docid": "3810acca479f6fa5d4f314d36a27b42c",
"text": "The paper describes a stabilization control of two wheels driven wheelchair based on pitch angle disturbance observer (PADO). PADO makes it possible to stabilize the wheelchair motion and remove casters. This brings a sophisticated mobility of wheelchair because the casters are obstacle to realize step passage motion and so on. The proposed approach based on PADO is robust against disturbance of pitch angle direction and the more functional wheelchairs is expected in the developed system. The validity of the proposed method is confirmed by simulation and experiment.",
"title": ""
},
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
},
{
"docid": "ec9c0ba115e68545e263a82d6282d43e",
"text": "A 1.8 GHz LC VCO in 1.8-V supply is presented. The VCO achieves low power consumption by optimum selection of inductance in the L-C tank. To increase the tuning range, a three-bit switching capacitor array is used for digital switched tuning. Designed in 0.18μm RF CMOS technology, the proposed VCO achieves a phase noise of -126.2dBc/Hz at 1MHz offset and consumes 1.38mA core current at 1.8-V voltage supply.",
"title": ""
},
{
"docid": "2172e78731ee63be5c15549e38c4babb",
"text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "d8f54e45818fd88fc8e5689de55428a3",
"text": "When brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: The changes become extremely difficult to notice, even when they are large, presented repeatedly, and the observer expects them to occur (Rensink, O’Regan, & Clark, 1997). To determine the mechanisms behind this induced “change blindness”, four experiments examine its dependence on initial preview and on the nature of the interruptions used. Results support the proposal that representations at the early stages of visual processing are inherently volatile, and that focused attention is needed to stabilize them sufficiently to support the perception of change.",
"title": ""
},
{
"docid": "c30ea570f744f576014aeacf545b027c",
"text": "We aimed to examine the effect of different doses of lutein supplementation on visual function in subjects with long-term computer display light exposure. Thirty-seven healthy subjects with long-term computer display light exposure ranging in age from 22 to 30 years were randomly assigned to one of three groups: Group L6 (6 mg lutein/d, n 12); Group L12 (12 mg lutein/d, n 13); and Group Placebo (maltodextrin placebo, n 12). Levels of serum lutein and visual performance indices such as visual acuity, contrast sensitivity and glare sensitivity were measured at weeks 0 and 12. After 12-week lutein supplementation, serum lutein concentrations of Groups L6 and L12 increased from 0.356 (SD 0.117) to 0.607 (SD 0.176) micromol/l, and from 0.328 (SD 0.120) to 0.733 (SD 0.354) micromol/l, respectively. No statistical changes from baseline were observed in uncorrected visual acuity and best-spectacle corrected visual acuity, whereas there was a trend toward increase in visual acuity in Group L12. Contrast sensitivity in Groups L6 and L12 increased with supplementation, and statistical significance was reached at most visual angles of Group L12. No significant change was observed in glare sensitivity over time. Visual function in healthy subjects who received the lutein supplement improved, especially in contrast sensitivity, suggesting that a higher intake of lutein may have beneficial effects on the visual performance.",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
},
{
"docid": "e9dc75f34b398b4e0d028f4dbbb707d1",
"text": "INTRODUCTION\nUniversity students are potentially important targets for the promotion of healthy lifestyles as this may reduce the risks of lifestyle-related disorders later in life. This cross-sectional study examined differences in eating behaviours, dietary intake, weight status, and body composition between male and female university students.\n\n\nMETHODOLOGY\nA total of 584 students (59.4% females and 40.6% males) aged 20.6 +/- 1.4 years from four Malaysian universities in the Klang Valley participated in this study. Participants completed the Eating Behaviours Questionnaire and two-day 24-hour dietary recall. Body weight, height, waist circumference and percentage of body fat were measured.\n\n\nRESULTS\nAbout 14.3% of males and 22.4% of females were underweight, while 14.0% of males and 12.3% of females were overweight and obese. A majority of the participants (73.8% males and 74.6% females) skipped at least one meal daily in the past seven days. Breakfast was the most frequently skipped meal. Both males and females frequently snacked during morning tea time. Fruits and biscuits were the most frequently consumed snack items. More than half of the participants did not meet the Malaysian Recommended Nutrient Intake (RNI) for energy, vitamin C, thiamine, riboflavin, niacin, iron (females only), and calcium. Significantly more males than females achieved the RNI levels for energy, protein and iron intakes.\n\n\nCONCLUSION\nThis study highlights the presence of unhealthy eating behaviours, inadequate nutrient intake, and a high prevalence of underweight among university students. Energy and nutrient intakes differed between the sexes. Therefore, promoting healthy eating among young adults is crucial to achieve a healthy nutritional status.",
"title": ""
},
{
"docid": "1dc615b299a8a63caa36cd8e36459323",
"text": "Domain adaptation manages to build an effective target classifier or regression model for unlabeled target data by utilizing the well-labeled source data but lying different distributions. Intuitively, to address domain shift problem, it is crucial to learn domain invariant features across domains, and most existing approaches have concentrated on it. However, they often do not directly constrain the learned features to be class discriminative for both source and target data, which is of vital importance for the final classification. Therefore, in this paper, we put forward a novel feature learning method for domain adaptation to construct both domain invariant and class discriminative representations, referred to as DICD. Specifically, DICD is to learn a latent feature space with important data properties preserved, which reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter as much as possible. Experiments in this paper have demonstrated that the class discriminative properties will dramatically alleviate the cross-domain distribution inconsistency, which further boosts the classification performance. Moreover, we show that exploring both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, and the optimal solution can be derived effectively by solving a generalized eigen-decomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD can outperform the competitors significantly.",
"title": ""
},
{
"docid": "ac46286c7d635ccdcd41358666026c12",
"text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.",
"title": ""
}
] |
scidocsrr
|
ab115421d84a4bcab680d9dfeb9d9ef6
|
BAG OF REGION EMBEDDINGS VIA LOCAL CONTEXT UNITS FOR TEXT CLASSIFICATION
|
[
{
"docid": "ac46e6176377612544bb74c064feed67",
"text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.",
"title": ""
},
{
"docid": "fe1bc993047a95102f4331f57b1f9197",
"text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.",
"title": ""
},
{
"docid": "c612ee4ad1b4daa030e86a59543ca53b",
"text": "The dominant approach for many NLP tasks are recurrent neura l networks, in particular LSTMs, and convolutional neural networks. However , these architectures are rather shallow in comparison to the deep convolutional n etworks which are very successful in computer vision. We present a new archite ctur for text processing which operates directly on the character level and uses o nly small convolutions and pooling operations. We are able to show that the performa nce of this model increases with the depth: using up to 29 convolutional layer s, we report significant improvements over the state-of-the-art on several public t ext classification tasks. To the best of our knowledge, this is the first time that very de ep convolutional nets have been applied to NLP.",
"title": ""
}
] |
[
{
"docid": "244c79d374bdbe44406fc514610e4ee7",
"text": "This article surveys some theoretical aspects of cellular automata CA research. In particular, we discuss classical and new results on reversibility, conservation laws, limit sets, decidability questions, universality and topological dynamics of CA. The selection of topics is by no means comprehensive and reflects the research interests of the author. The main goal is to provide a tutorial of CA theory to researchers in other branches of natural computing, to give a compact collection of known results with references to their proofs, and to suggest some open problems. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7918167cbceddcc24b4d22f094b167dd",
"text": "This paper is presented the study of the social influence by using social features in fitness mobile applications and habit that persuades the working-aged people, in the context of continuous fitness mobile application usage to promote the physical activity. Our conceptual model consisted of Habit and Social Influence. The social features based on the Persuasive Technology (1) Normative Influence, (2) Social Comparison, (3) Competition, (4) Co-operation, and (5) Social Recognition were embedded in the Social Influence construct of UTAUT2 model. The questionnaires were an instrument for this study. The target group was 443 working-aged people who live in Thailand's central region. The results reveal that the factors significantly affecting Behavioral Intention toward Use Behavior are Normative Influence, Social Comparison, Competition, and Co-operation. Only the Social Recognition is insignificantly affecting Behavioral Intention to use fitness mobile applications. The Behavioral Intention and Habit also significantly support the Use Behavior. The social features in fitness mobile application should be developed to promote the physical activity.",
"title": ""
},
{
"docid": "c6afc173351fe404f7c5b68d2a0bc0a8",
"text": "BACKGROUND\nCombined traumatic brain injury (TBI) and hemorrhagic shock (HS) is highly lethal. In a nonsurvival model of TBI + HS, addition of high-dose valproic acid (VPA) (300 mg/kg) to hetastarch reduced brain lesion size and associated swelling 6 hours after injury; whether this would have translated into better neurologic outcomes remains unknown. It is also unclear whether lower doses of VPA would be neuroprotective. We hypothesized that addition of low-dose VPA to normal saline (NS) resuscitation would result in improved long-term neurologic recovery and decreased brain lesion size.\n\n\nMETHODS\nTBI was created in anesthetized swine (40-43 kg) by controlled cortical impact, and volume-controlled hemorrhage (40% volume) was induced concurrently. After 2 hours of shock, animals were randomized (n = 5 per group) to NS (3× shed blood) or NS + VPA (150 mg/kg). Six hours after resuscitation, packed red blood cells were transfused, and animals were recovered. Peripheral blood mononuclear cells were analyzed for acetylated histone-H3 at lysine-9. A Neurological Severity Score (NSS) was assessed daily for 30 days. Brain magnetic resonance imaging was performed on Days 3 and 10. Cognitive performance was assessed by training animals to retrieve food from color-coded boxes.\n\n\nRESULTS\nThere was a significant increase in histone acetylation in the NS + VPA-treated animals compared with NS treatment. The NS + VPA group demonstrated significantly decreased neurologic impairment and faster speed of recovery as well as smaller brain lesion size compared with the NS group. Although the final cognitive function scores were similar between the groups, the VPA-treated animals reached the goal significantly faster than the NS controls.\n\n\nCONCLUSION\nIn this long-term survival model of TBI + HS, addition of low-dose VPA to saline resuscitation resulted in attenuated neurologic impairment, faster neurologic recovery, smaller brain lesion size, and a quicker normalization of cognitive functions.",
"title": ""
},
{
"docid": "28075920fae3e973911b299db86c792e",
"text": "DNA methylation is a well-studied genetic modification crucial to regulate the functioning of the genome. Its alterations play an important role in tumorigenesis and tumor-suppression. Thus, studying DNA methylation data may help biomarker discovery in cancer. Since public data on DNA methylation become abundant – and considering the high number of methylated sites (features) present in the genome – it is important to have a method for efficiently processing such large datasets. Relying on big data technologies, we propose BIGBIOCL an algorithm that can apply supervised classification methods to datasets with hundreds of thousands of features. It is designed for the extraction of alternative and equivalent classification models through iterative deletion of selected features. We run experiments on DNA methylation datasets extracted from The Cancer Genome Atlas, focusing on three tumor types: breast, kidney, and thyroid carcinomas. We perform classifications extracting several methylated sites and their associated genes with accurate performance (accuracy>97%). Results suggest that BIGBIOCL can perform hundreds of classification iterations on hundreds of thousands of features in few hours. Moreover, we compare the performance of our method with other state-of-the-art classifiers and with a wide-spread DNA methylation analysis method based on network analysis. Finally, we are able to efficiently compute multiple alternative classification models and extract from DNA-methylation large datasets a set of candidate genes to be further investigated to determine their active role in cancer. BIGBIOCL, results of experiments, and a guide to carry on new experiments are freely available on GitHub at https://github.com/fcproj/BIGBIOCL.",
"title": ""
},
{
"docid": "2568f7528049b4ffc3d9a8b4f340262b",
"text": "We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, because they are occuring in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparable in classification and generalization.",
"title": ""
},
{
"docid": "75f916790044fab6e267c5c5ec5846b7",
"text": "Detecting circles from a digital image is very important in shape recognition. In this paper, an efficient randomized algorithm (RCD) for detecting circles is presented, which is not based on the Hough transform (HT). Instead of using an accumulator for saving the information of the related parameters in the HT-based methods, the proposed RCD does not need an accumulator. The main concept used in the proposed RCD is that we first randomly select four edge pixels in the image and define a distance criterion to determine whether there is a possible circle in the image; after finding a possible circle, we apply an evidence-collecting process to further determine whether the possible circle is a true circle or not. Some synthetic images with different levels of noises and some realistic images containing circular objects with some occluded circles and missing edges have been taken to test the performance. Experimental results demonstrate that the proposed RCD is faster than other HT-based methods for the noise level between the light level and the modest level. For a heavy noise level, the randomized HT could be faster than the proposed RCD, but at the expense of massive memory requirements.c © 2001 Academic Press",
"title": ""
},
{
"docid": "a50ea2739751249e2832cae2df466d0b",
"text": "The Arabic Online Commentary (AOC) (Zaidan and Callison-Burch, 2011) is a large-scale repository of Arabic dialects with manual labels for 4 varieties of the language. Existing dialect identification models exploiting the dataset pre-date the recent boost deep learning brought to NLP and hence the data are not benchmarked for use with deep learning, nor is it clear how much neural networks can help tease the categories in the data apart. We treat these two limitations: We (1) benchmark the data, and (2) empirically test 6 different deep learning methods on the task, comparing peformance to several classical machine learning models under different conditions (i.e., both binary and multi-way classification). Our experimental results show that variants of (attention-based) bidirectional recurrent neural networks achieve best accuracy (acc) on the task, significantly outperforming all competitive baselines. On blind test data, our models reach 87.65% acc on the binary task (MSA vs. dialects), 87.4% acc on the 3-way dialect task (Egyptian vs. Gulf vs. Levantine), and 82.45% acc on the 4-way variants task (MSA vs. Egyptian vs. Gulf vs. Levantine). We release our benchmark for future work on the dataset.",
"title": ""
},
{
"docid": "53df69bf8750a7e97f12b1fcac14b407",
"text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.",
"title": ""
},
{
"docid": "44e4797655292e97651924115fd8d711",
"text": "Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though much of government regulations may now be in digital form (and often available online), due to their complexity and diversity, identifying the ones relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity of gathering citizens’ petitions and stakeholders’ views on government policy and proposals has increased greatly, but the volume and the complexity of analyzing unstructured data makes this difficult. On the other hand, text mining has come a long way from simple keyword search, and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens’ opinions expressed in electronic public forums and blogs etc. We also present here, an integrated text mining based architecture for e-governance decision support along with a discussion on the Indian scenario.",
"title": ""
},
{
"docid": "119ea9c1d6b2cf2063efaf4d5ed7e756",
"text": "In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.",
"title": ""
},
{
"docid": "e0e7bece9dd69ac775824b2ed40965d8",
"text": "In this paper, we consider an adaptive base-stock policy for a single-item inventory system, where the demand process is non-stationary. In particular, the demand process is an integrated moving average process of order (0, 1, 1), for which an exponential-weighted moving average provides the optimal forecast. For the assumed control policy we characterize the inventory random variable and use this to find the safety stock requirements for the system. From this characterization, we see that the required inventory, both in absolute terms and as it depends on the replenishment lead-time, behaves much differently for this case of non-stationary demand compared with stationary demand. We then show how the single-item model extends to a multistage, or supply-chain context; in particular we see that the demand process for the upstream stage is not only non-stationary but also more variable than that for the downstream stage. We also show that for this model there is no value from letting the upstream stages see the exogenous demand. The paper concludes with some observations about the practical implications of this work.",
"title": ""
},
{
"docid": "6a6063c05941c026b083bfcc573520f8",
"text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781 , 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.",
"title": ""
},
{
"docid": "8c3aaa5011c7974a18b17d2a604127b7",
"text": "The threat of Distributed Denial of Service (DDoS) has become a major issue in network security and is difficult to detect because all DDoS traffics have normal packet characteristics. Various detection and defense algorithms have been studied. One of them is an entropy-based intrusion detection approach that is a powerful and simple way to identify abnormal conditions from network channels. However, the burden of computing information entropy values from heavy flow still exists. To reduce the computing time, we have developed a DDoS detection scheme using a compression entropy method. It allows us to significantly reduce the computation time for calculating information entropy. However, our experiment suggests that the compression entropy approach tends to be too sensitive to verify real network attacks and produces many false negatives. In this paper, we propose a fast entropy scheme that can overcome the issue of false negatives and will not increase the computational time. Our simulation shows that the fast entropy computing method not only reduced computational time by more than 90% compared to conventional entropy, but also increased the detection accuracy compared to conventional and compression entropy approaches.",
"title": ""
},
{
"docid": "0116f3e12fbaf2705f36d658fdbe66bb",
"text": "This paper presents a metric to quantify visual scene movement perceived inside a virtual environment (VE) and illustrates how this method could be used in future studies to determine a cybersickness dose value to predict levels of cybersickness in VEs. Sensory conflict theories predict that cybersickness produced by a VE is a kind of visually induced motion sickness. A comprehensive review indicates that there is only one subjective measure to quantify visual stimuli presented inside a VE. A metric, referred to as spatial velocity (SV), is proposed. It combines objective measures of scene complexity and scene movement velocity. The theoretical basis for the proposed SV metric and the algorithms for its implementation are presented. Data from two previous experiments on cybersickness were reanalyzed using the metric. Results showed that increasing SV by either increasing the scene complexity or scene velocity significantly increased the rated level of cybersickness. A strong correlation between SV and the level of cybersickness was found. The use of the spatial velocity metric to predict levels of cybersickness is also discussed.",
"title": ""
},
{
"docid": "26eb8fc38928446194d0110aca3a8b9c",
"text": "The requirement for high quality pulps which are widely used in paper industries has increased the demand for pulp refining (beating) process. Pulp refining is a promising approach to improve the pulp quality by changing the fiber characteristics. The diversity of research on the effect of refining on fiber properties which is due to the different pulp sources, pulp consistency and refining equipment has interested us to provide a review on the studies over the last decade. In this article, the influence of pulp refining on structural properties i.e., fibrillations, fine formation, fiber length, fiber curl, crystallinity and distribution of surface chemical compositions is reviewed. The effect of pulp refining on electrokinetic properties of fiber e.g., surface and total charges of pulps is discussed. In addition, an overview of different refining theories, refiners as well as some tests for assessing the pulp refining is presented.",
"title": ""
},
{
"docid": "240c47d27533069f339d8eb090a637a9",
"text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "d3a0931c03c80f5aa639cdc0d8cc331b",
"text": "We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training.",
"title": ""
},
{
"docid": "574f1eb961c4469a16b4fde10d455ff4",
"text": "To study the fundamental effects of the spinning capsule on the overall performance of a dry powder inhaler (Aerolizer®). The capsule motion was visualized using high-speed photography. Computational fluid dynamics (CFD) analysis was performed to determine the flowfield generated in the device with and without the presence of different sized capsules at 60 l min−1. The inhaler dispersion performance was measured with mannitol powder using a multistage liquid impinger at the same flowrate. The capsule size (3, 4, and 5) was found to make no significant difference to the device flowfield, the particle-device impaction frequency, or the dispersion performance of the inhaler. Reducing the capsule size reduced only the capsule retention by 4%. In contrast, without the presence of the spinning capsule, turbulence levels were increased by 65%, FPFEm (wt% particles ≤6.8 μm in the aerosol referenced against the amount of powder emitted from the device) increased from 59% to 65%, while particle-mouthpiece impaction decreased by 2.5 times. When the powder was dispersed from within compared to from outside the spinning capsule containing four 0.6 mm holes at each end, the FPFEm was increased significantly from 59% to 76%, and the throat retention was dropped from 14% to 6%. The presence, but not the size, of a capsule has significant effects on the inhaler performance. The results suggested that impaction between the particles and the spinning capsule does not play a major role in powder dispersion. However, the capsule can provide additional strong mechanisms of deagglomeration dependent on the size of the capsule hole.",
"title": ""
}
] |
scidocsrr
|
563c09f24750dd82b154ad316ac4d7a4
|
Product Aspect Ranking and Its Applications
|
[
{
"docid": "e677ba3fa8d54fad324add0bda767197",
"text": "In this paper, we present a novel approach for mining opinions from product reviews, where it converts opinion mining task to identify product features, expressions of opinions and relations between them. By taking advantage of the observation that a lot of product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to phrase level. This concept is then implemented for extracting relations between product features and expressions of opinions. Experimental evaluations show that the mining task can benefit from phrase dependency parsing.",
"title": ""
}
] |
[
{
"docid": "ab2e5ec6e48c87b3e4814840ad29afe7",
"text": "This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.",
"title": ""
},
{
"docid": "f0fa5907c11c3adb43942cc6d2cfdd47",
"text": "Executive Summary/Abstract:. Timely implementation of a Master Data Management (MDM) Organisation is essential and requires a structured design. This presentation covers the way MDM processes and roles can link the business users to metadata and enable improved master data management , through disciplined organisation alignment and training. As organisations implement integrated ERP systems, the attention on master data is often limited to Data Migration. Therefore, a plan for recovery is frequently required.",
"title": ""
},
{
"docid": "bddea9fd4d14f591e6fb6acc3cc057f1",
"text": "We present an analysis of musical influence using intact lyrics of over 550,000 songs, extending existing research on lyrics through a novel approach using directed networks. We form networks of lyrical influence over time at the level of three-word phrases, weighted by tf-idf. An edge reduction analysis of strongly connected components suggests highly central artist, songwriter, and genre network topologies. Visualizations of the genre network based on multidimensional scaling confirm network centrality and provide insight into the most influential genres at the heart of the network. Next, we present metrics for influence and self-referential behavior, examining their interactions with network centrality and with the genre diversity of songwriters. Here, we uncover a negative correlation between songwriters’ genre diversity and the robustness of their connections. By examining trends among the data for top genres, songwriters, and artists, we address questions related to clustering, influence, and isolation of nodes in the networks. We conclude by discussing promising future applications of lyrical influence networks in music information retrieval research. The networks constructed in this study are made publicly available for research purposes.",
"title": ""
},
{
"docid": "8b4e09bb13d3d01d3954f32cbb4c9e27",
"text": "Higher-level semantics such as visual attributes are crucial for fundamental multimedia applications. We present a novel attribute discovery approach that can automatically identify, model and name attributes from an arbitrary set of image and text pairs that can be easily gathered on the Web. Different from conventional attribute discovery methods, our approach does not rely on any pre-defined vocabularies and human labeling. Therefore, we are able to build a large visual knowledge base without any human efforts. The discovery is based on a novel deep architecture, named Independent Component Multimodal Autoencoder (ICMAE), that can continually learn shared higher-level representations across the visual and textual modalities. With the help of the resultant representations encoding strong visual and semantic evidences, we propose to (a) identify attributes and their corresponding high-quality training images, (b) iteratively model them with maximum compactness and comprehensiveness, and (c) name the attribute models with human understandable words. To date, the proposed system has discovered 1,898 attributes over 1.3 million pairs of image and text. Extensive experiments on various real-world multimedia datasets demonstrate the quality and effectiveness of the discovered attributes, facilitating multimedia applications such as image annotation and retrieval as compared to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "96d8e375616a7ee137276d385c14a18a",
"text": "Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.",
"title": ""
},
{
"docid": "bf257fae514c28dc3b4c31ff656a00e9",
"text": "The objective of the present study is to evaluate the acute effects of low-level laser therapy (LLLT) on functional capacity, perceived exertion, and blood lactate in hospitalized patients with heart failure (HF). Patients diagnosed with systolic HF (left ventricular ejection fraction <45 %) were randomized and allocated prospectively into two groups: placebo LLLT group (n = 10)—subjects who were submitted to placebo laser and active LLLT group (n = 10)—subjects who were submitted to active laser. The 6-min walk test (6MWT) was performed, and blood lactate was determined at rest (before LLLT application and 6MWT), immediately after the exercise test (time 0) and recovery (3, 6, and 30 min). A multi-diode LLLT cluster probe (DMC, São Carlos, Brazil) was used. Both groups increased 6MWT distance after active or placebo LLLT application compared to baseline values (p = 0.03 and p = 0.01, respectively); however, no difference was observed during intergroup comparison. The active LLLT group showed a significant reduction in the perceived exertion Borg (PEB) scale compared to the placebo LLLT group (p = 0.006). In addition, the group that received active LLLT showed no statistically significant difference for the blood lactate level through the times analyzed. The placebo LLLT group demonstrated a significant increase in blood lactate between the rest and recovery phase (p < 0.05). Acute effects of LLLT irradiation on skeletal musculature were not able to improve the functional capacity of hospitalized patients with HF, although it may favorably modulate blood lactate metabolism and reduce perceived muscle fatigue.",
"title": ""
},
{
"docid": "b555bb25c809e47f0f9fc8cec483d794",
"text": "The assessment of oxygen saturation in arterial blood by pulse oximetry (SpO₂) is based on the different light absorption spectra for oxygenated and deoxygenated hemoglobin and the analysis of photoplethysmographic (PPG) signals acquired at two wavelengths. Commercial pulse oximeters use two wavelengths in the red and infrared regions which have different pathlengths and the relationship between the PPG-derived parameters and oxygen saturation in arterial blood is determined by means of an empirical calibration. This calibration results in an inherent error, and pulse oximetry thus has an error of about 4%, which is too high for some clinical problems. We present calibration-free pulse oximetry for measurement of SpO₂, based on PPG pulses of two nearby wavelengths in the infrared. By neglecting the difference between the path-lengths of the two nearby wavelengths, SpO₂ can be derived from the PPG parameters with no need for calibration. In the current study we used three laser diodes of wavelengths 780, 785 and 808 nm, with narrow spectral line-width. SaO₂ was calculated by using each pair of PPG signals selected from the three wavelengths. In measurements on healthy subjects, SpO₂ values, obtained by the 780-808 nm wavelength pair were found to be in the normal range. The measurement of SpO₂ by two nearby wavelengths in the infrared with narrow line-width enables the assessment of SpO₂ without calibration.",
"title": ""
},
{
"docid": "9bfe782c94805544051a3dcb522d7a2c",
"text": "In this paper, we propose an algorithm to predict the social popularity (i.e., the numbers of views, comments, and favorites) of content on social networking services using only text annotations. Instead of analyzing image/video content, we try to estimate social popularity by a combination of weight vectors obtained from a support vector regression (SVR) and tag frequency. Since our proposed algorithm uses text annotations instead of image/video features, its computational cost is small. As a result, we can estimate social popularity more efficiently than previously proposed methods. Furthermore, tags that significantly affect social popularity can be extracted using our algorithm. Our experiments involved using one million photos on the social networking website Flickr, and the results showed a high correlation between actual social popularity and the determination thereof using our algorithm. Moreover, the proposed algorithm can achieve high classification accuracy with regard to a classification between popular and unpopular content.",
"title": ""
},
{
"docid": "83c35d9d7df9fcf9d5f93b82466a6bbe",
"text": "In a cable-driven parallel robot, elastic cables are used to manipulate the end effector in the workspace. In this paper we present a dynamic analysis and system identification for the complete actuator unit of a cable robot including servo controller, winch, cable, cable force sensor and field bus communication. We establish a second-order system with dead time as an analagous model. Based on this investigation, we propose the design and stability analysis of a cable force controller. We present the implementation of feed-forward and integral controllers based on a stiffness model of the cables. As the platform position is not observable the challenge is to control the cable force while maintaining the positional accuracy. Experimental evaluation of the force controller shows, that the absolute positional accuracy is even improved.",
"title": ""
},
{
"docid": "cd9552d9891337f7e58b3e7e36dfab54",
"text": "Multi-variant program execution is an application of n-version programming, in which several slightly different instances of the same program are executed in lockstep on a multiprocessor. These variants are created in such a way that they behave identically under \"normal\" operation and diverge when \"out of specification\" events occur, which may be indicative of attacks. This paper assess the effectiveness of different code variation techniques to address different classes of vulnerabilities. In choosing a variant or combination of variants, security demands need to be balanced against runtime overhead. Our study indicates that a good combination of variations when running two variants is to choose one of instruction set randomization, system call number randomization, and register randomization, and use that together with library entry point randomization. Running more variants simultaneously makes it exponentially more difficult to take over the system.",
"title": ""
},
{
"docid": "129dd084e485da5885e2720a4bddd314",
"text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to its innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.",
"title": ""
},
{
"docid": "0687cc3d9df74b2ff1dd94d55b773493",
"text": "What should I wear? We present Magic Mirror, a virtual fashion consultant, which can parse, appreciate and recommend the wearing. Magic Mirror is designed with a large display and Kinect to simulate the real mirror and interact with users in augmented reality. Internally, Magic Mirror is a practical appreciation system for automatic aesthetics-oriented clothing analysis. Specifically, we focus on the clothing collocation rather than the single one, the style (aesthetic words) rather than the visual features. We bridge the gap between the visual features and aesthetic words of clothing collocation to enable the computer to learn appreciating the clothing collocation. Finally, both object and subject evaluations verify the effectiveness of the proposed algorithm and Magic Mirror system.",
"title": ""
},
{
"docid": "13173c37670511963b23a42a3cc7e36b",
"text": "In patients having a short nose with a short septal length and/or severe columellar retraction, a septal extension graft is a good solution, as it allows the dome to move caudally and pushes down the columellar base. Fixing the medial crura of the alar cartilages to a septal extension graft leads to an uncomfortably rigid nasal tip and columella, and results in unnatural facial animation. Further, because of the relatively small and weak septal cartilage in the East Asian population, undercorrection of a short nose is not uncommon. To overcome these shortcomings, we have used the septal extension graft combined with a derotation graft. Among 113 patients who underwent the combined procedure, 82 patients had a short nose deformity alone; the remaining 31 patients had a short nose with columellar retraction. Thirty-two patients complained of nasal tip stiffness caused by a septal extension graft from previous operations. In addition to the septal extension graft, a derotation graft was used for bridging the gap between the alar cartilages and the septal extension graft for tip lengthening. Satisfactory results were obtained in 102 (90%) patients. Eleven (10%) patients required revision surgery. This combination method is a good surgical option for patients who have a short nose with small septal cartilages and do not have sufficient cartilage for tip lengthening by using a septal extension graft alone. It can also overcome the postoperative nasal tip rigidity of a septal extension graft.",
"title": ""
},
{
"docid": "1b314c55b86355e1fd0ef5d5ce9a89ba",
"text": "3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, having few components, unobtrusiveness of the seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance. Our prototype outputs the final decomposed parts with customized connectors on the interfaces. We demonstrate the effectiveness of Chopper on a variety of non-trivial real-world objects.",
"title": ""
},
{
"docid": "5cc07ca331deb81681b3f18355c0e586",
"text": "BACKGROUND\nHyaluronic acid (HA) formulations are used for aesthetic applications. Different cross-linking technologies result in HA dermal fillers with specific characteristic visco-elastic properties.\n\n\nOBJECTIVE\nBio-integration of three CE-marked HA dermal fillers, a cohesive (monophasic) polydensified, a cohesive (monophasic) monodensified and a non-cohesive (biphasic) filler, was analysed with a follow-up of 114 days after injection. Our aim was to study the tolerability and inflammatory response of these fillers, their patterns of distribution in the dermis, and influence on tissue integrity.\n\n\nMETHODS\nThree HA formulations were injected intradermally into the iliac crest region in 15 subjects. Tissue samples were analysed after 8 and 114 days by histology and immunohistochemistry, and visualized using optical and transmission electron microscopy.\n\n\nRESULTS\nHistological results demonstrated that the tested HA fillers showed specific characteristic bio-integration patterns in the reticular dermis. Observations under the optical and electron microscopes revealed morphological conservation of cutaneous structures. Immunohistochemical results confirmed absence of inflammation, immune response and granuloma.\n\n\nCONCLUSION\nThe three tested dermal fillers show an excellent tolerability and preservation of the dermal cells and matrix components. Their tissue integration was dependent on their visco-elastic properties. The cohesive polydensified filler showed the most homogeneous integration with an optimal spreading within the reticular dermis, which is achieved by filling even the smallest spaces between collagen bundles and elastin fibrils, while preserving the structural integrity of the latter. Absence of adverse reactions confirms safety of the tested HA dermal fillers.",
"title": ""
},
{
"docid": "8c0cbfc060b3a6aa03fd8305baf06880",
"text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.",
"title": ""
},
{
"docid": "1eb2715d2dfec82262c7b3870db9b649",
"text": "Leadership is a crucial component to the success of academic health science centers (AHCs) within the shifting U.S. healthcare environment. Leadership talent acquisition and development within AHCs is immature and approaches to leadership and its evolution will be inevitable to refine operations to accomplish the critical missions of clinical service delivery, the medical education continuum, and innovations toward discovery. To reach higher organizational outcomes in AHCs requires a reflection on what leadership approaches are in place and how they can better support these missions. Transactional leadership approaches are traditionally used in AHCs and this commentary suggests that movement toward a transformational approach is a performance improvement opportunity for AHC leaders. This commentary describes the transactional and transformational approaches, how they complement each other, and how to access the transformational approach. Drawing on behavioral sciences, suggestions are made on how a transactional leader can change her cognitions to align with the four dimensions of the transformational leadership approach.",
"title": ""
},
{
"docid": "9818399b4c119b58723c59e76bbfc1bd",
"text": "Many vertex-centric graph algorithms can be expressed using asynchronous parallelism by relaxing certain read-after-write data dependences and allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, by converting synchronous algorithms into their asynchronous counterparts, algorithms can be made tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values causing an increase in the number of iterations required by the algorithms to converge. Although by using bounded staleness we can restrict the slowdown in the rate of convergence, this also restricts the ability to tolerate communication latency. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. This is achieved via a coordinated use of best effort refresh policy and bounded staleness. We demonstrate that for a range of asynchronous graph algorithms and PDE solvers, on an average, our approach outperforms algorithms based upon: prior relaxed memory models that allow stale values by at least 2.27x; and Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach frequently outperforms GraphLab, a popular distributed graph processing framework.",
"title": ""
},
{
"docid": "388a8494d6aa7b51d9567bd2e401f3ce",
"text": "An appropriate image representation induces some good image treatment algorithms. Hypergraph theory is a theory of finite combinatorial sets, modeling a lot of problems of operational research and combinatorial optimization. Hypergraphs are now used in many domains such as chemistry, engineering and image processing. We present an overview of a hypergraph-based picture representation giving much application in picture manipulation, analysis and restoration: the Image Adaptive Neighborhood Hypergraph (IANH). With the IANH it is possible to build powerful noise detection an elimination algorithm, but also to make some edges detection or some image segmentation. IANH has various applications and this paper presents a survey of them.",
"title": ""
},
{
"docid": "934b1a0959389d32382978cdd411ba87",
"text": "Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like \"bleed\" and \"punch\" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.",
"title": ""
}
] |
scidocsrr
|
7a4fcb24bbaec04b6699f8dd33a65836
|
Mental Health Problems in University Students : A Prevalence Study
|
[
{
"docid": "1497e47ada570797e879bbc4aba432a1",
"text": "The mental health of university students is an area of increasing concern worldwide. The objective of this study is to examine the prevalence of depression, anxiety and stress among a group of Turkish university students. Depression Anxiety and Stress Scale (DASS-42) completed anonymously in the students’ respective classrooms by 1,617 students. Depression, anxiety and stress levels of moderate severity or above were found in 27.1, 47.1 and 27% of our respondents, respectively. Anxiety and stress scores were higher among female students. First- and second-year students had higher depression, anxiety and stress scores than the others. Students who were satisfied with their education had lower depression, anxiety and stress scores than those who were not satisfied. The high prevalence of depression, anxiety and stress symptoms among university students is alarming. This shows the need for primary and secondary prevention measures, with the development of adequate and appropriate support services for this group.",
"title": ""
}
] |
[
{
"docid": "0ef6e54d7190dde80ee7a30c5ecae0c3",
"text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.",
"title": ""
},
{
"docid": "0fba05a38cb601a1b08e6105e6b949c1",
"text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.",
"title": ""
},
{
"docid": "f1df8b69dfec944b474b9b26de135f55",
"text": "Background:There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom.Methods:National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately.Results:Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. By 2040, almost a quarter of people aged at least 65 will be cancer survivors.Conclusion:Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.",
"title": ""
},
{
"docid": "28d19824a598ae20039f2ed5d8885234",
"text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.",
"title": ""
},
{
"docid": "a574355d46c6e26efe67aefe2869a0cb",
"text": "The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes, and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this article, we provide a structured and comprehensive overview of data mining techniques for modeling EHRs. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research.",
"title": ""
},
{
"docid": "02e63f2279dbd980c6689bec5ea18411",
"text": "Reflection photoplethysmography (PPG) using 530 nm (green) wavelength light has the potential to be a superior method for monitoring heart rate (HR) during normal daily life due to its relative freedom from artifacts. However, little is known about the accuracy of pulse rate (PR) measured by 530 nm light PPG during motion. Therefore, we compared the HR measured by electrocadiography (ECG) as a reference with PR measured by 530, 645 (red), and 470 nm (blue) wavelength light PPG during baseline and while performing hand waving in 12 participants. In addition, we examined the change of signal-to-noise ratio (SNR) by motion for each of the three wavelengths used for the PPG. The results showed that the limit of agreement in Bland-Altman plots between the HR measured by ECG and PR measured by 530 nm light PPG (±0.61 bpm) was smaller than that achieved when using 645 and 470 nm light PPG (±3.20 bpm and ±2.23 bpm, respectively). The ΔSNR (the difference between baseline and task values) of 530 and 470nm light PPG was significantly smaller than ΔSNR for red light PPG. In conclusion, 530 nm light PPG could be a more suitable method than 645 and 470nm light PPG for monitoring HR in normal daily life.",
"title": ""
},
{
"docid": "5ccf0b3f871f8362fccd4dbd35a05555",
"text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.",
"title": ""
},
{
"docid": "736ee2bed70510d77b1f9bb13b3bee68",
"text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.",
"title": ""
},
{
"docid": "c60c83c93577377bad43ed1972079603",
"text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-ofthe-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module",
"title": ""
},
{
"docid": "57e9467bfbc4e891acd00dcdac498e0e",
"text": "Cross-cultural perspectives have brought renewed interest in the social aspects of the self and the extent to which individuals define themselves in terms of their relationships to others and to social groups. This article provides a conceptual review of research and theory of the social self, arguing that the personal, relational, and collective levels of self-definition represent distinct forms of selfrepresentation with different origins, sources of self-worth, and social motivations. A set of 3 experiments illustrates haw priming of the interpersonal or collective \"we\" can alter spontaneous judgments of similarity and self-descriptions.",
"title": ""
},
{
"docid": "e50c921d664f970daa8050bad282e066",
"text": "In the complex decision-environments that characterize e-business settings, it is important to permit decision-makers to proactively manage data quality. In this paper we propose a decision-support framework that permits decision-makers to gauge quality both in an objective (context-independent) and in a context-dependent manner. The framework is based on the information product approach and uses the Information Product Map (IPMAP). We illustrate its application in evaluating data quality using completeness—a data quality dimension that is acknowledged as important. A decision-support tool (IPView) for managing data quality that incorporates the proposed framework is also described. D 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "01c267fbce494fcfabeabd38f18c19a3",
"text": "New insights in the programming physics of silicided polysilicon fuses integrated in 90 nm CMOS have led to a programming time of 100 ns, while achieving a resistance increase of 107. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained",
"title": ""
},
{
"docid": "e875d4a88e73984e37f5ce9ffe543791",
"text": "A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented. The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.",
"title": ""
},
{
"docid": "829eafadf393a66308db452eeef617d5",
"text": "The goal of creating non-biological intelligence has been with us for a long time, predating the nominal 1956 establishment of the field of artificial intelligence by centuries or, under some definitions, even by millennia. For much of this history it was reasonable to recast the goal of “creating” intelligence as that of “designing” intelligence. For example, it would have been reasonable in the 17th century, as Leibnitz was writing about reasoning as a form of calculation, to think that the process of creating artificial intelligence would have to be something like the process of creating a waterwheel or a pocket watch: first understand the principles, then use human intelligence to devise a design based on the principles, and finally build a system in accordance with the design. At the dawn of the 19th century William Paley made such assumptions explicit, arguing that intelligent designers are necessary for the production of complex adaptive systems. And then, of course, Paley was soundly refuted by Charles Darwin in 1859. Darwin showed how complex and adaptive systems can arise naturally from a process of selection acting on random variation. That is, he showed that complex and adaptive design could be created without an intelligent designer. On the basis of evidence from paleontology, molecular biology, and evolutionary theory we now understand that nearly all of the interesting features of biological agents, including intelligence, have arisen through roughly Darwinian evolutionary processes (with a few important refinements, some of which are mentioned below). But there are still some holdouts for the pre-Darwinian view. A recent survey in the United States found that 42% of respondents expressed a belief that “Life on Earth has existed in its present form since the beginning of time” [7], and these views are supported by powerful political forces including a stridently anti-science President. These shocking political realities are, however, beyond the scope of the present essay. This essay addresses a more subtle form of pre-Darwinian thinking that occurs even among the scientifically literate, and indeed even among highly trained scientists conducting advanced AI research. Those who engage in this form of pre-Darwinian thinking accept the evidence for the evolution of terrestrial life but ignore or even explicitly deny the power of evolutionary processes to produce adaptive complexity in other contexts. Within the artificial intelligence research community those who engage in this form of thinking ignore or deny the power of evolutionary processes to create machine intelligence. Before exploring this complaint further it is worth asking whether an evolved artificial intelligence would even serve the broader goals of AI as a field. Every AI text opens by defining the field, and some of the proffered definitions are explicitly oriented toward design—presumably design by intelligent humans. For example Dean et al. define AI as “the design and study of computer programs that behave intelligently” [2, p. 1]. Would the field, so defined, be served by the demonstration of an evolved artificial intelligence? It would insofar as we could study the evolved system and particularly if we could use our resulting understanding as the basis for future designs. So even the most design-oriented AI researchers should be interested in evolved artificial intelligence if it can in fact be created.",
"title": ""
},
{
"docid": "8d176debd26505d424dcbf8f5cfdb4d1",
"text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.",
"title": ""
},
{
"docid": "97b578720957155514ca9fbe68c03eed",
"text": "Autonomous navigation in unstructured environments like forest or country roads with dynamic objects remains a challenging task, particularly with respect to the perception of the environment using multiple different sensors.",
"title": ""
},
{
"docid": "52c1300a818340065ca16d02343f13fe",
"text": "Article history: Received 9 September 2014 Received in revised form 25 January 2015 Accepted 9 February 2015 Available online xxxx",
"title": ""
},
{
"docid": "419499ced8902a00909c32db352ea7f5",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
},
{
"docid": "186d9fc899fdd92c7e74615a2a054a03",
"text": "In this paper, we propose an illumination-robust face recognition system via local directional pattern images. Usually, local pattern descriptors including local binary pattern and local directional pattern have been used in the field of the face recognition and facial expression recognition, since local pattern descriptors have important properties to be robust against the illumination changes and computational simplicity. Thus, this paper represents the face recognition approach that employs the local directional pattern descriptor and twodimensional principal analysis algorithms to achieve enhanced recognition accuracy. In particular, we propose a novel methodology that utilizes the transformed image obtained from local directional pattern descriptor as the direct input image of two-dimensional principal analysis algorithms, unlike that most of previous works employed the local pattern descriptors to acquire the histogram features. The performance evaluation of proposed system was performed using well-known approaches such as principal component analysis and Gabor-wavelets based on local binary pattern, and publicly available databases including the Yale B database and the CMU-PIE database were employed. Through experimental results, the proposed system showed the best recognition accuracy compared to different approaches, and we confirmed the effectiveness of the proposed method under varying lighting conditions.",
"title": ""
},
{
"docid": "6fc870c703611e07519ce5fe956c15d1",
"text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"title": ""
}
] |
scidocsrr
|
626257ecf74e8d0fa7476b5f12b7c2ff
|
ERNN: A Biologically Inspired Feedforward Neural Network to Discriminate Emotion From EEG Signal
|
[
{
"docid": "4b284736c51435f9ab6f52f174dc7def",
"text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.",
"title": ""
},
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
},
{
"docid": "34257e8924d8f9deec3171589b0b86f2",
"text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.",
"title": ""
}
] |
[
{
"docid": "c23a86bc6d8011dab71ac5e1e2051c3b",
"text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.",
"title": ""
},
{
"docid": "3f9139e961f00e6f2cec14dbb0e94683",
"text": "WebQuestions and SimpleQuestions are two benchmark data-sets commonly used in recent knowledge-based question answering (KBQA) work. Most questions in them are ‘simple’ questions which can be answered based on a single relation in the knowledge base. Such data-sets lack the capability of evaluating KBQA systems on complicated questions. Motivated by this issue, we release a new data-set, namely ComplexQuestions1, aiming to measure the quality of KBQA systems on ‘multi-constraint’ questions which require multiple knowledge base relations to get the answer. Beside, we propose a novel systematic KBQA approach to solve multi-constraint questions. Compared to state-of-the-art methods, our approach not only obtains comparable results on the two existing benchmark data-sets, but also achieves significant improvements on the ComplexQuestions.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "2319dccdb7635a23ab702f10788ea09f",
"text": "The molecular basis of obligate anaerobiosis is not well established. Bacteroides thetaiotaomicron is an opportunistic pathogen that cannot grow in fully aerobic habitats. Because microbial niches reflect features of energy-producing strategies, we suspected that aeration would interfere with its central metabolism. In anaerobic medium, this bacterium fermented carbohydrates to a mixture of succinate, propionate and acetate. When cultures were exposed to air, the formation of succinate and propionate ceased abruptly. In vitro analysis demonstrated that the fumarase of the succinate-propionate pathway contains an iron-sulphur cluster that is sensitive to superoxide. In vivo, fumarase activity fell to < 5% when cells were aerated; virtually all activity was recovered after extracts were chemically treated to rebuild iron-sulphur clusters. Aeration minimally affected the remainder of this pathway. However, aeration reduced pyruvate:ferredoxin oxidoreductase (PFOR), the first enzyme in the acetate fermentation branch, to 3% of its anaerobic activity. This cluster-containing enzyme was damaged in vitro by molecular oxygen but not by superoxide. Thus, aerobic growth is precluded by the vulnerability of these iron-sulphur cluster enzymes to oxidation. Importantly, both enzymes were maintained in a stable, inactive form for long periods in aerobic cells; they were then rapidly repaired when the bacterium was returned to anaerobic medium. This result explains how this pathogen can easily recover from occasional exposure to oxygen.",
"title": ""
},
{
"docid": "c138108f567d7f2dd130b6209b11caef",
"text": "Autotuning using relay feedback is widely used to identify low order integrating plus dead time (IPDT) systems as the method is simple and is operated in closed-loop without interrupting the production process. Oscillatory responses from the process due to ideal relay input are collected to calculate ultimate properties of the system that in turn are used to model the responses as functions of system model parameters. These theoretical models of relay response are validated. After adjusting the phase shift, input and output responses are used to find land mark points that are used to formulate algorithms for parameter estimation of the process model. The method is even applicable to distorted relay responses due to load disturbance or measurement noise. Closed-loop simulations are carried out using model based control strategy and performances are calculated.",
"title": ""
},
{
"docid": "5c3137529a63c0c1ba45c22b292f3008",
"text": "Information extraction by text segmentation (IETS) applies to cases in which data values of interest are organized in implicit semi-structured records available in textual sources (e.g. postal addresses, bibliographic information, ads). It is an important practical problem that has been frequently addressed in the recent literature. In this paper we introduce ONDUX (On Demand Unsupervised Information Extraction), a new unsupervised probabilistic approach for IETS. As other unsupervised IETS approaches, ONDUX relies on information available on pre-existing data to associate segments in the input string with attributes of a given domain. Unlike other approaches, we rely on very effective matching strategies instead of explicit learning strategies. The effectiveness of this matching strategy is also exploited to disambiguate the extraction of certain attributes through a reinforcement step that explores sequencing and positioning of attribute values directly learned on-demand from test data, with no previous human-driven training, a feature unique to ONDUX. This assigns to ONDUX a high degree of flexibility and results in superior effectiveness, as demonstrated by the experimental evaluation we report with textual sources from different domains, in which ONDUX is compared with a state-of-art IETS approach.",
"title": ""
},
{
"docid": "2d1290b3cee0bcbc3a1448046bea10aa",
"text": "Photometric stereo using unorganized Internet images is very challenging, because the input images are captured under unknown general illuminations, with uncontrolled cameras. We propose to solve this difficult problem by a simple yet effective approach that makes use of a coarse shape prior. The shape prior is obtained from multi-view stereo and will be useful in twofold: resolving the shape-light ambiguity in uncalibrated photometric stereo and guiding the estimated normals to produce the high quality 3D surface. By assuming the surface albedo is not highly contrasted, we also propose a novel linear approximation of the nonlinear camera responses with our normal estimation algorithm. We evaluate our method using synthetic data and demonstrate the surface improvement on real data over multi-view stereo results.",
"title": ""
},
{
"docid": "f1c80c3e266029012390c6ac47765cc6",
"text": "Whenever clients shop in the Internet, they provide identifying data of themselves to parties like the webshop, shipper and payment system. These identifying data merged with their shopping history might be misused for targeted advertisement up to possible manipulations of the clients. The data also contains credit card or bank account numbers, which may be used for unauthorized money transactions by the involved parties or by criminals hacking the parties’ computing infrastructure. In order to minimize these risks, we propose an approach for anonymous shopping by separation of data. We argue for the feasibility of our approach by discussing important operations like simple reclamation cases and criminal investigations. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "5aab6cd36899f3d5e3c93cf166563a3e",
"text": "Vein images generally appear darker with low contrast, which require contrast enhancement during preprocessing to design satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvinced. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high quality lab-made database is established. Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain quality index of lab-made vein images. Then, unsupervised $K$ -means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one. CE_DLBP could be utilized for discriminative feature extraction for LQ images. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with PolyU database illustrates its generalization ability and robustness.",
"title": ""
},
{
"docid": "c71a5f23d9d8b9093ca1b2ccdb3d396a",
"text": "1 M.Tech. Student 2 Assistant Professor 1,2 Department of Computer Science and Engineering 1,2 Don Bosco Institute of Technology, Affiliated by VTU Abstract— In the recent years Sentiment analysis (SA) has gained momentum by the increase of social networking sites. Sentiment analysis has been an important topic for data mining, social media for classifying reviews and thereby rating the entities such as products, movies etc. This paper represents a comparative study of sentiment classification of lexicon based approach and naive bayes classifier of machine learning in sentiment analysis.",
"title": ""
},
{
"docid": "17cdb26d3fd4e915341b21fcf85606c8",
"text": "Persistent occiput posterior (OP) is associated with increased rates of maternal and newborn morbidity. Its diagnosis by physical examination is challenging but is improved with bedside ultrasonography. Occiput posterior discovered in the active phase or early second stage of labor usually resolves spontaneously. When it does not, prophylactic manual rotation may decrease persistent OP and its associated complications. When delivery is indicated for arrest of descent in the setting of persistent OP, a pragmatic approach is suggested. Suspected fetal macrosomia, a biparietal diameter above the pelvic inlet or a maternal pelvis with android features should prompt cesarean delivery. Nonrotational operative vaginal delivery is appropriate when the maternal pelvis has a narrow anterior segment but ample room posteriorly, like with anthropoid features. When all other conditions are met and the fetal head arrests in an OP position in a patient with gynecoid pelvic features and ample room anteriorly, options include cesarean delivery, nonrotational operative vaginal delivery, and rotational procedures, either manual or with the use of rotational forceps. Recent literature suggests that maternal and fetal outcomes with rotational forceps are better than those reported in older series. Although not without significant challenges, a role remains for teaching and practicing selected rotational forceps operations in contemporary obstetrics.",
"title": ""
},
{
"docid": "663342554879c5464a7e1aff969339b7",
"text": "Esthetic surgery of external female genitalia remains an uncommon procedure. This article describes a novel, de-epithelialized, labial rim flap technique for labia majora augmentation using de-epithelialized labia minora tissue otherwise to be excised as an adjunct to labia minora reduction. Ten patients were included in the study. The protruding segments of the labia minora were de-epithelialized with a fine scissors or scalpel instead of being excised, and a bulky section of subcutaneous tissue was obtained. Between the outer and inner surfaces of the labia minora, a flap with a subcutaneous pedicle was created in continuity with the de-epithelialized marginal tissue. A pocket was dissected in the labium majus, and the flap was transposed into the pocket to augment the labia majora. Mean patient age was 39.9 (±13.9) years, mean operation time was 60 min, and mean follow-up period was 14.5 (±3.4) months. There were no major complications (hematoma, wound dehiscence, infection) following surgery. No patient complained of postoperative difficulty with coitus or dyspareunia. All patients were satisfied with the final appearance. Several methods for labia minora reduction have been described. Auxiliary procedures are required with labia minora reduction for better results. Nevertheless, few authors have taken into account the final esthetic appearance of the whole female external genitalia. The described technique in this study is indicated primarily for mild atrophy of the labia majora with labia minora hypertrophy; the technique resulted in perfect patient satisfaction with no major complications or postoperative coital problems. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "0a335ec3a17c202e92341b51a90d9f61",
"text": "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new stateof-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"title": ""
},
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
},
{
"docid": "3487dcd4c0e609b3683175ce5b056563",
"text": "Various surgical techniques are available in the management of pilonidal sinus, but controversy concerning the optimal surgical approach persists. The present study analyzes the outcome of unroofing and curettage as the primary intervention for acute and chronic pilonidal disease. A total of 297 consecutive patients presenting with chronic disease, acute abscess, or recurrent disease were treated with unroofing and curettage. The wound was left open to heal by secondary intention. Hospitalization, time required to resume daily activities and return to work, healing time, and recurrence rates were recorded. All patients were discharged within the first 24 h after operation. The median period before returning to work was 3.2 ± 1.2 days, and the mean time for wound healing was 5.4 ± 1.1 weeks. Six patients were readmitted with recurrence of the disease within the first six postoperative months. All recurrences were in patients who did not follow the wound care advice and who did not come to regular weekly appointments. Patients with recurrence underwent repeat surgery by the same technique with good results. Unroofing and curettage for pilonidal sinus disease is an easy and effective technique. The vast majority of the patients, including those with abscess as well as those with chronic disease, will heal with this simple procedure, after which even recurrences can be managed successfully with the same procedure. Relying on these results, we advocate unroofing and curettage as the procedure of choice in the management of pilonidal disease.",
"title": ""
},
{
"docid": "ce463006a11477c653c15eb53f673837",
"text": "This paper presents a meaning-based statistical math word problem (MWP) solver with understanding, reasoning and explanation. It comprises a web user interface and pipelined modules for analysing the text, transforming both body and question parts into their logic forms, and then performing inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating the extracted math quantity with its associated syntactic and semantic information (which specifies the physical meaning of that quantity). Those role-tags are then used to identify the desired operands and filter out irrelevant quantities (so that the answer can be obtained precisely). Since the physical meaning of each quantity is explicitly represented with those role-tags and used in the inference process, the proposed approach could explain how the answer is obtained in a human comprehensible way.",
"title": ""
},
{
"docid": "0aa7a61ae2d73b017b5acdd885d7c0ef",
"text": "3GPP Long Term Evolution-Advanced (LTE-A) aims at enhancement of LTE performance in many respects including the system capacity and network coverage. This enhancement can be accomplished by heterogeneous networks (HetNets) where additional micro-nodes that require lower transmission power are efficiently deployed. More careful management of mobility and handover (HO) might be required in HetNets compared to homogeneous networks where all nodes require the same transmission power. In this article, we provide a technical overview of mobility and HO management for HetNets in LTEA. Moreover, we investigate the A3-event which requires a certain criterion to be met for HO. The criterion involves the reference symbol received power/quality of user equipment (UE), hysteresis margin, and a number of offset parameters based on proper HO timing, i.e., time-to-trigger (TTT). Optimum setting of these parameters are not trivial task, and has to be determined depending on UE speed, propagation environment, system load, deployed HetNets configuration, etc. Therefore, adaptive TTT values with given hysteresis margin for the lowest ping pong rate within 2 % of radio link failure rate depending on UE speed and deployed HetNets configuration are investigated in this article.",
"title": ""
},
{
"docid": "fd11fbed7a129e3853e73040cbabb56c",
"text": "A digitally modulated power amplifier (DPA) in 1.2 V 0.13 mum SOI CMOS is presented, to be used as a building block in multi-standard, multi-band polar transmitters. It performs direct amplitude modulation of an input RF carrier by digitally controlling an array of 127 unary-weighted and three binary-weighted elementary gain cells. The DPA is based on a novel two-stage topology, which allows seamless operation from 800 MHz through 2 GHz, with a full-power efficiency larger than 40% and a 25.2 dBm maximum envelope power. Adaptive digital predistortion is exploited for DPA linearization. The circuit is thus able to reconstruct 21.7 dBm WCDMA/EDGE signals at 1.9 GHz with 38% efficiency and a higher than 10 dB margin on all spectral specifications. As a result of the digital modulation technique, a higher than 20.1 % efficiency is guaranteed for WCDMA signals with a peak-to-average power ratio as high as 10.8 dB. Furthermore, a 15.3 dBm, 5 MHz WiMAX OFDM signal is successfully reconstructed with a 22% efficiency and 1.53% rms EVM. A high 10-bit nominal resolution enables a wide-range TX power control strategy to be implemented, which greatly minimizes the quiescent consumption down to 10 mW. A 16.4% CDMA average efficiency is thus obtained across a > 70 dB power control range, while complying with all the spectral specifications.",
"title": ""
},
{
"docid": "d5bc3147e23f95a070bce0f37a96c2a8",
"text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.",
"title": ""
}
] |
scidocsrr
|
1a954f582f8660d7acb410cebfe1a9d1
|
Big Data: Understanding Big Data
|
[
{
"docid": "f35d164bd1b19f984b10468c41f149e3",
"text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.",
"title": ""
},
{
"docid": "cd35602ecb9546eb0f9a0da5f6ae2fdf",
"text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language HiveQL, which are compiled into map-reduce jobs executed on Hadoop. In addition, HiveQL supports custom map-reduce scripts to be plugged into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog, Hive-Metastore, containing schemas and statistics, which is useful in data exploration and query optimization. In Facebook, the Hive warehouse contains several thousand tables with over 700 terabytes of data and is being used extensively for both reporting and ad-hoc analyses by more than 100 users. The rest of the paper is organized as follows. Section 2 describes the Hive data model and the HiveQL language with an example. Section 3 describes the Hive system architecture and an overview of the query life cycle. Section 4 provides a walk-through of the demonstration. We conclude with future work in Section 5.",
"title": ""
},
{
"docid": "0281c96d3990df1159d58c6b5707b1ad",
"text": "In the Big Data community, MapReduce has been seen as one of the key enabling approaches for meeting continuously increasing demands on computing resources imposed by massive data sets. The reason for this is the high scalability of the MapReduce paradigm which allows for massively parallel and distributed execution over a large number of computing nodes. This paper identifies MapReduce issues and challenges in handling Big Data with the objective of providing an overview of the field, facilitating better planning and management of Big Data projects, and identifying opportunities for future research in this field. The identified challenges are grouped into four main categories corresponding to Big Data tasks types: data storage (relational databases and NoSQL stores), Big Data analytics (machine learning and interactive analytics), online processing, and security and privacy. Moreover, current efforts aimed at improving and extending MapReduce to address identified challenges are presented. Consequently, by identifying issues and challenges MapReduce faces when handling Big Data, this study encourages future Big Data research.",
"title": ""
}
] |
[
{
"docid": "ce1db3eefae52f447eaac1b0e923054f",
"text": "Agriculture and urban activities are major sources of phosphorus and nitrogen to aquatic ecosystems. Atmospheric deposition further contributes as a source of N. These nonpoint inputs of nutrients are difficult to measure and regulate because they derive from activities dispersed over wide areas of land and are variable in time due to effects of weather. In aquatic ecosystems, these nutrients cause diverse problems such as toxic algal blooms, loss of oxygen, fish kills, loss of biodiversity (including species important for commerce and recreation), loss of aquatic plant beds and coral reefs, and other problems. Nutrient enrichment seriously degrades aquatic ecosystems and impairs the use of water for drinking, industry, agriculture, recreation, and other purposes. Based on our review of the scientific literature, we are certain that (1) eutrophication is a widespread problem in rivers, lakes, estuaries, and coastal oceans, caused by overenrichment with P and N; (2) nonpoint pollution, a major source of P and N to surface waters of the United States, results primarily from agriculture and urban activity, including industry; (3) inputs of P and N to agriculture in the form of fertilizers exceed outputs in produce in the United States and many other nations; (4) nutrient flows to aquatic ecosystems are directly related to animal stocking densities, and under high livestock densities, manure production exceeds the needs of crops to which the manure is applied; (5) excess fertilization and manure production cause a P surplus to accumulate in soil, some of which is transported to aquatic ecosystems; and (6) excess fertilization and manure production on agricultural lands create surplus N, which is mobile in many soils and often leaches to downstream aquatic ecosystems, and which can also volatilize to the atmosphere, redepositing elsewhere and eventually reaching aquatic ecosystems. If current practices continue, nonpoint pollution of surface waters is virtually certain to increase in the future. Such an outcome is not inevitable, however, because a number of technologies, land use practices, and conservation measures are capable of decreasing the flow of nonpoint P and N into surface waters. From our review of the available scientific information, we are confident that: (1) nonpoint pollution of surface waters with P and N could be reduced by reducing surplus nutrient flows in agricultural systems and processes, reducing agricultural and urban runoff by diverse methods, and reducing N emissions from fossil fuel burning; and (2) eutrophication can be reversed by decreasing input rates of P and N to aquatic ecosystems, but rates of recovery are highly variable among water bodies. Often, the eutrophic state is persistent, and recovery is slow.",
"title": ""
},
{
"docid": "dbe2d8bcbebfe3747b977ab5216d277e",
"text": "Zero-shot methods in language, vision and other domains rely on a cross-space mapping function that projects vectors from the relevant feature space (e.g., visualfeature-based image representations) to a large semantic word space (induced in an unsupervised way from corpus data), where the entities of interest (e.g., objects images depict) are labeled with the words associated to the nearest neighbours of the mapped vectors. Zero-shot cross-space mapping methods hold great promise as a way to scale up annotation tasks well beyond the labels in the training data (e.g., recognizing objects that were never seen in training). However, the current performance of cross-space mapping functions is still quite low, so that the strategy is not yet usable in practical applications. In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it. In this way, we attain large improvements over the state of the art, both in cross-linguistic (word translation) and cross-modal (image labeling) zero-shot experiments.",
"title": ""
},
{
"docid": "4825ada359be4788a52f1fd616142a19",
"text": "Attachment theory is extended to pertain to developmental changes in the nature of children's attachments to parents and surrogate figures during the years beyond infancy, and to the nature of other affectional bonds throughout the life cycle. Various types of affectional bonds are examined in terms of the behavioral systems characteristic of each and the ways in which these systems interact. Specifically, the following are discussed: (a) the caregiving system that underlies parents' bonds to their children, and a comparison of these bonds with children's attachments to their parents; (b) sexual pair-bonds and their basic components entailing the reproductive, attachment, and caregiving systems; (c) friendships both in childhood and adulthood, the behavioral systems underlying them, and under what circumstances they may become enduring bonds; and (d) kinship bonds (other than those linking parents and their children) and why they may be especially enduring.",
"title": ""
},
{
"docid": "1927e46cd9a198b59b83dedd13881388",
"text": "Vehicle automation has been one of the fundamental applications within the field of intelligent transportation systems (ITS) since the start of ITS research in the mid-1980s. For most of this time, it has been generally viewed as a futuristic concept that is not close to being ready for deployment. However, recent development of “self-driving” cars and the announcement by car manufacturers of their deployment by 2020 show that this is becoming a reality. The ITS industry has already been focusing much of its attention on the concepts of “connected vehicles” (United States) or “cooperative ITS” (Europe). These concepts are based on communication of data among vehicles (V2V) and/or between vehicles and the infrastructure (V2I/I2V) to provide the information needed to implement ITS applications. The separate threads of automated vehicles and cooperative ITS have not yet been thoroughly woven together, but this will be a necessary step in the near future because the cooperative exchange of data will provide vital inputs to improve the performance and safety of the automation systems. Thus, it is important to start thinking about the cybersecurity implications of cooperative automated vehicle systems. In this paper, we investigate the potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities. We analyze the threats on autonomous automated vehicles and cooperative automated vehicles. This analysis shows the need for considerably more redundancy than many have been expecting. We also raise awareness to generate discussion about these threats at this early stage in the development of vehicle automation systems.",
"title": ""
},
{
"docid": "81d82cd481ee3719c74d381205a4a8bb",
"text": "Consider a set of <italic>S</italic> of <italic>n</italic> data points in real <italic>d</italic>-dimensional space, R<supscrpt>d</supscrpt>, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess <italic>S</italic> into a data structure, so that given any query point <italic>q</italic><inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, is the closest point of S to <italic>q</italic> can be reported quickly. Given any positive real ε, data point <italic>p</italic> is a (1 +ε)-<italic>approximate nearest neighbor</italic> of <italic>q</italic> if its distance from <italic>q</italic> is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of <italic>n</italic> points in R<supscrpt>d</supscrpt> in <italic>O(dn</italic> log <italic>n</italic>) time and <italic>O(dn)</italic> space, so that given a query point <italic> q</italic> <inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, and ε > 0, a (1 + ε)-approximate nearest neighbor of <italic>q</italic> can be computed in <italic>O</italic>(<italic>c</italic><subscrpt><italic>d</italic>, ε</subscrpt> log <italic>n</italic>) time, where <italic>c<subscrpt>d,ε</subscrpt></italic>≤<italic>d</italic> <inline-equation> <f><fen lp=\"ceil\">1 + 6d/<g>e</g><rp post=\"ceil\"></fen></f></inline-equation>;<supscrpt>d</supscrpt> is a factor depending only on dimension and ε. In general, we show that given an integer <italic>k</italic> ≥ 1, (1 + ε)-approximations to the <italic>k</italic> nearest neighbors of <italic>q</italic> can be computed in additional <italic>O(kd</italic> log <italic>n</italic>) time.",
"title": ""
},
{
"docid": "8b7f55a5e86e9eac08b3a9cf21f378e9",
"text": "In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius’ goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one’s privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy.",
"title": ""
},
{
"docid": "638f7bf2f47895274995df166564ecc1",
"text": "In recent years, the video game market has embraced augmented reality video games, a class of video games that is set to grow as gaming technologies develop. Given the widespread use of video games among children and adolescents, the health implications of augmented reality technology must be closely examined. Augmented reality technology shows a potential for the promotion of healthy behaviors and social interaction among children. However, the full immersion and physical movement required in augmented reality video games may also put users at risk for physical and mental harm. Our review article and commentary emphasizes both the benefits and dangers of augmented reality video games for children and adolescents.",
"title": ""
},
{
"docid": "4bf6c59cdd91d60cf6802ae99d84c700",
"text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.",
"title": ""
},
{
"docid": "09380650b0af3851e19f18de4a2eacb2",
"text": "This paper presents a novel self-assembly modular robot (Sambot) that also shares characteristics with self-reconfigurable and self-assembly and swarm robots. Each Sambot can move autonomously and connect with the others. Multiple Sambot can be self-assembled to form a robotic structure, which can be reconfigured into different configurable robots and can locomote. A novel mechanical design is described to realize function of autonomous motion and docking. Introducing embedded mechatronics integrated technology, whole actuators, sensors, microprocessors, power and communication unit are embedded in the module. The Sambot is compact and flexble, the overall size is 80×80×102mm. The preliminary self-assembly and self-reconfiguration of Sambot is discussed, and several possible configurations consisting of multiple Sambot are designed in simulation environment. At last, the experiment of self-assembly and self-reconfiguration and locomotion of multiple Sambot has been implemented.",
"title": ""
},
{
"docid": "df7fc38a7c832273e884d2bad078ca93",
"text": "OBJECTIVES\nTo provide UK normative data for the Depression Anxiety and Stress Scale (DASS) and test its convergent, discriminant and construct validity.\n\n\nDESIGN\nCross-sectional, correlational and confirmatory factor analysis (CFA).\n\n\nMETHODS\nThe DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS).\n\n\nRESULTS\nThe best fitting model (CFI =.93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity Conclusions: The DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.",
"title": ""
},
{
"docid": "7100fea85ba7c88f0281f11e7ddc04a9",
"text": "This paper reports the spoof surface plasmons polaritons (SSPPs) based multi-band bandpass filter. An efficient back to back transition from Quasi TEM mode of microstrip line to SSPP mode has been designed by etching a gradient corrugated structure on the metal strip; while keeping ground plane unaltered. SSPP wave is found to be highly confined within the teeth part of corrugation. Complementary split ring resonator has been etched in the ground plane to obtained multiband bandpass filter response. Excellent conversion from QTEM mode to SSPP mode has been observed.",
"title": ""
},
{
"docid": "7760a3074983f36e385299706ed9a927",
"text": "A reflectarray antenna monolithically integrated with 90 RF MEMS switches has been designed and fabricated to achieve switching of the main beam. Aperture coupled microstrip patch antenna (ACMPA) elements are used to form a 10 × 10 element reconfigurable reflectarray antenna operating at 26.5 GHz. The change in the progressive phase shift between the elements is obtained by adjusting the length of the open ended transmission lines in the elements with the RF MEMS switches. The reconfigurable reflectarray is monolithically fabricated with the RF MEMS switches in an area of 42.46 cm2 using an in-house surface micromachining and wafer bonding process. The measurement results show that the main beam can be switched between broadside and 40° in the H-plane at 26.5 GHz.",
"title": ""
},
{
"docid": "cf2018b0fc4e61202696386e2be48d93",
"text": "We carry out an analysis of typability of terms in ML. Our main result is that this problem is DEXPTIME-hard, where by DEXPTIME we mean DTIME(2n0(1)). This, together with the known exponential-time algorithm that solves the problem, yields the DEXPTIME-completeness result. This settles an open problem of P. Kanellakis and J. C. Mitchell.\nPart of our analysis is an algebraic characterization of ML typability in terms of a restricted form of semi-unification, which we identify as acyclic semi-unification. We prove that ML typability and acyclic semi-unification can be reduced to each other in polynomial time. We believe this result is of independent interest.",
"title": ""
},
{
"docid": "26eff65c0a642fd36d4c37560b8d5cda",
"text": "Dual-striplines are gaining popularity in the high-density computer system designs to save printed circuit board (PCB) cost and achieve smaller form factor. However, broad-side near-end/far-end crosstalk (NEXT/FEXT) between dualstriplines is a major concern that potentially has a significant impact to the signal integrity. In this paper, the broadside coupling between two differential pairs, and a differential pair and a single-ended trace in a dual-stripline design, is investigated and characterized. An innovative design methodology and routing strategy are proposed to effectively mitigate the broad-side coupling without additional routing space.",
"title": ""
},
{
"docid": "0014cb14c7acf1dfad67b3f8f50f69dc",
"text": "Latency to end-users and regulatory requirements push large companies to build data centers all around the world. The resulting data is “born” geographically distributed. On the other hand, many Machine Learning applications require a global view of such data in order to achieve the best results. These types of applications form a new class of learning problems, which we call Geo-Distributed Machine Learning (GDML). Such applications need to cope with: 1) scarce and expensive cross-data center bandwidth, and 2) growing privacy concerns that are pushing for stricter data sovereignty regulations. Current solutions to learning from geo-distributed data sources revolve around the idea of first centralizing the data in one data center, and then training locally. As Machine Learning algorithms are communication-intensive, the cost of centralizing the data is thought to be offset by the lower cost of intra-data center communication during training. In this work, we show that the current centralized practice can be far from optimal, and propose a system architecture for doing geo-distributed training. Furthermore, we argue that the geo-distributed approach is structurally more amenable to dealing with regulatory constraints, as raw data never leaves the source data center. Our empirical evaluation on three real datasets confirms the general validity of our approach, and shows that GDML is not only possible but also advisable in many scenarios.",
"title": ""
},
{
"docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2",
"text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"title": ""
},
{
"docid": "bd6115cbcf62434f38ca4b43480b7c5a",
"text": "Most existing person re-identification methods focus on finding similarities between persons between pairs of cameras (camera pairwise re-identification) without explicitly maintaining consistency of the results across the network. This may lead to infeasible associations when results from different camera pairs are combined. In this paper, we propose a network consistent re-identification (NCR) framework, which is formulated as an optimization problem that not only maintains consistency in re-identification results across the network, but also improves the camera pairwise re-identification performance between all the individual camera pairs. This can be solved as a binary integer programing problem, leading to a globally optimal solution. We also extend the proposed approach to the more general case where all persons may not be present in every camera. Using two benchmark datasets, we validate our approach and compare against state-of-the-art methods.",
"title": ""
},
{
"docid": "9e8a1a70af4e52de46d773cec02f99a7",
"text": "In this paper, we build a corpus of tweets from Twitter annotated with keywords using crowdsourcing methods. We identify key differences between this domain and the work performed on other domains, such as news, which makes existing approaches for automatic keyword extraction not generalize well on Twitter datasets. These datasets include the small amount of content in each tweet, the frequent usage of lexical variants and the high variance of the cardinality of keywords present in each tweet. We propose methods for addressing these issues, which leads to solid improvements on this dataset for this task.",
"title": ""
},
{
"docid": "e40eb32613ed3077177d61ac14e82413",
"text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.",
"title": ""
},
{
"docid": "6c1a3792b9f92a4a1abd2135996c5419",
"text": "Artificial neural networks (ANNs) have been applied in many areas successfully because of their ability to learn, ease of implementation and fast real-time operation. In this research, there are proposed two algorithms. The first is cellular neural network (CNN) with noise level estimation. While the second is modify cellular neural network with noise level estimation. The proposed CNN modification is by adding the Rossler chaos to the CNN fed. Noise level algorithm were used to image noise removal approach in order to get a good image denoising processing with high quality image visual and statistical measures. The results of the proposed system show that the combination of chaos CNN with noise level estimation gives acceptable PSNR and RMSE with a best quality visual vision and small computational time.",
"title": ""
}
] |
scidocsrr
|
18148f5dc3b0b61ca640477c84dcd70e
|
Algorithms for Quantum Computers
|
[
{
"docid": "8eac34d73a2bcb4fa98793499d193067",
"text": "We review here the recent success in quantum annealing, i.e., optimization of the cost or energy functions of complex systems utilizing quantum fluctuations. The concept is introduced in successive steps through the studies of mapping of such computationally hard problems to the classical spin glass problems. The quantum spin glass problems arise with the introduction of quantum fluctuations, and the annealing behavior of the systems as these fluctuations are reduced slowly to zero. This provides a general framework for realizing analog quantum computation.",
"title": ""
}
] |
[
{
"docid": "6d825778d5d2cb935aab35c60482a267",
"text": "As the workforce ages rapidly in industrialized countries, a phenomenon known as the graying of the workforce, new challenges arise for firms as they have to juggle this dramatic demographical change (Trend 1) in conjunction with the proliferation of increasingly modern information and communication technologies (ICTs) (Trend 2). Although these two important workplace trends are pervasive, their interdependencies have remained largely unexplored. While Information Systems (IS) research has established the pertinence of age to IS phenomena from an empirical perspective, it has tended to model the concept merely as a control variable with limited understanding of its conceptual nature. In fact, even the few IS studies that used the concept of age as a substantive variable have mostly relied on stereotypical accounts alone to justify their age-related hypotheses. Further, most of these studies have examined the role of age in the same phenomenon (i.e., initial adoption of ICTs), implying a marked lack of diversity with respect to the phenomena under investigation. Overall, IS research has yielded only limited insight into the role of age in phenomena involving ICTs. In this essay, we argue for the importance of studying agerelated impacts more carefully and across various IS phenomena, and we enable such research by providing a research agenda that IS scholars can use. In doing so, we hope that future research will further both our empirical and conceptual understanding of the managerial challenges arising from the interplay of a graying workforce and rapidly evolving ICTs. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "514afc7846a1d9c3ce60c2ae392b3e43",
"text": "Scientific workflows facilitate automation, reuse, and reproducibility of scientific data management and analysis tasks. Scientific workflows are often modeled as dataflow networks, chaining together processing components (called actors) that query, transform, analyse, and visualize scientific datasets. Semantic annotations relate data and actor schemas with conceptual information from a shared ontology, to support scientific workflow design, discovery, reuse, and validation in the presence of thousands of potentially useful actors and datasets. However, the creation of semantic annotations is complex and time-consuming. We present a calculus and two inference algorithms to automatically propagate semantic annotations through workflow actors described by relational queries. Given an input annotation α and a query q, forward propagation computes an output annotation α′; conversely, backward propagation infers α from q and α′.",
"title": ""
},
{
"docid": "c7f0a749e38b3b7eba871fca80df9464",
"text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.",
"title": ""
},
{
"docid": "8be33fad66b25a9d3a4b05dbfc1aac5d",
"text": "A question-answering system needs to be able to reason about unobserved causes in order to answer questions of the sort that people face in everyday conversations. Recent neural network models that incorporate explicit memory and attention mechanisms have taken steps towards this capability. However, these models have not been tested in scenarios for which reasoning about the unobservable mental states of other agents is necessary to answer a question. We propose a new set of tasks inspired by the well-known false-belief test to examine how a recent question-answering model performs in situations that require reasoning about latent mental states. We find that the model is only successful when the training and test data bear substantial similarity, as it memorizes how to answer specific questions and cannot reason about the causal relationship between actions and latent mental states. We introduce an extension to the model that explicitly simulates the mental representations of different participants in a reasoning task, and show that this capacity increases the model’s performance on our theory of mind test.",
"title": ""
},
{
"docid": "8b57c1f4c865c0a414b2e919d19959ce",
"text": "A microstrip HPF with sharp attenuation by using cross-coupling is proposed in this paper. The HPF consists of parallel plate- and gap type- capacitors and inductor lines. The one block of the HPF has two sections of a constant K filter in the bridge T configuration. Thus the one block HPF is first coarsely designed and the performance is optimized by circuit simulator. With the gap capacitor adjusted the proposed HPF illustrates the sharp attenuation characteristics near the cut-off frequency made by cross-coupling between the inductor lines. In order to improve the stopband performance, the cascaded two block HPF is examined. Its measured results show the good agreement with the simulated ones giving the sharper attenuation slope.",
"title": ""
},
{
"docid": "288f8a2dab0c32f85c313f5a145e47a5",
"text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details. becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. 
The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S,A, R, γ, P ) where S is a state space, A is an action space, R : S×A×S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γt ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value vπ(s) ≡ E [R1 + γ1R2 + γ1γ2R3 + . . . | S0 = s] . (1) The expectation is over the random variables At ∼ π(St), St+1 ∼ P (St, At), and Rt+1 ∼ R(St, At, St+1), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] vπ(s) = E [Rt+1 + γt+1vπ(St+1) | St = s] . We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions made by an approximate value function v(s;θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal 2 difference (TD) learning [10], where the estimate at time t is updated towards Z t ≡ Rt+1 + γt+1v(St+1;θ) or Z t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)v(St+n;θ) ,(2) where Z t is the n-step bootstrap target, and the TD-error is δ n t ≡ Z t − v(St;θ). 3 Proposed solution: Natural value approximators The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, Vt ≡ v(St;θ), is a conventional value function estimate at time t. 
The second estimate, Gpt ≡ Gβt−1 −Rt γt if γt > 0 and t > 0 , (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, Gβt ≡ βtG p t + (1− βt)Vt = (1− βt)Vt + βt Gβt−1 −Rt γt , (4) is a convex combination of the first two estimates1 formed by a time-dependent blending coefficient βt. This coefficient is a learned function of state β(·;θ) : S → [0, 1], over the same parameters θ, and we denote βt ≡ β(St;θ). We call Gβt the natural value estimate at time t and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate Vt and the target Zt, weighted by how much it is used in the natural value estimate, JV ≡ E [ [[1− βt]]([[Zt]]− Vt) ] , (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient βt, Jβ ≡ E [ ([[Zt]]− (βt [[Gpt ]] + (1− βt)[[Vt]])) ] . (6) These two losses are summed into a joint loss, J = JV + cβJβ , (7) where cβ is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of Vt are adapted with the first loss and parameters of βt are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using Gβt instead of Vt leads to refined prediction targets Z t ≡ Rt+1 + γt+1G β t+1 or Z β,n t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)G β t+n . (8) 4 Illustrative Examples We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate Gβt instead of the direct value estimate Vt. Note the mixed recursion in the definition, G depends on G , and vice-versa. 3 Sparse rewards Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state St in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so St ∈ R. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input",
"title": ""
},
{
"docid": "3cb0bddb1ed916cffdff3624e61d49cd",
"text": "Thh paper presents a new method for computing the configuration-space map of obstacles that is used in motion-planning algorithms. The method derives from the observation that, when the robot is a rigid object that can only translate, the configuration space is a convolution of the workspace and the robot. This convolution is computed with the use of the Fast Fourier Transform (FFT) algorithm. The method is particularly promising for workspaces with many andlor complicated obstacles, or when the shape of the robot is not simple. It is an inherently parallel method that can significantly benefit from existing experience and hardware on the FFT.",
"title": ""
},
{
"docid": "6989ae9a7e6be738d0d2e8261251a842",
"text": "A single-feed reconfigurable square-ring patch antenna with pattern diversity is presented. The antenna structure has four shorting walls placed respectively at each edge of the square-ring patch, in which two shorting walls are directly connected to the patch and the others are connected to the patch via pin diodes. By controlling the states of the pin diodes, the antenna can be operated at two different modes: monopolar plat-patch and normal patch modes; moreover, the 10 dB impedance bandwidths of the two modes are overlapped. Consequently, the proposed antenna allows its radiation pattern to be switched electrically between conical and broadside radiations at a fixed frequency. Detailed design considerations of the proposed antenna are described. Experimental and simulated results are also shown and discussed",
"title": ""
},
{
"docid": "a408e25435dded29744cf2af0f7da1e5",
"text": "Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.",
"title": ""
},
{
"docid": "a67f7593ea049be1e2785108b6181f7d",
"text": "This paper describes torque characteristics of the interior permanent magnet synchronous motor (IPMSM) using the inexpensive ferrite magnets. IPMSM model used in this study has the spoke and the axial type magnets in the rotor, and torque characteristics are analyzed by the three-dimensional finite element method (3D-FEM). As a result, torque characteristics can be improved by using both the spoke type magnets and the axial type magnets in the rotor.",
"title": ""
},
{
"docid": "241542e915e51ce1505c7d24641e4e0b",
"text": "Over the past decade, research has increased our understanding of the effects of physical activity at opposite ends of the spectrum. Sedentary behaviour—too much sitting—has been shown to increase risk of chronic disease, particularly diabetes and cardiovascular disease. There is now a clear need to reduce prolonged sitting. Secondly, evidence on the potential of high intensity interval training inmanaging the same chronic diseases, as well as reducing indices of cardiometabolic risk in healthy adults, has emerged. This vigorous training typically comprises multiple 3-4 minute bouts of high intensity exercise interspersed with several minutes of low intensity recovery, three times a week. Between these two extremes of the activity spectrum is the mainstream public health recommendation for aerobic exercise, which is similar in many developed countries. The suggested target for older adults (≥65) is the same as for other adults (18-64): 150 minutes a week of moderate intensity activity in bouts of 10 minutes or more. It is often expressed as 30 minutes of brisk walking or equivalent activity five days a week, although 75 minutes of vigorous intensity activity spread across the week, or a combination of moderate and vigorous activity are sometimes suggested. Physical activity to improve strength should also be done at least two days a week. The 150 minute target is widely disseminated to health professionals and the public. However, many people, especially in older age groups, find it hard to achieve this level of activity. We argue that when advising patients on exercise doctors should encourage people to increase their level of activity by small amounts rather than focus on the recommended levels. The 150 minute target, although warranted, may overshadow other less concrete elements of guidelines. These include finding ways to do more lower intensity lifestyle activity. As people get older, activity may become more relevant for sustaining the strength, flexibility, and balance required for independent living in addition to the strong associations with hypertension, coronary heart disease, stroke, diabetes, breast cancer, and colon cancer. Observational data have confirmed associations between increased physical activity and reduction in musculoskeletal conditions such as arthritis, osteoporosis, and sarcopenia, and better cognitive acuity and mental health. Although these links may be modest and some lack evidence of causality, they may provide sufficient incentives for many people to be more active. Research into physical activity",
"title": ""
},
{
"docid": "ca19a74fde1b9e3a0ab76995de8b0f36",
"text": "Sensors on (or attached to) mobile phones can enable attractive sensing applications in different domains, such as environmental monitoring, social networking, healthcare, transportation, etc. We introduce a new concept, sensing as a service (S2aaS), i.e., providing sensing services using mobile phones via a cloud computing system. An S2aaS cloud needs to meet the following requirements: 1) it must be able to support various mobile phone sensing applications on different smartphone platforms; 2) it must be energy-efficient; and 3) it must have effective incentive mechanisms that can be used to attract mobile users to participate in sensing activities. In this vision paper, we identify unique challenges of designing and implementing an S2aaS cloud, review existing systems and methods, present viable solutions, and point out future research directions.",
"title": ""
},
{
"docid": "361e874cccb263b202155ef92e502af3",
"text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.",
"title": ""
},
{
"docid": "88660d823f1c20cf0b75b665c66af696",
"text": "A pectus index can be derived from dividing the transverse diameter of the chest by the anterior-posterior diameter on a simple CT scan. In a preliminary report, all patients who required operative correction for pectus excavatum had a pectus index greater than 3.25 while matched normal controls were all less than 3.25. A simple CT scan may be a useful adjunct in objective evaluation of children and teenagers for surgery of pectus excavatum.",
"title": ""
},
{
"docid": "65bea826c88408b87ce2e2c17944835c",
"text": "The broad spectrum of clinical signs in canine cutaneous epitheliotropic T-cell lymphoma mimics many inflammatory skin diseases and is a diagnostic challenge. A 13-year-old-male castrated golden retriever crossbred dog presented with multifocal flaccid bullae evolving into deep erosions. A shearing force applied to the skin at the periphery of the erosions caused the epidermis to further slide off the dermis suggesting intraepidermal or subepidermal separation. Systemic signs consisted of profound weight loss and marked respiratory distress. Histologically, the superficial and deep dermis were infiltrated by large, CD3-positive neoplastic lymphocytes and mild epitheliotropism involved the deep epidermis, hair follicle walls and epitrichial sweat glands. There was partial loss of the stratum basale. Bullous lesions consisted of large dermoepidermal and intraepidermal clefts that contained loose accumulations of neutrophils mixed with fewer neoplastic cells in proteinaceous fluid. The lifted epidermis was often devitalized and bordered by hydropic degeneration and partial epidermal collapse. Similar neoplastic lymphocytes formed small masses in the lungs associated with broncho-invasion. Clonal rearrangement analysis of antigen receptor genes in samples from skin and lung lesions using primers specific for canine T-cell receptor gamma (TCRgamma) produced a single-sized amplicon of identical sequence, indicating that both lesions resulted from the expansion of the same neoplastic T-cell population. Macroscopic vesiculobullous lesions with devitalization of the lesional epidermis should be included in the broad spectrum of clinical signs presented by canine cutaneous epitheliotropic T-cell lymphoma.",
"title": ""
},
{
"docid": "ead6596d7f368da713f36f572c79bf94",
"text": "The total variation (TV) model is a classical and effective model in image denoising, but the weighted total variation (WTV) model has not attracted much attention. In this paper, we propose a new constrained WTV model for image denoising. A fast denoising dual method for the new constrained WTV model is also proposed. To achieve this task, we combines the well known gradient projection (GP) and the fast gradient projection (FGP) methods on the dual approach for the image denoising problem. Experimental results show that the proposed method outperforms currently known GP andFGP methods, and canbe applicable to both the isotropic and anisotropic WTV functions.",
"title": ""
},
{
"docid": "89aa13fe76bf48c982e44b03acb0dd3d",
"text": "Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent’s performance is evaluated and compared with Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.",
"title": ""
},
{
"docid": "04fe2706a8da54365e4125867613748b",
"text": "We consider a sequence of multinomial data for which the probabilities associated with the categories are subject to abrupt changes of unknown magnitudes at unknown locations. When the number of categories is comparable to or even larger than the number of subjects allocated to these categories, conventional methods such as the classical Pearson’s chi-squared test and the deviance test may not work well. Motivated by high-dimensional homogeneity tests, we propose a novel change-point detection procedure that allows the number of categories to tend to infinity. The null distribution of our test statistic is asymptotically normal and the test performs well with finite samples. The number of change-points is determined by minimizing a penalized objective function based on segmentation, and the locations of the change-points are estimated by minimizing the objective function with the dynamic programming algorithm. Under some mild conditions, the consistency of the estimators of multiple change-points is established. Simulation studies show that the proposed method performs satisfactorily for identifying change-points in terms of power and estimation accuracy, and it is illustrated with an analysis of a real data set.",
"title": ""
},
{
"docid": "2cebd2fd12160d2a3a541989293f10be",
"text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.",
"title": ""
},
{
"docid": "6f94a57f7ae1a818c3bd5e7f6f2cea0f",
"text": "We propose a novel hybrid metric learning approach to combine multiple heterogenous statistics for robust image set classification. Specifically, we represent each set with multiple statistics – mean, covariance matrix and Gaussian distribution, which generally complement each other for set modeling. However, it is not trivial to fuse them since the mean vector with d-dimension often lies in Euclidean space R, whereas the covariance matrix typically resides on Riemannian manifold Sym+d . Besides, according to information geometry, the space of Gaussian distribution can be embedded into another Riemannian manifold Sym+d+1. To fuse these statistics from heterogeneous spaces, we propose a Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to exploit both Euclidean and Riemannian metrics for embedding their original spaces into high dimensional Hilbert spaces and then jointly learn hybrid metrics with discriminant constraint. The proposed method is evaluated on two tasks: set-based object categorization and video-based face recognition. Extensive experimental results demonstrate that our method has a clear superiority over the state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
10e2cbfa32f8e2e6759561c28dfd1938
|
Constructing Thai Opinion Mining Resource: A Case Study on Hotel Reviews
|
[
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "b1bb5751e409d0fe44754624a4145e70",
"text": "Capacity planning determines the optimal product mix based on the available tool sets and allocates production capacity according to the forecasted demands for the next few months. MaxIt is the previous capacity planning system for Intel's Flash Product Group (FPG) Assembly & Test Manufacturing (ATM). It only applied to single product family scenarios with simple process routing. However, new Celluar Handhold Group (CHG) products need to go through flexible and reentrant ATM routes. In this paper, we introduce MaxItPlus, which is an enhanced MaxIt using MILP (mixed integer linear programming) to conduct capacity planning of multiple product families with mixed process routes in a multifactory ATM environment. We also present the detailed mathematical formulation, the system architecture, and implementation results. The project will help Intel global Flash ATM to achieve a single and efficient capacity planning process for all FPG and CHG products and gain $10 M in marginal profit (as determined by the finance department)",
"title": ""
},
{
"docid": "dd11d7291d8f0ee2313b74dc5498acfa",
"text": "Going further At this point, the theorem is proved. While for every summarizer σ there exists at least one tuple (θ,O), in practice there exist multiple tuples, and the one proposed by the proof would not be useful to rank models of summary quality. We can formulate an algorithm which constructs θ from σ and which yields an ordering of candidate summaries. Let σD\\{s1,...,sn} be the summarizer σ which still uses D as initial document collection, but which is not allowed to output sentences from {s1, . . . , sn} in the final summary. For a given summary S to score, let Rσ,S be the smallest set of sentences {s1, . . . , sn} that one has to remove fromD such that σD\\R outputs S. Then the definition of θσ follows:",
"title": ""
},
{
"docid": "11b20602fc9d6e97a5bcc857da7902d0",
"text": "This research investigates the Quality of Service (QoS) interaction at the edge of differentiated service (DiffServ) domain, denoted by video gateway (VG). VG is responsible for coordinating the QoS mapping between video applications and DiffServ enabled network. To accomplish the goal of achieving economical and high-quality end-to-end video streaming, which utilizes its awareness of relative service differentiation, the proposed QoS control framework includes the following three components: 1) the relative priority based indexing and categorization of streaming video content at sender, 2) the differentiated QoS levels with load variation in DiffServ networks, and 3) the feedforward and feedback mechanisms assisting QoS mapping of categorized index to DS level at the proposed VG. Especially, we focus on building a framework for dynamic QoS mapping, which intends to overcome both the QoS demand variations of CM applications (e.g., varying priorities from aggregated/categorized packets) and the QoS supply variations of DiffServ network (e.g., varying loss/delay due to fluctuating network loads). Thus, with the proposed QoS controls in both feedforward and feedback fashion, enhanced quality provisioning for CM applications (especially video streaming) is investigated under the given pricing model (e.g., DS level differentiated price/packet).",
"title": ""
},
{
"docid": "c4f9c924963cadc658ad9c97560ea252",
"text": "A novel broadband circularly polarized (CP) antenna is proposed. The operating principle of this CP antenna is different from those of conventional CP antennas. An off-center-fed dipole is introduced to achieve the 90° phase difference required for circular polarization. The new CP antenna consists of two off-center-fed dipoles. Combining such two new CP antennas leads to a bandwidth enhancement for circular polarization. A T-shaped microstrip probe is used to excite the broadband CP antenna, featuring a simple planar configuration. It is shown that the new broadband CP antenna achieves an axial ratio (AR) bandwidth of 55% (1.69-3.0 GHz) for AR <; 3 dB, an impedance bandwidth of 60% (1.7-3.14 GHz) for return loss (RL) > 15 dB, and an antenna gain of 6-9 dBi. The new mechanism for circular polarization is described and an experimental verification is presented.",
"title": ""
},
{
"docid": "5268fd63c99f43d1a155c0078b2e5df5",
"text": "With Docker gaining widespread popularity in the recent years, the container scheduler becomes a crucial role for the exploding containerized applications and services. In this work, the container host energy conservation, the container image pulling costs from the image registry to the container hosts and the workload network transition costs from the clients to the container hosts are evaluated in combination. By modeling the scheduling problem as an integer linear programming, an effective and adaptive scheduler is proposed. Impressive cost savings were achieved compared to Docker Swarm scheduler. Moreover, it can be easily integrated into the open-source container orchestration frameworks.",
"title": ""
},
{
"docid": "4645d0d7b1dfae80657f75d3751ef72a",
"text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.",
"title": ""
},
{
"docid": "203312195c3df688a594d0c05be72b5a",
"text": "Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) are embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representation from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of holed convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback.",
"title": ""
},
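The record above stacks dilated ("holed") causal convolutions with residual connections for next-item recommendation. Below is a minimal PyTorch sketch of one such residual block; the layer sizes, normalization choice, and dilation schedule are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """One residual block of causal, dilated 1-D convolutions over item embeddings."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Amount of left padding that makes the convolution causal:
        # position t only sees items at positions <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation * 2)
        self.norm1 = nn.LayerNorm(channels)
        self.norm2 = nn.LayerNorm(channels)

    def forward(self, x):                            # x: (batch, channels, seq_len)
        h = nn.functional.pad(x, (self.pad, 0))      # causal left padding
        h = self.conv1(h)
        h = torch.relu(self.norm1(h.transpose(1, 2)).transpose(1, 2))
        h = nn.functional.pad(h, (self.pad * 2, 0))  # second conv has doubled dilation
        h = self.conv2(h)
        h = torch.relu(self.norm2(h.transpose(1, 2)).transpose(1, 2))
        return x + h                                 # residual connection eases optimization
```

Stacking several such blocks with growing dilations lets position t condition on a long history of previous items without any pooling, which is the point made in the abstract.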
{
"docid": "4ab58e47f1f523ba3f48c37bc918696e",
"text": "In this work, we design a neural network for recognizing emotions in speech, using the standard IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting highlevel features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. Applying techniques of data augmentation, layerwise learning rate adjustment and batch normalization, we obtain highly competitive results, with 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions. Moreover, we show that the model performance is strongly correlated with the labeling confidence, which highlights a fundamental difficulty in emotion recognition.",
"title": ""
},
{
"docid": "858a5ed092f02d057437885ad1387c9f",
"text": "The current state-of-the-art singledocument summarization method generates a summary by solving a Tree Knapsack Problem (TKP), which is the problem of finding the optimal rooted subtree of the dependency-based discourse tree (DEP-DT) of a document. We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT). However, there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT. To improve the ROUGE score, we propose a novel discourse parser that directly generates the DEP-DT. The evaluation results showed that the TKP with our parser outperformed that with the state-of-the-art RST-DT parser, and achieved almost equivalent ROUGE scores to the TKP with the gold DEP-DT.",
"title": ""
},
{
"docid": "ef95b5b3a0ff0ab0907565305d597a9d",
"text": "Control flow defenses against ROP either use strict, expensive, but strong protection against redirected RET instructions with shadow stacks, or much faster but weaker protections without. In this work we study the inherent overheads of shadow stack schemes. We find that the overhead is roughly 10% for a traditional shadow stack. We then design a new scheme, the parallel shadow stack, and show that its performance cost is significantly less: 3.5%. Our measurements suggest it will not be easy to improve performance on current x86 processors further, due to inherent costs associated with RET and memory load/store instructions. We conclude with a discussion of the design decisions in our shadow stack instrumentation, and possible lighter-weight alternatives.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "9d2a73c8eac64ed2e1af58a5883229c3",
"text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.",
"title": ""
},
{
"docid": "236d3cb8566d4ae72add4a4b8b1f1fcc",
"text": "SAP HANA is a pioneering, and one of the best performing, data platform designed from the grounds up to heavily exploit modern hardware capabilities, including SIMD, and large memory and CPU footprints. As a comprehensive data management solution, SAP HANA supports the complete data life cycle encompassing modeling, provisioning, and consumption. This extended abstract outlines the vision and planned next step of the SAP HANA evolution growing from a core data platform into an innovative enterprise application platform as the foundation for current as well as novel business applications in both on-premise and on-demand scenarios. We argue that only a holistic system design rigorously applying co-design at di↵erent levels may yield a highly optimized and sustainable platform for modern enterprise applications. 1. THE BEGINNING: SAP HANA DATA PLATFORM A comprehensive data management solution has become one of the most critical assets in large enterprises. Modern data management solutions must cover a wide spectrum of additional data structures ranging from simple keyvalues models to complex graph structured data sets and document-centric data stores. Complex query and manipulation patterns are issued against the database reflecting the algorithmic side of complex enterprise applications. Additionally, data consumption activities with analytical query patterns are no longer reserved for decision makers or specialized data scientists but are increasingly becoming an integral part of complex operational business processes requiring support for analytical as well as transactional workloads managed within the same system [4]. Dealing with these challenges [5] demanded a complete re-thinking of traditional database architectures and data management approaches now made possible by advances in hardware architectures. The development of SAP HANA accepted this challenge head on and started a new generation Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were invited to present their results at The 39th International Conference on Very Large Data Bases, August 26th 30th 2013, Riva del Garda, Trento, Italy. Proceedings of the VLDB Endowment, Vol. 6, No. 11 Copyright 2013 VLDB Endowment 2150-8097/13/09... $ 10.00. Figure 1: The SAP HANA platform of database system design. The SAP HANA database server now comprises a centrally, and tightly, orchestrated collection of di↵erent processing capabilities, e.g., an in-memory columnar relational store, a graph engine, native support for text processing, comprehensive spatial support, etc., all running within a single system environment and, therefore, within a single transactional sphere of control without the need for data replication and synchronization [2]. Secondly, and most importantly, SAP HANA has triggered a major shift in the database industry from the classical disk-centric database system design to a ground breaking main-memory centric system design [3]. 
The mainstream availability of very large main memory and CPU core footprints within single compute nodes, combined with SIMD architectures and sophisticated cluster systems based on high speed interconnects, was and remains, the central design guideline of the SAP HANA database server. SAP HANA was the first commercial system to systematically reflect, and exploit, the shift in memory hierarchies and CPU architectures in order to optimize data structures and access paths. As a result, SAP HANA has yielded orders of magnitude performance gains thereby opening up completely novel application opportunities. Most of the core design advances behind SAP HANA are now finding their way into mainstream database system research and development, thereby reflecting its pioneering role. As a foundational tenet, we see rigorous application of Hardware/Database co-design principles as the main success factor to systematically exploit the underlying hardware platform: Literally every core SAP HANA data structure and routine has been systematically inspected, redesigned",
"title": ""
},
{
"docid": "23583b155fc8ec3301cfef805f568e57",
"text": "We address the problem of covering an environment with robots equipped with sensors. The robots are heterogeneous in that the sensor footprints are different. Our work uses the location optimization framework in with three significant extensions. First, we consider robots with different sensor footprints, allowing, for example, aerial and ground vehicles to collaborate. We allow for finite size robots which enables implementation on real robotic systems. Lastly, we extend the previous work allowing for deployment in non convex environments.",
"title": ""
},
{
"docid": "cf0b98dfd188b7612577c975e08b0c92",
"text": "Depression is a major cause of disability world-wide. The present paper reports on the results of our participation to the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, ii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set and 0.52/0.81, respectively for the test set.",
"title": ""
},
{
"docid": "cbc6bd586889561cc38696f758ad97d2",
"text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.",
"title": ""
},
{
"docid": "88285b058e6b93c2b31e9b1b8d6b657e",
"text": "Corporate incubators for technology development are a recent phenomenon whose functioning and implications are not yet well understood. The resource-based view can offer an explanatory model on how corporate incubators function as specialised corporate units that hatch new businesses. While tangible resources, such as the financial, physical and even explicit knowledge flow, are all visible, and therefore easy to measure, intangible resources such as tacit knowledge and branding flow are harder to detect and localise. Managing the resource flow requires the initial allocation of resources to the corporate incubator during its set-up as well as a continuous resource flow to the technology venture and, during the harvest phase, also from it. Two levels of analysis need to be distinguished: (1) the resource flow between the corporate incubator and the technology venture and (2) the resource flow interface between the corporate incubator and the technology venture. Our empirical findings are based on two phases: First, in-depth case studies of 22 companies through 47 semi-structured interviews that were conducted with managers of large technology-intensive corporations’ corporate incubators in Europe and the U.S., and second, an analysis of the European Commission’s benchmarking survey of 77 incubators.",
"title": ""
},
{
"docid": "e6e6eb1f1c0613a291c62064144ff0ba",
"text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. Adolescents and young adults are more likely to engage in SMS messing, making phone calls, accessing the internet from their phone or playing a mobile driven game. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation seeking behavior has also linked adolescents and young adults to have the desire to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self – esteem use cell phones to form and maintain social relationships. They form an attachment with cell phone which molded their mind that they cannot function without their cell phone on a day-to-day basis. In this context, the study attempts to examine the extent of use of mobile phone and its influence on the academic performance of the students. A face to face survey using structured questionnaire was the method used to elicit the opinions of students between the age group of 18-25 years in three cities covering all the three regions the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants are college-going and were mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. The Statistical Package for Social Sciences (SPSS 16) had been used to work out the distribution of samples in terms of percentages for each specified parameter.",
"title": ""
},
{
"docid": "4b3c69e446dcf1d237db63eb4f106dd7",
"text": "Creating linguistic annotations requires more than just a reliable annotation scheme. Annotation can be a complex endeavour potentially involving many people, stages, and tools. This chapter outlines the process of creating end-toend linguistic annotations, identifying specific tasks that researchers often perform. Because tool support is so central to achieving high quality, reusable annotations with low cost, the focus is on identifying capabilities that are necessary or useful for annotation tools, as well as common problems these tools present that reduce their utility. Although examples of specific tools are provided in many cases, this chapter concentrates more on abstract capabilities and problems because new tools appear continuously, while old tools disappear into disuse or disrepair. The two core capabilities tools must have are support for the chosen annotation scheme and the ability to work on the language under study. Additional capabilities are organized into three categories: those that are widely provided; those that often useful but found in only a few tools; and those that have as yet little or no available tool support. 1 Annotation: More than just a scheme Creating manually annotated linguistic corpora requires more than just a reliable annotation scheme. A reliable scheme, of course, is a central ingredient to successful annotation; but even the most carefully designed scheme will not answer a number of practical questions about how to actually create the annotations, progressing from raw linguistic data to annotated linguistic artifacts that can be used to answer interesting questions or do interesting things. Annotation, especially high-quality annotation of large language datasets, can be a complex process potentially involving many people, stages, and tools, and the scheme only specifies the conceptual content of the annotation. By way of example, the following questions are relevant to a text annotation project and are not answered by a scheme: How should linguistic artifacts be prepared? Will the originals be annotated directly, or will their textual content be extracted into separate files for annotation? In the latter case, what layout or formatting will be kept (lines, paragraphs page breaks, section headings, highlighted text)? What file format will be used? How will typographical errors be handled? Will typos be ignored, changed in the original, changed in extracted content, or encoded as an additional annotation? Who will be allowed to make corrections: the annotators themselves, adjudicators, or perhaps only the project manager? How will annotators be provided artifacts to annotate? How will the order of annotation be specified (if at all), and how will this order be enforced? How will the project manager ensure that each document is annotated the appropriate number of times (e.g., by two different people for double annotation). What inter-annotator agreement measures (IAAs) will be measured, and when? Will IAAs be measured continuously, on batches, or on other subsets of the corpus? How will their measurement at the right time be enforced? Will IAAs be used to track annotator training? If so, what level of IAA will be considered to indicate that training has succeeded? These questions are only a small selection of those that arise during the practical process of conducting annotation. 
The first goal of this chapter is to give an overview of the process of annotation from start to finish, pointing out these sorts of questions and subtasks for each stage. We will start with a known conceptual framework for the annotation process, the MATTER framework (Pustejovsky & Stubbs, 2013) and expand upon it. Our expanded framework is not guaranteed to be complete, but it will give a reader a very strong flavor of the kind of issues that arise so that they can start to anticipate them in the design of their own annotation project. The second goal is to explore the capabilities required by annotation tools. Tool support is central to effecting high quality, reusable annotations with low cost. The focus will be on identifying capabilities that are necessary or useful for annotation tools. Again, this list will not be exhaustive but it will be fairly representative, as the majority of it was generated by surveying a number of annotation experts about their opinions of available tools. Also listed are common problems that reduce tool utility (gathered during the same survey). Although specific examples of tools will be provided in many cases, the focus will be on more abstract capabilities and problems because new tools appear all the time while old tools disappear into disuse or disrepair. Before beginning, it is well to first introduce a few terms. By linguistic artifact, or just artifact, we mean the object to which annotations are being applied. These could be newspaper articles, web pages, novels, poems, TV shows, radio broadcasts, images, movies, or something else that involves language being captured in a semipermanent form. By annotation scheme, or just scheme, we follow the terminology as given in the early chapters of this volume, where a scheme comprises a linguistic theory, a derived model of a phenomenon of interest, a specification that defines the actual physical format of the annotation, and the guidelines that explain to an annotator how to apply the specification to linguistic artifacts. (citation to Chapter III by Ide et al.) By computing platform, or just platform, we mean any computational system on which an annotation tool can be run; classically this has meant personal computers, either desktops or laptops, but recently the range of potential computing platforms has expanded dramatically, to include on the one hand things like web browsers and mobile devices, and, on the other, internet-connected annotation servers and service oriented architectures. Choice of computing platform is driven by many things, including the identity of the annotators and their level of sophistication. We will speak of the annotation process or just process within an annotation project. By process, we mean any procedure or activity, at any level of granularity, involved in the production of annotation. This potentially encompasses everything from generating the initial idea, applying the annotation to the artifacts, to archiving the annotated documents for distribution. Although traditionally not considered part of annotation per se, we might also include here writing academic papers about the results of the annotation, as these activities also sometimes require annotation-focused tool support. We will also speak of annotation tools.
By tool we mean any piece of computer software that runs on a computing platform that can be used to implement or carry out a process in the annotation project. Classically conceived annotation tools include software such as the Alembic workbench, Callisto, or brat (Day et al., 1997; Day, McHenry, Kozierok, & Riek, 2004; Stenetorp et al., 2012), but tools can also include software like Microsoft Word or Excel, Apache Tomcat (to run web servers), Subversion or Git (for document revision control), or mobile applications (apps). Tools usually have user interfaces (UIs), but they are not always graphical, fully functional, or even all that helpful. There is a useful distinction between a tool and a component (also called an NLP component, or an NLP algorithm; in UIMA (Apache, 2014) called an annotator), which are pieces of software that are intended to be integrated as libraries into software and can often be strung together in annotation pipelines for applying automatic annotations to linguistic artifacts. Software like tokenizers, part of speech taggers, parsers (Manning et al., 2014), multiword expression detectors (Kulkarni & Finlayson, 2011) or coreference resolvers (Pradhan et al., 2011) are all components. Sometimes the distinction between a tool and a component is not especially clear cut, but it is a useful one nonetheless. The main reason a chapter like this one is needed is that there is no one tool that does everything. There are multiple stages and tasks within every annotation project, typically requiring some degree of customization, and no tool does it all. That is why one needs multiple tools in annotation, and why a detailed consideration of the tool capabilities and problems is needed. 2 Overview of the Annotation Process The first step in an annotation project is, naturally, defining the scheme, but many other tasks must be executed to go from an annotation scheme to an actual set of cleanly annotated files useful for other tasks. 2.1 MATTER & MAMA A good starting place for organizing our conception of the various stages of the process of annotation is the MATTER cycle, proposed by Pustejovsky & Stubbs (2013). This framework outlines six major stages to annotation, corresponding to each letter in the word, defined as follows: M = Model: In this stage, the first of the process, the project leaders set up the conceptual framework for the project. Subtasks may include: Search background work to understand existing theories of the phenomena Create or adopt an abstract model of the phenomenon Define an annotation scheme based on the model Search libraries, the web, and online repositories for potential linguistic artifacts Create corpus artifacts if appropriate artifacts do not exist Measure overall characteristics of artifacts to ground estimates of representativeness and balance Collect the artifacts on which the annotation will be performed Track artifact licenses Measure various statistics of the collected corpus Choose an annotation specification language Build an annotation specification that disti",
"title": ""
},
{
"docid": "9d5c258e4a2d315d3e462ab333f3a6df",
"text": "The modern smart phone and car concepts provide a fertile ground for new location-aware applications, ranging from traffic management to social services. While the functionality is partly implemented at the mobile terminal, there is a rising need for efficient backend processing of high-volume, high update rate location streams. It is in this environment that geofencing, the detection of objects traversing virtual fences, is becoming a universal primitive required by an ever-growing number of applications. To satisfy the functionality and performance requirements of large-scale geofencing applications, we present in this work a backend system for indexing massive quantities of mobile objects and geofences. Our system runs on a cluster of servers, achieving a throughput of location updates that scales linearly with number of machines. The key ingredients to achieve a high performance are a specialized spatial index, a dynamic caching mechanism, and a load-sharing principle that reduces communication overhead to a minimum and enables a shared-nothing architecture. The throughput of the spatial index as well as the performance of the overall system are demonstrated by experiments using simulations of large-scale geofencing applications.",
"title": ""
}
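The geofencing record above hinges on a spatial index that maps each location update to the small set of fences it might cross. As a toy illustration only (the paper's actual index, caching, and load-sharing scheme are not reproduced here), a uniform-grid index for circular fences could look like this:

```python
import math
from collections import defaultdict

class GridGeofenceIndex:
    """Toy uniform-grid index: each circular geofence is registered in every grid
    cell its bounding box overlaps; a location update only tests fences registered
    in its own cell. Distances use a planar approximation in degrees for brevity."""

    def __init__(self, cell_size: float = 0.01):        # cell size in degrees (illustrative)
        self.cell_size = cell_size
        self.cells = defaultdict(list)                   # (ix, iy) -> list of fences

    def _cell(self, lat, lon):
        return (int(math.floor(lat / self.cell_size)),
                int(math.floor(lon / self.cell_size)))

    def add_fence(self, fence_id, lat, lon, radius_deg):
        x0, y0 = self._cell(lat - radius_deg, lon - radius_deg)
        x1, y1 = self._cell(lat + radius_deg, lon + radius_deg)
        for ix in range(x0, x1 + 1):
            for iy in range(y0, y1 + 1):
                self.cells[(ix, iy)].append((fence_id, lat, lon, radius_deg))

    def hits(self, lat, lon):
        """Return the ids of fences containing the given position."""
        out = []
        for fid, flat, flon, r in self.cells.get(self._cell(lat, lon), []):
            if (lat - flat) ** 2 + (lon - flon) ** 2 <= r ** 2:
                out.append(fid)
        return out
```

A production backend of the kind described would use geodesic distance, a more adaptive structure than a fixed grid, and per-server partitioning of the fence set.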
] |
scidocsrr
|
913cbf1c706a47094aabf3fc2f764150
|
The Impacts of Social Media on Bitcoin Performance
|
[
{
"docid": "c02d207ed8606165e078de53a03bf608",
"text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: anand.bodapati@anderson.ucla.edu), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: rbucklin@anderson.ucla.edu), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*",
"title": ""
}
] |
[
{
"docid": "7e40c98b9760e1f47a0140afae567b7f",
"text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "b78f1e6a5e93c1ad394b1cade293829f",
"text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing",
"title": ""
},
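The watershed record above combines two trained pixel classifiers with marker-controlled watershed. A compact sketch of that pipeline, under the assumption that the classifiers expose a scikit-learn-style predict_proba and that the boundary probability map can serve directly as the topographic surface (the abstract speaks of flooding an inverted probability map), could be:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def classifier_driven_watershed(features, marker_clf, boundary_clf, marker_thresh=0.9):
    """features: (H, W, F) per-pixel feature stack. marker_clf and boundary_clf are
    assumed to be already trained binary classifiers (e.g., for 'object marker' and
    'object boundary' pixels) whose positive class is column 1 of predict_proba."""
    h, w, f = features.shape
    flat = features.reshape(-1, f)

    # Per-pixel probability of being an object marker / an object boundary.
    p_marker = marker_clf.predict_proba(flat)[:, 1].reshape(h, w)
    p_boundary = boundary_clf.predict_proba(flat)[:, 1].reshape(h, w)

    # Seeds: connected components of confident marker pixels become regional minima.
    markers, _ = ndi.label(p_marker > marker_thresh)

    # Flood the boundary-probability surface instead of a gradient image.
    return watershed(p_boundary, markers=markers)
```

Because the segmentation is driven by learned pixel classification, the same code applies to single-channel or multichannel inputs, which matches the claim in the abstract.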
{
"docid": "fb31ead676acdd048d699ddfb4ddd17a",
"text": "Software defects prediction aims to reduce software testing efforts by guiding the testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality, testing and for better planning of the resources to meet the timelines. The application of statistical software testing defect prediction model in a real life setting is extremely difficult because it requires more number of data variables and metrics and also historical defect data to predict the next releases or new similar type of projects. This paper explains our statistical model, how it will accurately predict the defects for upcoming software releases or projects. We have used 20 past release data points of software project, 5 parameters and build a model by applying descriptive statistics, correlation and multiple linear regression models with 95% confidence intervals (CI). In this appropriate multiple linear regression model the R-square value was 0.91 and its Standard Error is 5.90%. The Software testing defect prediction model is now being used to predict defects at various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.",
"title": ""
},
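The defect-prediction record above fits a multiple linear regression with 95% confidence intervals over past release data. A small sketch of that kind of fit with statsmodels follows; the 20 data points and 5 predictors here are synthetic placeholders, not the paper's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_releases, n_params = 20, 5
X = rng.normal(size=(n_releases, n_params))            # e.g., size, churn, test effort, ...
true_coefs = np.array([3.0, -1.5, 2.0, 0.5, 4.0])
y = 50 + X @ true_coefs + rng.normal(scale=5.0, size=n_releases)   # defect counts

model = sm.OLS(y, sm.add_constant(X)).fit()
print("R-squared:", round(model.rsquared, 3))
print("95% CIs per coefficient:\n", model.conf_int(alpha=0.05))

# Predict defects for an upcoming release from its (placeholder) metrics.
next_release = sm.add_constant(rng.normal(size=(1, n_params)), has_constant="add")
print("Predicted defects:", model.predict(next_release))
```

The reported R-square of 0.91 in the abstract corresponds to model.rsquared here; the comparison of actual versus predicted defects drives the quoted precision figure.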
{
"docid": "8e654ace264f8062caee76b0a306738c",
"text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.",
"title": ""
},
{
"docid": "06672f6316878c80258ad53988a7e953",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "fe57e844c12f7392bdd29a2e2396fc50",
"text": "With the help of modern information communication technology, mobile banking as a new type of financial services carrier can provide efficient and effective financial services for clients. Compare with Internet banking, mobile banking is more secure and user friendly. The implementation of wireless communication technologies may result in more complicated information security problems. Based on the principles of information security, this paper presented issues of information security of mobile banking and discussed the security protection measures such as: encryption technology, identity authentication, digital signature, WPKI technology.",
"title": ""
},
{
"docid": "64ba4467dc4495c6828f2322e8f415f2",
"text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.",
"title": ""
},
{
"docid": "10f3cafc05b3fb3b235df34aebbe0e23",
"text": "To cope with monolithic controller replicas and the current unbalance situation in multiphase converters, a pseudo-ramp current balance technique is proposed to achieve time-multiplexing current balance in voltage-mode multiphase DC-DC buck converter. With only one modulation controller, silicon area and power consumption caused by the replicas of controller can be reduced significantly. Current balance accuracy can be further enhanced since the mismatches between different controllers caused by process, voltage, and temperature variations are removed. Moreover, the offset cancellation control embedded in the current matching unit is used to eliminate intrinsic offset voltage existing at the operational transconductance amplifier for improved current balance. An explicit model, which contains both voltage and current balance loops with non-ideal effects, is derived for analyzing system stability. Experimental results show that current difference between each phase can be decreased by over 83% under both heavy and light load conditions.",
"title": ""
},
{
"docid": "358faa358eb07b8c724efcdb72334dc7",
"text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable to provide the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues. The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.",
"title": ""
},
{
"docid": "c0440776fdd2adab39e9a9ba9dd56741",
"text": "Corynebacterium glutamicum is an important industrial metabolite producer that is difficult to genetically engineer. Although the Streptococcus pyogenes (Sp) CRISPR-Cas9 system has been adapted for genome editing of multiple bacteria, it cannot be introduced into C. glutamicum. Here we report a Francisella novicida (Fn) CRISPR-Cpf1-based genome-editing method for C. glutamicum. CRISPR-Cpf1, combined with single-stranded DNA (ssDNA) recombineering, precisely introduces small changes into the bacterial genome at efficiencies of 86-100%. Large gene deletions and insertions are also obtained using an all-in-one plasmid consisting of FnCpf1, CRISPR RNA, and homologous arms. The two CRISPR-Cpf1-assisted systems enable N iterative rounds of genome editing in 3N+4 or 3N+2 days. A proof-of-concept, codon saturation mutagenesis at G149 of γ-glutamyl kinase relieves L-proline inhibition using Cpf1-assisted ssDNA recombineering. Thus, CRISPR-Cpf1-based genome editing provides a highly efficient tool for genetic engineering of Corynebacterium and other bacteria that cannot utilize the Sp CRISPR-Cas9 system.",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "de63a161a9539931f834908477fb5ad1",
"text": "Network function virtualization introduces additional complexity for network management through the use of virtualization environments. The amount of managed data and the operational complexity increases, which makes service assurance and failure recovery harder to realize. In response to this challenge, the paper proposes a distributed management function, called virtualized network management function (vNMF), to detect failures related to virtualized services. vNMF detects the failures by monitoring physical-layer statistics that are processed with a self-organizing map algorithm. Experimental results show that memory leaks and network congestion failures can be successfully detected and that and the accuracy of failure detection can be significantly improved compared to common k-means clustering.",
"title": ""
},
{
"docid": "5c40b6fadf2f8f4b39c7adf1e894e600",
"text": "Monitoring the flow of traffic along network paths is essential for SDN programming and troubleshooting. For example, traffic engineering requires measuring the ingress-egress traffic matrix; debugging a congested link requires determining the set of sources sending traffic through that link; and locating a faulty device might involve detecting how far along a path the traffic makes progress. Past path-based monitoring systems operate by diverting packets to collectors that perform \"after-the-fact\" analysis, at the expense of large data-collection overhead. In this paper, we show how to do more efficient \"during-the-fact\" analysis. We introduce a query language that allows each SDN application to specify queries independently of the forwarding state or the queries of other applications. The queries use a regular-expression-based path language that includes SQL-like \"groupby\" constructs for count aggregation. We track the packet trajectory directly on the data plane by converting the regular expressions into an automaton, and tagging the automaton state (i.e., the path prefix) in each packet as it progresses through the network. The SDN policies that implement the path queries can be combined with arbitrary packet-forwarding policies supplied by other elements of the SDN platform. A preliminary evaluation of our prototype shows that our \"during-the-fact\" strategy reduces data-collection overhead over \"after-the-fact\" strategies.",
"title": ""
},
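The path-query record above compiles regular-expression path queries into an automaton whose state is carried as a tag in each packet. The sketch below is a deliberately simplified, hypothetical version of that idea for "eventually p1, then eventually p2" queries; the real system compiles full regular expressions and groupby aggregations into data-plane rules:

```python
class PathQueryAutomaton:
    """Toy automaton for a path query given as a list of per-hop predicates.
    The automaton state travels with the packet as a tag, so each switch only
    needs a local transition rule; reaching the final state means the packet's
    trajectory matched the query."""

    def __init__(self, predicates):
        self.predicates = predicates          # predicates[i](switch, packet) -> bool
        self.accept = len(predicates)

    def step(self, state, switch, packet):
        if state < self.accept and self.predicates[state](switch, packet):
            return state + 1                  # advance to the next stage of the query
        return state                          # otherwise keep waiting at this stage


# Example: packets that traverse s1 and later traverse s3.
query = PathQueryAutomaton([lambda sw, pkt: sw == "s1",
                            lambda sw, pkt: sw == "s3"])
tag = 0
for hop in ["s1", "s2", "s3"]:                # trajectory of one packet
    tag = query.step(tag, hop, packet={})
print("matched" if tag == query.accept else "not matched")   # -> matched
```

Carrying the state as a tag is what makes the analysis "during the fact": no packets have to be diverted to an off-path collector to reconstruct trajectories afterwards.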
{
"docid": "0499618380bc33d376160a770683e807",
"text": "As multicore and manycore processor architectures are emerging and the core counts per chip continue to increase, it is important to evaluate and understand the performance and scalability of Parallel Discrete Event Simulation (PDES) on these platforms. Most existing architectures are still limited to a modest number of cores, feature simple designs and do not exhibit heterogeneity, making it impossible to perform comprehensive analysis and evaluations of PDES on these platforms. Instead, in this paper we evaluate PDES using a full-system cycle-accurate simulator of a multicore processor and memory subsystem. With this approach, it is possible to flexibly configure the simulator and perform exploration of the impact of architecture design choices on the performance of PDES. In particular, we answer the following four questions with respect to PDES performance and scalability: (1) For the same total chip area, what is the best design point in terms of the number of cores and the size of the on-chip cache? (2) What is the impact of using in-order vs. out-of-order cores? (3) What is the impact of a heterogeneous system with a mix of in-order and out-of-order cores? (4) What is the impact of object partitioning on PDES performance in heterogeneous systems? To answer these questions, we use MARSSx86 simulator for evaluating performance, and rely on Cacti and McPAT tools to derive the area and latency estimates for cores and caches.",
"title": ""
},
{
"docid": "5a601e08824185bafeb94ac432b6e92e",
"text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",
"title": ""
},
{
"docid": "e58882a41c4335caf957105df192edc5",
"text": "Credit card fraud is a serious problem in financial services. Billions of dollars are lost due to credit card fraud every year. There is a lack of research studies on analyzing real-world credit card data owing to confidentiality issues. In this paper, machine learning algorithms are used to detect credit card fraud. Standard models are first used. Then, hybrid methods which use AdaBoost and majority voting methods are applied. To evaluate the model efficacy, a publicly available credit card data set is used. Then, a real-world credit card data set from a financial institution is analyzed. In addition, noise is added to the data samples to further assess the robustness of the algorithms. The experimental results positively indicate that the majority voting method achieves good accuracy rates in detecting fraud cases in credit cards.",
"title": ""
},
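The fraud-detection record above combines standard classifiers with AdaBoost and majority voting. A hedged scikit-learn sketch of that setup on synthetic, highly imbalanced data (the actual study used a public benchmark and a proprietary set of card transactions) is shown below:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for card transactions: ~2% of samples are labelled as fraud.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Majority ("hard") voting over AdaBoost and two other standard classifiers.
voting = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier(n_estimators=200, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",
)
voting.fit(X_train, y_train)
print(classification_report(y_test, voting.predict(X_test)))
```

Robustness to label noise, as evaluated in the abstract, can be probed by randomly flipping a fraction of y_train before fitting and re-running the report.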
{
"docid": "3d5bbe4dcdc3ad787e57583f7b621e36",
"text": "A miniaturized antenna employing a negative index metamaterial with modified split-ring resonator (SRR) and capacitance-loaded strip (CLS) unit cells is presented for Ultra wideband (UWB) microwave imaging applications. Four left-handed (LH) metamaterial (MTM) unit cells are located along one axis of the antenna as the radiating element. Each left-handed metamaterial unit cell combines a modified split-ring resonator (SRR) with a capacitance-loaded strip (CLS) to obtain a design architecture that simultaneously exhibits both negative permittivity and negative permeability, which ensures a stable negative refractive index to improve the antenna performance for microwave imaging. The antenna structure, with dimension of 16 × 21 × 1.6 mm³, is printed on a low dielectric FR4 material with a slotted ground plane and a microstrip feed. The measured reflection coefficient demonstrates that this antenna attains 114.5% bandwidth covering the frequency band of 3.4-12.5 GHz for a voltage standing wave ratio of less than 2 with a maximum gain of 5.16 dBi at 10.15 GHz. There is a stable harmony between the simulated and measured results that indicate improved nearly omni-directional radiation characteristics within the operational frequency band. The stable surface current distribution, negative refractive index characteristic, considerable gain and radiation properties make this proposed negative index metamaterial antenna optimal for UWB microwave imaging applications.",
"title": ""
},
{
"docid": "406e06e00799733c517aff88c9c85e0b",
"text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.",
"title": ""
},
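The record above replaces the nuclear norm with an arctangent surrogate of the rank function. The numpy snippet below illustrates the qualitative difference on a low-rank matrix; the exact penalty and scaling used in the paper may differ, so treat the gamma parameter and the 2/pi normalization as assumptions:

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values: large singular values dominate the penalty."""
    return np.sum(np.linalg.svd(X, compute_uv=False))

def arctan_rank(X, gamma=1.0):
    """Arctangent surrogate of rank: each singular value contributes a value in
    [0, 1) that saturates quickly, so large singular values are not over-penalized
    the way they are by the nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(np.arctan(s / gamma)) * (2.0 / np.pi)

rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank-5 matrix
print("true rank:", np.linalg.matrix_rank(L))
print("nuclear norm:", round(nuclear_norm(L), 2))
print("arctan surrogate:", round(arctan_rank(L), 2))   # close to the true rank
```

In the paper this surrogate sits inside an ALM-style optimization for subspace clustering; the snippet only illustrates why the surrogate tracks the rank more closely than the nuclear norm.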
{
"docid": "cef4c47b512eb4be7dcadcee35f0b2ca",
"text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.",
"title": ""
},
{
"docid": "986a0b910a4674b3c4bf92a668780dd6",
"text": "One of the most important attributes of the polymerase chain reaction (PCR) is its exquisite sensitivity. However, the high sensitivity of PCR also renders it prone to falsepositive results because of, for example, exogenous contamination. Good laboratory practice and specific anti-contamination strategies are essential to minimize the chance of contamination. Some of these strategies, for example, physical separation of the areas for the handling samples and PCR products, may need to be taken into consideration during the establishment of a laboratory. In this chapter, different strategies for the detection, avoidance, and elimination of PCR contamination will be discussed.",
"title": ""
}
] |
scidocsrr
|
530f3888d99b1b7dd8a7446b3dfabb97
|
Requirements and languages for the semantic representation of manufacturing systems
|
[
{
"docid": "2464b1f28815b6f502f06ce6b45ef8ed",
"text": "In this paper we review and compare the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them. Ontology technology is nowadays mature enough: many methodologies, tools and languages are already available. The future work in this field should be driven towards the creation of a common integrated workbench for ontology developers to facilitate ontology development, exchange, evaluation, evolution and management, to provide methodological support for these tasks, and translations to and from different ontology languages. This workbench should not be created from scratch, but instead integrating the technology components that are currently available. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "204df6c32bde81851ebdb0a0b4d18b93",
"text": "Language experience systematically constrains perception of speech contrasts that deviate phonologically and/or phonetically from those of the listener’s native language. These effects are most dramatic in adults, but begin to emerge in infancy and undergo further development through at least early childhood. The central question addressed here is: How do nonnative speech perception findings bear on phonological and phonetic aspects of second language (L2) perceptual learning? A frequent assumption has been that nonnative speech perception can also account for the relative difficulties that late learners have with specific L2 segments and contrasts. However, evaluation of this assumption must take into account the fact that models of nonnative speech perception such as the Perceptual Assimilation Model (PAM) have focused primarily on naïve listeners, whereas models of L2 speech acquisition such as the Speech Learning Model (SLM) have focused on experienced listeners. This chapter probes the assumption that L2 perceptual learning is determined by nonnative speech perception principles, by considering the commonalities and complementarities between inexperienced listeners and those learning an L2, as viewed from PAM and SLM. Among the issues examined are how language learning may affect perception of phonetic vs. phonological information, how monolingual vs. multiple language experience may impact perception, and what these may imply for attunement of speech perception to changes in the listener’s language environment. Commonalities and complementarities 3",
"title": ""
},
{
"docid": "702df543119d648be859233bfa2b5d03",
"text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8230003e8be37867e0e4fc7320e24448",
"text": "This document was approved as policy of the American Psychological Association (APA) by the APA Council of Representatives in August, 2002. This document was drafted by a joint Task Force of APA Divisions 17 (Counseling Psychology) and 45 (The Society for the Psychological Study of Ethnic Minority Issues). These guidelines have been in the process of development for 22 years, so many individuals and groups require acknowledgement. The Divisions 17/45 writing team for the present document included Nadya Fouad, PhD, Co–Chair, Patricia Arredondo, EdD, Co– Chair, Michael D'Andrea, EdD and Allen Ivey, EdD. These guidelines build on work related to multicultural counseling competencies by Division 17 (Sue et al., 1982) and the Association of Multicultural Counseling and Development (Arredondo et al., 1996; Sue, Arredondo, & McDavis, 1992). The Task Force acknowledges Allen Ivey, EdD, Thomas Parham, PhD, and Derald Wing Sue, PhD for their leadership related to the work on competencies. The Divisions 17/45 writing team for these guidelines was assisted in reviewing the relevant literature by Rod Goodyear, PhD, Jeffrey S. Mio, PhD, Ruperto (Toti) Perez, PhD, William Parham, PhD, and Derald Wing Sue, PhD. Additional writing contributions came from Gail Hackett, PhD, Jeanne Manese, PhD, Louise Douce, PhD, James Croteau, PhD, Janet Helms, PhD, Sally Horwatt, PhD, Kathleen Boggs, PhD, Gerald Stone, PhD, and Kathleen Bieschke, PhD. Editorial contributions were provided by Nancy Downing Hansen, PhD, Patricia Perez, Tiffany Rice, and Dan Rosen. The Task Force is grateful for the active support and contributions of a series of presidents of APA Divisions 17, 35, and 45, including Rosie Bingham, PhD, Jean Carter, PhD, Lisa Porche Burke, PhD, Gerald Stone, PhD, Joseph Trimble, PhD, Melba Vasquez, PhD, and Jan Yoder, PhD. Other individuals who contributed through their advocacy include Guillermo Bernal, PhD, Robert Carter, PhD, J. Manuel Casas, PhD, Don Pope–Davis, PhD, Linda Forrest, PhD, Margaret Jensen, PhD, Teresa LaFromboise, PhD, Joseph G. Ponterotto, PhD, and Ena Vazquez Nuttall, EdD.",
"title": ""
},
{
"docid": "1314f4c6bafefd229f2a8b192ba881f7",
"text": "Face recognition is an area that has attracted a l ot of interest. Much of the research in this field was conducted using visible images. With visible cameras the recognition is prone to errors due to illumination changes. To avoid the problems encountered in the visible spectrum many authors ha ve proposed the use of infrared. In this paper we give an overview of the state of the art in face recognition using infrared images. Emphasis is given to more recent works. A growing fi eld n this area is multimodal fusion; work conducted in this field is also presented in th is paper and publicly available Infrared face image databases are introduced.",
"title": ""
},
{
"docid": "e90755afe850d597ad7b3f4b7e590b66",
"text": "Privacy is considered to be a fundamental human right (Movius and Krup, 2009). Around the world this has led to a large amount of legislation in the area of privacy. Nearly all national governments have imposed local privacy legislation. In the United States several states have imposed their own privacy legislation. In order to maintain a manageable scope this paper only addresses European Union wide and federal United States laws. In addition several US industry (self) regulations are also considered. Privacy regulations in emerging technologies are surrounded by uncertainty. This paper aims to clarify the uncertainty relating to privacy regulations with respect to Cloud Computing and to identify the main open issues that need to be addressed for further research. This paper is based on existing literature and a series of interviews and questionnaires with various Cloud Service Providers (CSPs) that have been performed for the first author’s MSc thesis (Ruiter, 2009). The interviews and questionnaires resulted in data on privacy and security procedures from ten CSPs and while this number is by no means large enough to make any definite conclusions the results are, in our opinion, interesting enough to publish in this paper. The remainder of the paper is organized as follows: the next section gives some basic background on Cloud Computing. Section 3 provides",
"title": ""
},
{
"docid": "2e3cee13657129d26ec236f9d2641e6c",
"text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds",
"title": ""
},
{
"docid": "3fce18c6e1f909b91f95667a563aa194",
"text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchal learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; and 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the retrieved images by the network can have predicting value for the disease condition of the query.",
"title": ""
},
{
"docid": "a91a57326a2d961e24d13b844a3556cf",
"text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.",
"title": ""
},
{
"docid": "1e1706e1bd58a562a43cc7719f433f4f",
"text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.",
"title": ""
},
{
"docid": "d3a8457c4c65652855e734556652c6be",
"text": "We consider a supervised learning problem in which data are revealed sequentially and the goal is to determine what will next be revealed. In the context of this problem, algorithms based on association rules have a distinct advantage over classical statistical and machine learning methods; however, there has not previously been a theoretical foundation established for using association rules in supervised learning. We present two simple algorithms that incorporate association rules, and provide generalization guarantees on these algorithms based on algorithmic stability analysis from statistical learning theory. We include a discussion of the strict minimum support threshold often used in association rule mining, and introduce an “adjusted confidence” measure that provides a weaker minimum support condition that has advantages over the strict minimum support. The paper brings together ideas from statistical learning theory, association rule mining and Bayesian analysis.",
"title": ""
},
{
"docid": "a88c0d45ca7859c050e5e76379f171e6",
"text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.",
"title": ""
},
{
"docid": "5227c1679d83168eeb4d82d9a94a3a0f",
"text": "Driver decisions and behaviors regarding the surrounding traffic are critical to traffic safety. It is important for an intelligent vehicle to understand driver behavior and assist in driving tasks according to their status. In this paper, the consumer range camera Kinect is used to monitor drivers and identify driving tasks in a real vehicle. Specifically, seven common tasks performed by multiple drivers during driving are identified in this paper. The tasks include normal driving, left-, right-, and rear-mirror checking, mobile phone answering, texting using a mobile phone with one or both hands, and the setup of in-vehicle video devices. The first four tasks are considered safe driving tasks, while the other three tasks are regarded as dangerous and distracting tasks. The driver behavior signals collected from the Kinect consist of a color and depth image of the driver inside the vehicle cabin. In addition, 3-D head rotation angles and the upper body (hand and arm at both sides) joint positions are recorded. Then, the importance of these features for behavior recognition is evaluated using random forests and maximal information coefficient methods. Next, a feedforward neural network (FFNN) is used to identify the seven tasks. Finally, the model performance for task recognition is evaluated with different features (body only, head only, and combined). The final detection result for the seven driving tasks among five participants achieved an average of greater than 80% accuracy, and the FFNN tasks detector is proved to be an efficient model that can be implemented for real-time driver distraction and dangerous behavior recognition.",
"title": ""
},
{
"docid": "222b853f23cbcea9794c83c1471273b8",
"text": "Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.",
"title": ""
},
{
"docid": "84f1cdf2729e206bf56d336e0c09d9d9",
"text": "Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net [30] for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO [20], DeepFashion [21, 23], shoes [43], Market-1501 [47] and handbags [49] the approach demonstrates significant improvements over the state-of-the-art.",
"title": ""
},
{
"docid": "ccd5f02b97643b3c724608a4e4a67fdb",
"text": "Modular robotic systems that integrate distally with commercially available endoscopic equipment have the potential to improve the standard-of-care in therapeutic endoscopy by granting clinicians with capabilities not present in commercial tools, such as precision dexterity and feedback sensing. With the desire to integrate both sensing and actuation distally for closed-loop position control in fully deployable, endoscope-based robotic modules, commercial sensor and actuator options that acquiesce to the strict form-factor requirements are sparse or nonexistent. Herein, we describe a proprioceptive angle sensor for potential closed-loop position control applications in distal robotic modules. Fabricated monolithically using printed-circuit MEMS, the sensor employs a kinematic linkage and the principle of light intensity modulation to sense the angle of articulation with a high degree of fidelity. Onboard temperature and environmental irradiance measurements, coupled with linear regression techniques, provide robust angle measurements that are insensitive to environmental disturbances. The sensor is capable of measuring $\\pm$45 degrees of articulation with an RMS error of 0.98 degrees. An ex vivo demonstration shows that the sensor can give real-time proprioceptive feedback when coupled with an actuator module, opening up the possibility of fully distal closed-loop control.",
"title": ""
},
{
"docid": "17797efad4f13f961ed300316eb16b6b",
"text": "Cellular senescence, which has been linked to age-related diseases, occurs during normal aging or as a result of pathological cell stress. Due to their incapacity to proliferate, senescent cells cannot contribute to normal tissue maintenance and tissue repair. Instead, senescent cells disturb the microenvironment by secreting a plethora of bioactive factors that may lead to inflammation, regenerative dysfunction and tumor progression. Recent understanding of stimuli and pathways that induce and maintain cellular senescence offers the possibility to selectively eliminate senescent cells. This novel strategy, which so far has not been tested in humans, has been coined senotherapy or senolysis. In mice, senotherapy proofed to be effective in models of accelerated aging and also during normal chronological aging. Senotherapy prolonged lifespan, rejuvenated the function of bone marrow, muscle and skin progenitor cells, improved vasomotor function and slowed down atherosclerosis progression. While initial studies used genetic approaches for the killing of senescent cells, recent approaches showed similar effects with senolytic drugs. These observations open up exciting possibilities with a great potential for clinical development. However, before the integration of senotherapy into patient care can be considered, we need further research to improve our insight into the safety and efficacy of this strategy during short- and long-term use.",
"title": ""
},
{
"docid": "9f037fd53e6547b689f88fc1c1bed10a",
"text": "We study feature selection as a means to optimize the baseline clickbait detector employed at the Clickbait Challenge 2017 [6]. The challenge’s task is to score the “clickbaitiness” of a given Twitter tweet on a scale from 0 (no clickbait) to 1 (strong clickbait). Unlike most other approaches submitted to the challenge, the baseline approach is based on manual feature engineering and does not compete out of the box with many of the deep learning-based approaches. We show that scaling up feature selection efforts to heuristically identify better-performing feature subsets catapults the performance of the baseline classifier to second rank overall, beating 12 other competing approaches and improving over the baseline performance by 20%. This demonstrates that traditional classification approaches can still keep up with deep learning on this task.",
"title": ""
},
{
"docid": "81fc9abd3e2ad86feff7bd713cff5915",
"text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.",
"title": ""
},
{
"docid": "b4cb716b235ece6ee647fc17b6bb13b6",
"text": "Prof. Jay W. Forrester pioneered industrial Dynamics. It enabled the management scientists to understand well enough the dynamics of change in economics/business systems. Four basic foundations on which System Dynamics rest were discussed. The thought process prevailing and their shortcomings are pointed out and the success story of System Dynamics was explained with the help of Production-Distribution model. System Dynamics graduated to Learning Organisations. Senge with his concept of integrating five distinct disciplines of Systems Thinking, Personal Mastery, Mental Models, Shared Vision and Team Learning succeeded in bringing forth the System Dynamics to the reach of large number of practitioners and teachers of management. However, Systems Thinking part of the Learning Organisation fails to reach out because it lacks the architecture needed to support it. Richmond provided the much-needed architectural support. It enables the mapping language to be economical, consistent and relate to the dynamic behaviour of the system. Progression from Industrial Dynamics to Systems Thinking has been slow due to different postures taken by the professionals. It is suggested that Systems Thinking has a lot to adopt from different disciplines and should celebrate synergies and avail cross-fertilisation or opportunities. Systems Thinking is transparent and can seamlessly leverage the way the business is performed. ★ A. K. Rao is Member of Faculty at Administrative Staff College of India, Bellavista, Hyderabad 500 082, India. E-mail: akrao@ascihyd.org and A.Subash Babu is Professor in Industrial Engineering and Operations Research at Indian Institute of Technology, Bombay 400 076, India E-mail: subash@me.iitb.ernet.in Industrial Dynamics to Systems Thinking A.K.Rao & A. Subash Babu Introduction: In the year 1958, the first words penned down by the pioneer of System Dynamics (then Industrial Dynamics) Jay W. Forrester were “Management is on the verge of a major breakthrough in understanding how industrial company success depends on the interaction between the flows of information, materials, manpower and capital equipment”. The article titled “Industrial Dynamics: A Major Breakthrough for Decision Makers” in Harvard Business Review attracted attention of management scientists. Several controversies arose when further articles appeared subsequently. Today, 40 years since the first article in the field of System Dynamics appeared in print, the progress when evaluated evokes mixed response. If it were a major breakthrough for decisionmakers, then why did it not proliferate into the curriculum of business schools as common as that of Principles of Management or Business Statistics or any other standard subjects of study? The purpose of this article is to critically review three seminal works in the field of System Dynamics: Industrial Dynamics by Jay W. Forrester (1960), Fifth Discipline: The Art and Practice of Learning Organisations by Peter Senge (1990) and Systems Thinking by Barry Richmond (1997) and to understand the pitfalls in reaching out to the large body of academia and practising managers. Forrester in his work raised a few fundamental issues way back in early 60’s that most of the corporate managers are able to comprehend only now. He clearly answered the question on what is the next frontier of our knowledge. The great advances and opportunities in the future he predicted would appear in the field of management and economics. 
The shift from the technical to the social front was evidenced in the way global competition and the rules of the game changed. The test of leadership is to show the way to economic development and stability. The leading question therefore is whether we understand well enough the dynamics of change in economic/business systems to pioneer this new frontier. Forrester offered the much-needed solution: System Dynamics. The foundations for a body of knowledge called system dynamics were the concepts of servomechanisms, controlled experiments, digital computing and a better understanding of control theory. The servomechanism theory of information feedback evolved during World War II. Till then, time delays, amplification effects and the structure of the system were taken for granted. The realisation that the interaction between components is more crucial to the system behaviour than the components themselves is of recent origin. The thesis arising out of the study of information feedback led to the conclusion that information-feedback systems are all-pervasive in nature. Such a system exists whenever the environment changes and leads to a decision that results in action, which in turn affects the environment. This leads us to an axiom that everything that we do as an individual, as an organisation, as an industry, as a nation, or even as a society, irrespective of the divisibility of the unit, is done in the context of an information-feedback system. This is the bedrock philosophy of system dynamics. The second foundation is the realisation of the importance of the experimental approach to the understanding of system dynamics. The standard accepted format of research study, going from the general analytical solution to the particular special case, was reversed in the empirical approach. In this format a number of particular situations were studied and from these generalisations were inferred. This is the basis for learning. The activity basis for learning is experience. Some of these generalisations were named by Senge (1990) as Nature’s Templates. The third foundation for the progress of system dynamics was digital computing machines. By 1945, systems of twenty variables were difficult to handle. By 1955, the digital computer appeared, opening the way to the simulation of systems far beyond the capability of analogue machines. Models of 2000 and more variables, without any restrictions on representing non-linear phenomena, could easily be simulated on a digital computer at costs within the reach of academia and research organisations. The simulation of information-feedback models of important managerial and economic questions is an area demanding high efficiency. A cost reduction factor of ten thousand or more in computing infrastructure placed one in a completely different environment than that which existed a few years ago. The fourth foundation was a better appreciation of policy and decision. There is an orderly basis that prescribes most of our present managerial decisions. These decisions are not entirely ad hoc but are strongly conditioned by the environment. This being so, policies governing decisions can be laid down and their effect on economic/business behaviour can be studied. Forrester’s Postulates and Applications: The idea that economic and industrial systems could be depicted through linear analysis was the major stumbling block to beginning to think dynamically. 
Most policy analysis goes on to define the problem at hand as narrowly as possible in the name of attaining the objective of being specific and crisp. On the one hand, this makes the mathematics of such analysis tractable, but unfortunately it ignores the fact that almost every factor in economic or industrial systems is non-linear. Much of the important behaviour of the system is the direct manifestation of the nonlinear characteristics of the system components. Social systems are assumed to be inherently stable and to constantly seek to achieve the equilibrium status. While it is the system’s tendency to reach the equilibrium in its inanimate consideration, the players in the system keep working towards disturbing the equilibrium conditions. A perfect market is the stated goal of the simple economic system, with its most important components, supply and demand, trying to equal each other in the long run. But during this period, the players in the market disturb the initial conditions by several means such as inducing technology, introducing substitutes, differentiating the products, etc., which makes the seemingly achievable perfect market an impossible dream. Therefore, the notion of sustainable competitive advantage is only fleeting in nature. The analysis used for solving market problems with an assumption of stable systems is thus not valid. There appears to be ample evidence that much of our industrial and economic systems exhibit behaviours characterised by instability. Mathematical economics and management science have often been more closely allied to formal mathematics than to economics or management. The difference of orientation is glaringly evident on comparison of business literature with publications on management science. Further evidence of the bias towards mathematical rather than managerial motivation is seen in the preoccupation with optimum solutions. In linear analysis, the first action performed is to define the objective function, thus specifying the purpose of a model of an economic system as its ability to predict specific future action. Further, it is used to validate the model. Models are required to predict the character and the nature of the system in question so that redesign could take place in congruence with the desired state. This is entirely different from, and more useful than, the objective functions, which provide events such as specific future times of peaks or valleys, as in a sales curve. It is a belief that a model must be limited to considering those variables which have generally accepted definitions and objective values attached to them. Many undefined concepts, known as soft variables, are of crucial importance to business systems. Linear models are not capable of capturing these details in the traditional methodology of problem solving. If subjective matters are considered to be of crucial importance to business system behaviour, it must be conceded that they must somehow be incorporated in the model. Therefore, it is necessary to provide legi",
"title": ""
},
{
"docid": "6416eb9235954730b8788b7b744d9e5b",
"text": "This paper presents a machine learning based handover management scheme for LTE to improve the Quality of Experience (QoE) of the user in the presence of obstacles. We show that, in this scenario, a state-of-the-art handover algorithm is unable to select the appropriate target cell for handover, since it always selects the target cell with the strongest signal without taking into account the perceived QoE of the user after the handover. In contrast, our scheme learns from past experience how the QoE of the user is affected when the handover was done to a certain eNB. Our performance evaluation shows that the proposed scheme substantially improves the number of completed downloads and the average download time compared to state-of-the-art. Furthermore, its performance is close to an optimal approach in the coverage region affected by an obstacle.",
"title": ""
}
] |
scidocsrr
|
d509b37e02c7bac38510425ee7e46dd1
|
Appearance-based gaze estimation in the wild
|
[
{
"docid": "3ce39c23ef5be4dd8fd10152ded95a6e",
"text": "Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.",
"title": ""
}
] |
[
{
"docid": "4650411615ad68be9596e5de3c0613f1",
"text": "Based on the limitations of traditional English class, an English listening class was designed by Edmodo platform through making use of the advantages of flipped classroom. On this class, students will carry out online autonomous learning before class, teacher will guide students learning collaboratively in class, as well as after-school reflection and summary will be realized. By analyzing teaching effect on flipped classroom, it can provide reference and teaching model for English listening classes in local universities.",
"title": ""
},
{
"docid": "e78d88143d6a83ab5f43f06e406e5326",
"text": "The mother–infant bond provides the foundation for the infant's future mental health and adaptation and depends on the provision of species-typical maternal behaviors that are supported by neuroendocrine and motivation-affective neural systems. Animal research has demonstrated that natural variations in patterns of maternal care chart discrete profiles of maternal brain–behavior relationships that uniquely shape the infant's lifetime capacities for stress regulation and social affiliation. Such patterns of maternal care are mediated by the neuropeptide Oxytocin and by stress- and reward-related neural systems. Human studies have similarly shown that maternal synchrony—the coordination of maternal behavior with infant signals—and intrusiveness—the excessive expression of maternal behavior—describe distinct and stable maternal styles that bear long-term consequences for infant well-being. To integrate brain, hormones, and behavior in the study of maternal–infant bonding, we examined the fMRI responses of synchronous vs intrusive mothers to dynamic, ecologically valid infant videos and their correlations with plasma Oxytocin. In all, 23 mothers were videotaped at home interacting with their infants and plasma OT assayed. Sessions were micro-coded for synchrony and intrusiveness. Mothers were scanned while observing several own and standard infant-related vignettes. Synchronous mothers showed greater activations in the left nucleus accumbens (NAcc) and intrusive mothers exhibited higher activations in the right amygdala. Functional connectivity analysis revealed that among synchronous mothers, left NAcc and right amygdala were functionally correlated with emotion modulation, theory-of-mind, and empathy networks. Among intrusive mothers, left NAcc and right amygdala were functionally correlated with pro-action areas. Sorting points into neighborhood (SPIN) analysis demonstrated that in the synchronous group, left NAcc and right amygdala activations showed clearer organization across time, whereas among intrusive mothers, activations of these nuclei exhibited greater cross-time disorganization. Correlations between Oxytocin with left NAcc and right amygdala activations were found only in the synchronous group. Well-adapted parenting appears to be underlay by reward-related motivational mechanisms, temporal organization, and affiliation hormones, whereas anxious parenting is likely mediated by stress-related mechanisms and greater neural disorganization. Assessing the integration of motivation and social networks into unified neural activity that reflects variations in patterns of parental care may prove useful for the study of optimal vs high-risk parenting.",
"title": ""
},
{
"docid": "6e30761b695e22a29f98a051dbccac6f",
"text": "This paper explores the use of clickthrough data for query spelling correction. First, large amounts of query-correction pairs are derived by analyzing users' query reformulation behavior encoded in the clickthrough data. Then, a phrase-based error model that accounts for the transformation probability between multi-term phrases is trained and integrated into a query speller system. Experiments are carried out on a human-labeled data set. Results show that the system using the phrase-based error model outperforms significantly its baseline systems.",
"title": ""
},
{
"docid": "ed72c4d4bd7b4e063ebddf75127bb7db",
"text": "Microfabrication of graphene devices used in many experimental studies currently relies on the fact that graphene crystallites can be visualized using optical microscopy if prepared on top of Si wafers with a certain thickness of SiO2. The authors study graphene’s visibility and show that it depends strongly on both thickness of SiO2 and light wavelength. They have found that by using monochromatic illumination, graphene can be isolated for any SiO2 thickness, albeit 300 nm the current standard and, especially, 100 nm are most suitable for its visual detection. By using a Fresnel-law-based model, they quantitatively describe the experimental data. © 2007 American Institute of Physics. DOI: 10.1063/1.2768624",
"title": ""
},
{
"docid": "c5231a58c294d8580723070e638d3f44",
"text": "This study employed Aaker's brand personality framework to empirically investigate the personality of denim jeans brands and to examine the impact of brand personality on consumer satisfaction and brand loyalty based on data collected from 474 college students. Results revealed that the personality of denim jeans brands can be described in six dimensions with 51 personality traits: attractiveness, practicality, ruggedness, flexibility, friendliness, and honesty. The results indicated that consumers associate particular brand personality dimensions with denim jeans brands. Also, the various dimensions of brand personality have different effects on consumer satisfaction and consumer brand loyalty.",
"title": ""
},
{
"docid": "11afe3e3e94ca2ec411f38bf1b0b2e82",
"text": "The requirements engineering program at Siemens Corporate Research has been involved with process improvement, training and project execution across many of the Siemens operating companies. We have been able to observe and assist with process improvement in mainly global software development efforts. Other researchers have reported extensively on various aspects of distributed requirements engineering, but issues specific to organizational structure have not been well categorized. Our experience has been that organizational and other management issues can overshadow technical problems caused by globalization. This paper describes some of the different organizational structures we have encountered, the problems introduced into requirements engineering processes by these structures, and techniques that were effective in mitigating some of the negative effects of global software development.",
"title": ""
},
{
"docid": "c3f81c5e4b162564b15be399b2d24750",
"text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.",
"title": ""
},
{
"docid": "fe2594f98faa2ceda8b2c25bddc722d1",
"text": "This study aimed at investigating the effect of a suggested EFL Flipped Classroom Teaching Model (EFL-FCTM) on graduate students' English higher-order thinking skills (HOTS), engagement and satisfaction. Also, it investigated the relationship between higher-order thinking skills, engagement and satisfaction. The sample comprised (67) graduate female students; an experimental group (N=33) and a control group (N=34), studying an English course at Taif University, KSA. The study used mixed method design; a pre-post HOTS test was carried out and two 5-Likert scale questionnaires had been designed and distributed; an engagement scale and a satisfaction scale. The findings of the study revealed statistically significant differences between the two group in HOTS in favor of the experimental group. Also, there was significant difference between the pre and post administration of the engagement scale in favor of the post administration. Moreover, students satisfaction on the (EFL-FCTM) was high. Finally, there were high significant relationships between HOTS and student engagement, HOTS and satisfaction and between student engagement and satisfaction.",
"title": ""
},
{
"docid": "ab2e9a230c9aeec350dff6e3d239c7d8",
"text": "Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this paper, we aim to endow state of the art face recognition SDKs with robustness to facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression image from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. Experiments using rectified and neutralized view with a standard commercial FR SDK on two 2D face databases, namely Multi-PIE and AR, show significant performance improvement of the commercial SDK to deal with expression and pose variations and demonstrates the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
},
{
"docid": "64cefd949f61afe81fbbb9ca1159dd4a",
"text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR",
"title": ""
},
{
"docid": "4b09424630d5e27f1ed32b5798674595",
"text": "Tampering detection has been increasingly attracting attention in the field of digital forensics. As a popular nonlinear smoothing filter, median filtering is often used as a post-processing operation after image forgeries such as copy-paste forgery (including copy-move and image splicing), which is of particular interest to researchers. To implement the blind detection of median filtering, this paper proposes a novel approach based on a frequency-domain feature coined the annular accumulated points (AAP). Experimental results obtained on widely used databases, which consists of various real-world photos, show that the proposed method achieves outstanding performance in distinguishing median-filtered images from original images or images that have undergone other types of manipulations, especially in the scenarios of low resolution and JPEG compression with a low quality factor. Moreover, our approach remains reliable even when the feature dimension decreases to 5, which is significant to save the computing time required for classification, demonstrating its great advantage to be applied in real-time processing of big multimedia data.",
"title": ""
},
{
"docid": "6f4fe7bc805c4b635d6c201d8ea1f53c",
"text": "In this paper we focus on the automatic identification of bird species from their audio recorded song. Bird monitoring is important to perform several tasks, such as to evaluate the quality of their living environment or to monitor dangerous situations to planes caused by birds near airports. We deal with the bird species identification problem using signal processing and machine learning techniques. First, features are extracted from the bird recorded songs using specific audio treatment, next the problem is performed according to a classical machine learning scenario, where a labeled database of previously known bird songs are employed to create a decision procedure that is used to predict the species of a new bird song. Experiments are conducted in a dataset of recorded songs of bird species which appear in a specific region. The experimental results compare the performance obtained in different situations, encompassing the complete audio signals, as recorded in the field, and short audio segments (pulses) obtained from the signals by a split procedure. The influence of the number of classes (bird species) in the identification accuracy is also evaluated.",
"title": ""
},
{
"docid": "7696178f143665fa726706e39b133cb8",
"text": "This article describes the essential components of oral health information systems for the analysis of trends in oral disease and the evaluation of oral health programmes at the country, regional and global levels. Standard methodology for the collection of epidemiological data on oral health has been designed by WHO and used by countries worldwide for the surveillance of oral disease and health. Global, regional and national oral health databanks have highlighted the changing patterns of oral disease which primarily reflect changing risk profiles and the implementation of oral health programmes oriented towards disease prevention and health promotion. The WHO Oral Health Country/Area Profile Programme (CAPP) provides data on oral health from countries, as well as programme experiences and ideas targeted to oral health professionals, policy-makers, health planners, researchers and the general public. WHO has developed global and regional oral health databanks for surveillance, and international projects have designed oral health indicators for use in oral health information systems for assessing the quality of oral health care and surveillance systems. Modern oral health information systems are being developed within the framework of the WHO STEPwise approach to surveillance of noncommunicable, chronic disease, and data stored in the WHO Global InfoBase may allow advanced health systems research. Sound knowledge about progress made in prevention of oral and chronic disease and in health promotion may assist countries to implement effective public health programmes to the benefit of the poor and disadvantaged population groups worldwide.",
"title": ""
},
{
"docid": "553de71fcc3e4e6660015632eee751b1",
"text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.",
"title": ""
},
{
"docid": "fa246c15531c6426cccaf4d216dc8375",
"text": "Proboscis lateralis is a rare craniofacial malformation characterized by absence of nasal cavity on one side with a trunk-like nasal appendage protruding from superomedial portion of the ipsilateral orbit. High-resolution computed tomography and magnetic resonance imaging are extremely useful in evaluating this congenital condition and the wide spectrum of associated anomalies occurring in the surrounding anatomical regions and brain. We present a case of proboscis lateralis in a 2-year-old girl with associated ipsilateral sinonasal aplasia, orbital cyst, absent olfactory bulb and olfactory tract. Absence of ipsilateral olfactory pathway in this rare disorder has been documented on high-resolution computed tomography and magnetic resonance imaging by us for the first time in English medical literature.",
"title": ""
},
{
"docid": "fe8d20422454f095c5a14bce3523748d",
"text": "This paper Put forward a glass crack detection algorithm based on digital image processing technology, obtain identification information of glass surface crack image by making use of pre-processing, image segmentation, feature extraction on the glass crack image, calculate the target area and perimeter of the roundness index to judge whether this image with a crack, make use of Visual Basic6.0 programming language to impolder the crack detection system, achieve the function of each part in crack detection process.",
"title": ""
},
{
"docid": "53afafd2fc1087989a975675ff4098d8",
"text": "The sixth generation of IEEE 802.11 wireless local area networks is under developing in the Task Group 802.11ax. One main physical layer (PHY) novel feature in the IEEE 802.11ax amendment is the specification of orthogonal frequency division multiplexing (OFDM) uplink multi-user multiple-input multiple-output (UL MU-MIMO) techniques. A challenge issue to implement UL MU-MIMO in OFDM PHY is the mitigation of the relative carrier frequency offset (CFO), which can cause intercarrier interference and rotation of the constellation of received symbols, and, consequently, degrading the system performance dramatically if it is not properly mitigated. In this paper, we show that a frequency domain CFO estimation and correction scheme implemented at both transmitter (Tx) and receiver (Rx) coupled with pre-compensation approach at the Tx can decrease the negative effects of the relative CFO.",
"title": ""
},
{
"docid": "51da4d5923b30db560227155edd0621d",
"text": "The fifth generation wireless 5G development initiative is based upon 4G, which at present is struggling to meet its performance goals. The comparison between 3G and 4G wireless communication systems in relation to its architecture, speed, frequency band, switching design basis and forward error correction is studied, and were discovered that their performances are still unable to solve the unending problems of poor coverage, bad interconnectivity, poor quality of service and flexibility. An ideal 5G model to accommodate the challenges and shortfalls of 3G and 4G deployments is discussed as well as the significant system improvements on the earlier wireless technologies. The radio channel propagation characteristics for 4G and 5G systems is discussed. Major advantages of 5G network in providing myriads of services to end users personalization, terminal and network heterogeneity, intelligence networking and network convergence among other benefits are highlighted.The significance of the study is evaluated for a fast and effective connection and communication of devices like mobile phones and computers, including the capability of supporting and allowing a highly flexible network connectivity.",
"title": ""
},
{
"docid": "7b552767a37a7d63591471195b2e002b",
"text": "Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.",
"title": ""
}
] |
scidocsrr
|
e1640b20b57f2db83b41db76947416dc
|
Data Mining in the Dark : Darknet Intelligence Automation
|
[
{
"docid": "22bdd2c36ef72da312eb992b17302fbe",
"text": "In this paper, we present an operational system for cyber threat intelligence gathering from various social platforms on the Internet particularly sites on the darknet and deepnet. We focus our attention to collecting information from hacker forum discussions and marketplaces offering products and services focusing on malicious hacking. We have developed an operational system for obtaining information from these sites for the purposes of identifying emerging cyber threats. Currently, this system collects on average 305 high-quality cyber threat warnings each week. These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack. This provides a significant service to cyber-defenders. The system is significantly augmented through the use of various data mining and machine learning techniques. With the use of machine learning models, we are able to recall 92% of products in marketplaces and 80% of discussions on forums relating to malicious hacking with high precision. We perform preliminary analysis on the data collected, demonstrating its application to aid a security expert for better threat analysis.",
"title": ""
},
{
"docid": "6d31ee4b0ad91e6500c5b8c7e3eaa0ca",
"text": "A host of tools and techniques are now available for data mining on the Internet. The explosion in social media usage and people reporting brings a new range of problems related to trust and credibility. Traditional media monitoring systems have now reached such sophistication that real time situation monitoring is possible. The challenge though is deciding what reports to believe, how to index them and how to process the data. Vested interests allow groups to exploit both social media and traditional media reports for propaganda purposes. The importance of collecting reports from all sides in a conflict and of balancing claims and counter-claims becomes more important as ease of publishing increases. Today the challenge is no longer accessing open source information but in the tagging, indexing, archiving and analysis of the information. This requires the development of general-purpose and domain specific knowledge bases. Intelligence tools are needed which allow an analyst to rapidly access relevant data covering an evolving situation, ranking sources covering both facts and opinions.",
"title": ""
}
] |
[
{
"docid": "a854ee8cf82c4bd107e93ed0e70ee543",
"text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.",
"title": ""
},
{
"docid": "bc6877a5a83531a794ac1c8f7a4c7362",
"text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.",
"title": ""
},
{
"docid": "a33486dfec199cd51e885d6163082a96",
"text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.",
"title": ""
},
{
"docid": "7394f3000da8af0d4a2b33fed4f05264",
"text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.",
"title": ""
},
{
"docid": "2216f853543186e73b1149bb5a0de297",
"text": "Scaffolds have been utilized in tissue regeneration to facilitate the formation and maturation of new tissues or organs where a balance between temporary mechanical support and mass transport (degradation and cell growth) is ideally achieved. Polymers have been widely chosen as tissue scaffolding material having a good combination of biodegradability, biocompatibility, and porous structure. Metals that can degrade in physiological environment, namely, biodegradable metals, are proposed as potential materials for hard tissue scaffolding where biodegradable polymers are often considered as having poor mechanical properties. Biodegradable metal scaffolds have showed interesting mechanical property that was close to that of human bone with tailored degradation behaviour. The current promising fabrication technique for making scaffolds, such as computation-aided solid free-form method, can be easily applied to metals. With further optimization in topologically ordered porosity design exploiting material property and fabrication technique, porous biodegradable metals could be the potential materials for making hard tissue scaffolds.",
"title": ""
},
{
"docid": "501f9cb511e820c881c389171487f0b4",
"text": "An omnidirectional circularly polarized (CP) antenna array is proposed. The antenna array is composed of four identical CP antenna elements and one parallel strip-line feeding network. Each of CP antenna elements comprises a dipole and a zero-phase-shift (ZPS) line loop. The in-phase fed dipole and the ZPS line loop generate vertically and horizontally polarized omnidirectional radiation, respectively. Furthermore, the vertically polarized dipole is positioned in the center of the horizontally polarized ZPS line loop. The size of the loop is designed such that a 90° phase difference is realized between the two orthogonal components because of the spatial difference and, therefore, generates CP omnidirectional radiation. A 1 × 4 antenna array at 900 MHz is prototyped and targeted to ultra-high frequency (UHF) radio frequency identification (RFID) applications. The measurement results show that the antenna array achieves a 10-dB return loss over a frequency range of 900-935 MHz and 3-dB axial-ratio (AR) from 890 to 930 MHz. At the frequency of 915 MHz, the measured maximum AR of 1.53 dB, maximum gain of 5.4 dBic, and an omnidirectionality of ±1 dB are achieved.",
"title": ""
},
{
"docid": "58d19a5460ce1f830f7a5e2cb1c5ebca",
"text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoderdecoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.",
"title": ""
},
{
"docid": "54bdabea83e86d21213801c990c60f4d",
"text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.",
"title": ""
},
{
"docid": "b5babae9b9bcae4f87f5fe02459936de",
"text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agent. The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2)is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.",
"title": ""
},
{
"docid": "19b8acf4e5c68842a02e3250c346d09b",
"text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5 % and 25 % for the Sand X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤—21 dB for the S-band and ≤—20 dB for the X-band.",
"title": ""
},
{
"docid": "fe903498e0c3345d7e5ebc8bf3407c2f",
"text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.",
"title": ""
},
{
"docid": "07a6de40826f4c5bab4a8b8c51aba080",
"text": "Prior studies on alternative work schedules have focused primarily on the main effects of compressed work weeks and shift work on individual outcomes. This study explores the combined effects of alternative and preferred work schedules on nurses' satisfaction with their work schedules, perceived patient care quality, and interferences with their personal lives.",
"title": ""
},
{
"docid": "62ff5888ad0c8065097603da8ff79cd6",
"text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.",
"title": ""
},
{
"docid": "3910a3317ea9ff4ea6c621e562b1accc",
"text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.",
"title": ""
},
{
"docid": "263c04402cfe80649b1d3f4a8578e99b",
"text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.",
"title": ""
},
{
"docid": "06755f8680ee8b43e0b3d512b4435de4",
"text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.",
"title": ""
},
{
"docid": "cc9f566eb8ef891d76c1c4eee7e22d47",
"text": "In this study, a hybrid artificial intelligent (AI) system integrating neural network and expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factor and to provide forex trading decision suggestions for traders incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments",
"title": ""
},
{
"docid": "3b5340113d583b138834119614046151",
"text": "This paper presents the recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems. Thus, this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math></inline-formula> and a performance indicator <inline-formula><tex-math notation=\"LaTeX\">$\\rho$ </tex-math></inline-formula> (the ratio of <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math> </inline-formula> with respect to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators showed that stability-guaranteed nonlinear model based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to these challenges in hydraulic robotic systems and discusses their reciprocal contradiction. Potential solutions to improve the system energy efficiency without control performance deterioration are discussed. Finally, for hydraulic robotic systems, open problems are defined and future trends are projected.",
"title": ""
},
{
"docid": "3ea021309fd2e729ffced7657e3a6038",
"text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.",
"title": ""
},
{
"docid": "637e73416c1a6412eeeae63e1c73c2c3",
"text": "Disgust, an emotion related to avoiding harmful substances, has been linked to moral judgments in many behavioral studies. However, the fact that participants report feelings of disgust when thinking about feces and a heinous crime does not necessarily indicate that the same mechanisms mediate these reactions. Humans might instead have separate neural and physiological systems guiding aversive behaviors and judgments across different domains. The present interdisciplinary study used functional magnetic resonance imaging (n = 50) and behavioral assessment to investigate the biological homology of pathogen-related and moral disgust. We provide evidence that pathogen-related and sociomoral acts entrain many common as well as unique brain networks. We also investigated whether morality itself is composed of distinct neural and behavioral subdomains. We provide evidence that, despite their tendency to elicit similar ratings of moral wrongness, incestuous and nonsexual immoral acts entrain dramatically separate, while still overlapping, brain networks. These results (i) provide support for the view that the biological response of disgust is intimately tied to immorality, (ii) demonstrate that there are at least three separate domains of disgust, and (iii) suggest strongly that morality, like disgust, is not a unified psychological or neurological phenomenon.",
"title": ""
}
] |
scidocsrr
|
af2ea562f86464b226a770038a6a57b4
|
Automatic Liver Lesion Segmentation Using A Deep Convolutional Neural Network Method
|
[
{
"docid": "257afbcb213cd7c1733bb31fea4aa25d",
"text": "Automatic segmentation of the liver and its lesion is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT abdomen images using cascaded fully convolutional neural networks (CFCNs) and dense 3D conditional random fields (CRFs). We train and cascade two FCNs for a combined segmentation of the liver and its lesions. In the first step, we train a FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions from the predicted liver ROIs of step 1. We refine the segmentations of the CFCN using a dense 3D CRF that accounts for both spatial coherence and appearance. CFCN models were trained in a 2-fold cross-validation on the abdominal CT dataset 3DIRCAD comprising 15 hepatic tumor volumes. Our results show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for liver with computation times below 100s per volume. We experimentally demonstrate the robustness of the proposed method as a decision support system with a high accuracy and speed for usage in daily clinical routine.",
"title": ""
}
] |
[
{
"docid": "848c8ffaa9d58430fbdebd0e9694d531",
"text": "This paper presents an application for studying the death records of WW2 casualties from a prosopograhical perspective, provided by the various local military cemeteries where the dead were buried. The idea is to provide the end user with a global visual map view on the places in which the casualties were buried as well as with a local historical perspective on what happened to the casualties that lay within a particular cemetery of a village or town. Plenty of data exists about the Second World War (WW2), but the data is typically archived in unconnected, isolated silos in different organizations. This makes it difficult to track down, visualize, and study information that is contained within multiple distinct datasets. In our work, this problem is solved using aggregated Linked Open Data provided by the WarSampo Data Service and SPARQL endpoint.",
"title": ""
},
{
"docid": "611fdf1451bdd5c683c5be00f46460b8",
"text": "Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.",
"title": ""
},
{
"docid": "ea2e03fc8e273e9d3627086ce4bd6bde",
"text": "Augmented Reality (AR), a concept where the real word is being enhanced with computer generated objects and text, has evolved and become a popular tool to communicate information through. Research on how the technique can be optimized regarding the technical aspects has been made, but not regarding how typography in three dimensions should be designed and used in AR applications. Therefore this master’s thesis investigates three different design attributes of typography in three dimensions. The three attributes are: typeface style, color, and weight including depth, and how they affect the visibility of the text in an indoor AR environment. A user study was conducted, both with regular users but also with users that were considered experts in the field of typography and design, to investigate differences of the visibility regarding the typography’s design attributes. The result shows noteworthy differences between two pairs of AR simulations containing different typography among the regular users. This along with a slight favoritism of bright colored text against dark colored text, even though no notable different could be seen regarding color alone. Discussions regarding the design attributes of the typography affect the legibility of the text, and what could have been done differently to achieve an even more conclusive result. To summarize this thesis, the objective resulted in design guidelines regarding typography for indoor mobile AR applications. Skapande och användande av 3D-typografi i mobila Augmented Reality-applikationer för inomhusbruk",
"title": ""
},
{
"docid": "58f6247a0958bf0087620921c99103b1",
"text": "This paper addresses an information-theoretic aspect of k-means and spectral clustering. First, we revisit the k-means clustering and show that its objective function is approximately derived from the minimum entropy principle when the Renyi's quadratic entropy is used. Then we present a maximum within-clustering association that is derived using a quadratic distance measure in the framework of minimum entropy principle, which is very similar to a class of spectral clustering algorithms that is based on the eigen-decomposition method.",
"title": ""
},
{
"docid": "0857e32201b675c3e971c6caba8d2087",
"text": "Western tonal music relies on a formal geometric structure that determines distance relationships within a harmonic or tonal space. In functional magnetic resonance imaging experiments, we identified an area in the rostromedial prefrontal cortex that tracks activation in tonal space. Different voxels in this area exhibited selectivity for different keys. Within the same set of consistently activated voxels, the topography of tonality selectivity rearranged itself across scanning sessions. The tonality structure was thus maintained as a dynamic topography in cortical areas known to be at a nexus of cognitive, affective, and mnemonic processing.",
"title": ""
},
{
"docid": "6cb0c739d4cb0b8d59f17d2d37cb5caa",
"text": "In this work, a context-based multisensor system, applied for pedestrian detection in urban environment, is presented. The proposed system comprises three main processing modules: (i) a LIDAR-based module acting as primary object detection, (ii) a module which supplies the system with contextual information obtained from a semantic map of the roads, and (iii) an image-based detection module, using sliding-window detectors, with the role of validating the presence of pedestrians in regions of interest (ROIs) generated by the LIDAR module. A Bayesian strategy is used to combine information from sensors on-board the vehicle (‘local’ information) with information contained in a digital map of the roads (‘global’ information). To support experimental analysis, a multisensor dataset, named Laser and Image Pedestrian Detection dataset (LIPD), is used. The LIPD dataset was collected in an urban environment, at day light conditions, using an electrical vehicle driven at low speed. A down sampling method, using support vectors extracted from multiple linear-SVMs, was used to reduce the cardinality of the training set and, as consequence, to decrease the CPU-time during the training process of image-based classifiers. The performance of the system is evaluated, in terms of true positive rate and false positives per frame, using three image-detectors: a linear-SVM, a SVM-cascade, and a benchmark method. Additionally, experiments are performed to assess the impact of contextual information on the performance of the detection system.",
"title": ""
},
{
"docid": "5132cf4fdbe55a47214f66738599df78",
"text": "Users may strive to formulate an adequate textual query for their information need. Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a novel hierarchical recurrent encoder-decoder architecture that makes possible to account for sequences of previous queries of arbitrary lengths. As a result, our suggestions are sensitive to the order of queries in the context while avoiding data sparsity. Additionally, our model can suggest for rare, or long-tail, queries. The produced suggestions are synthetic and are sampled one word at a time, using computationally cheap decoding techniques. This is in contrast to current synthetic suggestion models relying upon machine learning pipelines and hand-engineered feature sets. Results show that our model outperforms existing context-aware approaches in a next query prediction setting. In addition to query suggestion, our architecture is general enough to be used in a variety of other applications.",
"title": ""
},
{
"docid": "090af7b180f3e9d289d158f8ee385da9",
"text": "Natural medicines were the only option for the prevention and treatment of human diseases for thousands of years. Natural products are important sources for drug development. The amounts of bioactive natural products in natural medicines are always fairly low. Today, it is very crucial to develop effective and selective methods for the extraction and isolation of those bioactive natural products. This paper intends to provide a comprehensive view of a variety of methods used in the extraction and isolation of natural products. This paper also presents the advantage, disadvantage and practical examples of conventional and modern techniques involved in natural products research.",
"title": ""
},
{
"docid": "78e8f84224549b75584c59591a8febef",
"text": "Our goal is to design architectures that retain the groundbreaking performance of Convolutional Neural Networks (CNNs) for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. (e) We further provide additional results for the problem of facial part segmentation. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks.",
"title": ""
},
{
"docid": "c9a18fc3919462cc232b0840a4844ae2",
"text": "Systematic gene expression analyses provide comprehensive information about the transcriptional response to different environmental and developmental conditions. With enough gene expression data points, computational biologists may eventually generate predictive computer models of transcription regulation. Such models will require computational methodologies consistent with the behavior of known biological systems that remain tractable. We represent regulatory relationships between genes as linear coefficients or weights, with the \"net\" regulation influence on a gene's expression being the mathematical summation of the independent regulatory inputs. Test regulatory networks generated with this approach display stable and cyclically stable gene expression levels, consistent with known biological systems. We include variables to model the effect of environmental conditions on transcription regulation and observed various alterations in gene expression patterns in response to environmental input. Finally, we use a derivation of this model system to predict the regulatory network from simulated input/output data sets and find that it accurately predicts all components of the model, even with noisy expression data.",
"title": ""
},
{
"docid": "388101f40ff79f2543b111aad96c4180",
"text": "Based on available literature, ecology and economy of light emitting diode (LED) lights in plant foods production were assessed and compared to high pressure sodium (HPS) and compact fluorescent light (CFL) lamps. The assessment summarises that LEDs are superior compared to other lamp types. LEDs are ideal in luminous efficiency, life span and electricity usage. Mercury, carbon dioxide and heat emissions are also lowest in comparison to HPS and CFL lamps. This indicates that LEDs are indeed economic and eco-friendly lighting devices. The present review indicates also that LEDs have many practical benefits compared to other lamp types. In addition, they are applicable in many purposes in plant foods production. The main focus of the review is the targeted use of LEDs in order to enrich phytochemicals in plants. This is an expedient to massive improvement in production efficiency, since it diminishes the number of plants per phytochemical unit. Consequently, any other production costs (e.g. growing space, water, nutrient and transport) may be reduced markedly. Finally, 24 research articles published between 2013 and 2017 were reviewed for targeted use of LEDs in the specific, i.e. blue range (400-500 nm) of spectrum. The articles indicate that blue light is efficient in enhancing the accumulation of health beneficial phytochemicals in various species. The finding is important for global food production. © 2017 Society of Chemical Industry.",
"title": ""
},
{
"docid": "ad091e4f66adb26d36abfc40377ee6ab",
"text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.",
"title": ""
},
{
"docid": "60c36aa871aaa3a13ac3b51dbb12b668",
"text": "We propose a novel approach for multi-view object detection in 3D scenes reconstructed from RGB-D sensor. We utilize shape based representation using local shape context descriptors along with the voting strategy which is supported by unsupervised object proposals generated from 3D point cloud data. Our algorithm starts with a single-view object detection where object proposals generated in 3D space are combined with object specific hypotheses generated by the voting strategy. To tackle the multi-view setting, the data association between multiple views enabled view registration and 3D object proposals. The evidence from multiple views is combined in simple bayesian setting. The approach is evaluated on the Washington RGB-D scenes datasets [1], [2] containing several classes of objects in a table top setting. We evaluated our approach against the other state-of-the-art methods and demonstrated superior performance on the same dataset.",
"title": ""
},
{
"docid": "2214493b373886c02f67ad9e411cfe66",
"text": "We identify emerging phenomena of distributed liveness, involving new relationships among performers, audiences, and technology. Liveness is a recent, technology-based construct, which refers to experiencing an event in real-time with the possibility for shared social realities. Distributed liveness entails multiple forms of physical, spatial, and social co-presence between performers and audiences across physical and virtual spaces. We interviewed expert performers about how they experience liveness in physically co-present and distributed settings. Findings show that distributed performances and technology need to support flexible social co-presence and new methods for sensing subtle audience responses and conveying engagement abstractly.",
"title": ""
},
{
"docid": "eebca83626e8568e8b92019541466873",
"text": "There is a need for new spectrum access protocols that are opportunistic, flexible and efficient, yet fair. Game theory provides a framework for analyzing spectrum access, a problem that involves complex distributed decisions by independent spectrum users. We develop a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum. We show that in high interference environments, the utility space of the game is non-convex, which may make some optimal allocations unachievable with pure strategies. However, we show that as the number of channels available increases, the utility space becomes close to convex and thus optimal allocations become achievable with pure strategies. We propose the use of the Nash Bargaining Solution and show that it achieves a good compromise between fairness and efficiency, using a small number of channels. Finally, we propose a distributed algorithm for spectrum sharing and show that it achieves allocations reasonably close to the Nash Bargaining Solution.",
"title": ""
},
{
"docid": "c7eca96393cfd88bda265fb9bcaa4630",
"text": "According to the World Health Organization, around 28–35% of people aged 65 and older fall each year. This number increases to around 32–42% for people over 70 years old. For this reason, this research targets the exploration of the role of Convolutional Neural Networks(CNN) in human fall detection. There are a number of current solutions related to fall detection; however, remain low detection accuracy. Although CNN has proven a powerful technique for image recognition problems, and the CNN library in Matlab was designed to work with either images or matrices, this research explored how to apply CNN to streaming sensor data, collected from Body Sensor Networks (BSN), in order to improve the fall detection accuracy. The idea of this research is that given the stream data sets as input, we converted them into images before applying CNN. The final accuracy result achieved is, to the best of our knowledge, the highest compared to other proposed methods: 92.3%.",
"title": ""
},
{
"docid": "1b11df93de6688a4176a7ad88232918a",
"text": "Classification of data is difficult if the data is imbalanced and classes are overlapping. In recent years, more research has started to focus on classification of imbalanced data since real world data is often skewed. Traditional methods are more successful with classifying the class that has the most samples (majority class) compared to the other classes (minority classes). For the classification of imbalanced data sets, different methods are available, although each has some advantages and shortcomings. In this study, we propose a new hierarchical decomposition method for imbalanced data sets which is different from previously proposed solutions to the class imbalance problem. Additionally, it does not require any data pre-processing step as many other solutions need. The new method is based on clustering and outlier detection. The hierarchy is constructed using the similarity of labeled data subsets at each level of the hierarchy with different levels being built by different data and feature subsets. Clustering is used to partition the data while outlier detection is utilized to detect minority class samples. The comparison of the proposed method with state of art the methods using 20 public imbalanced data sets and 181 synthetic data sets showed that the proposed method’s classification performance is better than the state of art methods. It is especially successful if the minority class is sparser than the majority class. It has accurate performance even when classes have sub-varieties and minority and majority classes are overlapping. Moreover, its performance is also good when the class imbalance ratio is low, i.e. classes are more imbalanced.",
"title": ""
},
{
"docid": "dc867c305130e728aaaa00fef5b8b688",
"text": "Large scale surveillance video analysis is one of the most important components in the future artificial intelligent city. It is a very challenging but practical system, consists of multiple functionalities such as object detection, tracking, identification and behavior analysis. In this paper, we try to address three tasks hosted in NVIDIA AI City Challenge contest. First, a system that transforming the image coordinate to world coordinate has been proposed, which is useful to estimate the vehicle speed on the road. Second, anomalies like car crash event and stalled vehicles can be found by the proposed anomaly detector framework. Third, multiple camera vehicle re-identification problem has been investigated and a matching algorithm is explained. All these tasks are based on our proposed online single camera multiple object tracking (MOT) system, which has been evaluated on the widely used MOT16 challenge benchmark. We show that it achieves the best performance compared to the state-of-the-art methods. Besides of MOT, we evaluate the proposed vehicle re-identification model on VeRi-776 dataset and it outperforms all other methods with a large margin.",
"title": ""
},
{
"docid": "bbecbf907a81e988379fe61d8d8f9f17",
"text": "In this paper, we address the problem of visual question answering by proposing a novel model, called VIBIKNet. Our model is based on integrating Kernelized Convolutional Neural Networks and Long-Short Term Memory units to generate an answer given a question about an image. We prove that VIBIKNet is an optimal trade-off between accuracy and computational load, in terms of memory and time consumption. We validate our method on the VQA challenge dataset and compare it to the top performing methods in order to illustrate its performance and speed.",
"title": ""
},
{
"docid": "7eb278200f80d5827b94cada79e54ac2",
"text": "Thanks to the development of Mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First occupancy of scan rays (empty, occupied, unknown) are defined while considering the accuracy of measurement and registration. Then the occupancy of scan rays are fused using the Weighted Dempster–Shafer theory (WDST). Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point to triangle (PTT) distance method. Changes at point level are detected fully automatically. The proposed approach allows to detect changes at large scales in urban scenes with fine detail and more importantly, distinguish real changes from occlusions.",
"title": ""
}
] |
scidocsrr
|
290db40768e847f187e056e0fa70c177
|
A Pattern-Based Approach for Multi-Class Sentiment Analysis in Twitter
|
[
{
"docid": "6a96678b14ec12cb4bb3db4e1c4c6d4e",
"text": "Emoticons are widely used to express positive or negative sentiment on Twitter. We report on a study with live users to determine whether emoticons are used to merely emphasize the sentiment of tweets, or whether they are the main elements carrying the sentiment. We found that the sentiment of an emoticon is in substantial agreement with the sentiment of the entire tweet. Thus, emoticons are useful as predictors of tweet sentiment and should not be ignored in sentiment classification. However, the sentiment expressed by an emoticon agrees with the sentiment of the accompanying text only slightly better than random. Thus, using the text accompanying emoticons to train sentiment models is not likely to produce the best results, a fact that we show by comparing lexicons generated using emoticons with others generated using simple textual features.",
"title": ""
},
{
"docid": "e59d1a3936f880233001eb086032d927",
"text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.",
"title": ""
},
{
"docid": "6f4479d224c1546040bee39d50eaba55",
"text": "Bag-of-words (BOW) is now the most popular way to model text in statistical machine learning approaches in sentiment analysis. However, the performance of BOW sometimes remains limited due to some fundamental deficiencies in handling the polarity shift problem. We propose a model called dual sentiment analysis (DSA), to address this problem for sentiment classification. We first propose a novel data expansion technique by creating a sentiment-reversed review for each training and test review. On this basis, we propose a dual training algorithm to make use of original and reversed training reviews in pairs for learning a sentiment classifier, and a dual prediction algorithm to classify the test reviews by considering two sides of one review. We also extend the DSA framework from polarity (positive-negative) classification to 3-class (positive-negative-neutral) classification, by taking the neutral reviews into consideration. Finally, we develop a corpus-based method to construct a pseudo-antonym dictionary, which removes DSA's dependency on an external antonym dictionary for review reversion. We conduct a wide range of experiments including two tasks, nine datasets, two antonym dictionaries, three classification algorithms, and two types of features. The results demonstrate the effectiveness of DSA in supervised sentiment classification.",
"title": ""
}
] |
[
{
"docid": "0f11d0d1047a79ee63896f382ae03078",
"text": "Much of the visual cortex is organized into visual field maps: nearby neurons have receptive fields at nearby locations in the image. Mammalian species generally have multiple visual field maps with each species having similar, but not identical, maps. The introduction of functional magnetic resonance imaging made it possible to identify visual field maps in human cortex, including several near (1) medial occipital (V1,V2,V3), (2) lateral occipital (LO-1,LO-2, hMT+), (3) ventral occipital (hV4, VO-1, VO-2), (4) dorsal occipital (V3A, V3B), and (5) posterior parietal cortex (IPS-0 to IPS-4). Evidence is accumulating for additional maps, including some in the frontal lobe. Cortical maps are arranged into clusters in which several maps have parallel eccentricity representations, while the angular representations within a cluster alternate in visual field sign. Visual field maps have been linked to functional and perceptual properties of the visual system at various spatial scales, ranging from the level of individual maps to map clusters to dorsal-ventral streams. We survey recent measurements of human visual field maps, describe hypotheses about the function and relationships between maps, and consider methods to improve map measurements and characterize the response properties of neurons comprising these maps.",
"title": ""
},
{
"docid": "d4954bab5fc4988141c509a6d6ab79db",
"text": "Recent advances in neural autoregressive models have improve the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE). This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.",
"title": ""
},
{
"docid": "62d23e00d13903246cc7128fe45adf12",
"text": "The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems, might want to skim or skip the mathematical parts.)",
"title": ""
},
{
"docid": "8fd97add7e3b48bad9fd82dc01422e59",
"text": "Anaerobic nitrate-dependent Fe(II) oxidation is widespread in various environments and is known to be performed by both heterotrophic and autotrophic microorganisms. Although Fe(II) oxidation is predominantly biological under acidic conditions, to date most of the studies on nitrate-dependent Fe(II) oxidation were from environments of circumneutral pH. The present study was conducted in Lake Grosse Fuchskuhle, a moderately acidic ecosystem receiving humic acids from an adjacent bog, with the objective of identifying, characterizing and enumerating the microorganisms responsible for this process. The incubations of sediment under chemolithotrophic nitrate-dependent Fe(II)-oxidizing conditions have shown the enrichment of TM3 group of uncultured Actinobacteria. A time-course experiment done on these Actinobacteria showed a consumption of Fe(II) and nitrate in accordance with the expected stoichiometry (1:0.2) required for nitrate-dependent Fe(II) oxidation. Quantifications done by most probable number showed the presence of 1 × 104 autotrophic and 1 × 107 heterotrophic nitrate-dependent Fe(II) oxidizers per gram fresh weight of sediment. The analysis of microbial community by 16S rRNA gene amplicon pyrosequencing showed that these actinobacterial sequences correspond to ∼0.6% of bacterial 16S rRNA gene sequences. Stable isotope probing using 13CO2 was performed with the lake sediment and showed labeling of these Actinobacteria. This indicated that they might be important autotrophs in this environment. Although these Actinobacteria are not dominant members of the sediment microbial community, they could be of functional significance due to their contribution to the regeneration of Fe(III), which has a critical role as an electron acceptor for anaerobic microorganisms mineralizing sediment organic matter. To the best of our knowledge this is the first study to show the autotrophic nitrate-dependent Fe(II)-oxidizing nature of TM3 group of uncultured Actinobacteria.",
"title": ""
},
{
"docid": "60e06e3eebafa9070eecf1ab1e9654f8",
"text": "In most enterprises, databases are deployed on dedicated database servers. Often, these servers are underutilized much of the time. For example, in traces from almost 200 production servers from different organizations, we see an average CPU utilization of less than 4%. This unused capacity can be potentially harnessed to consolidate multiple databases on fewer machines, reducing hardware and operational costs. Virtual machine (VM) technology is one popular way to approach this problem. However, as we demonstrate in this paper, VMs fail to adequately support database consolidation, because databases place a unique and challenging set of demands on hardware resources, which are not well-suited to the assumptions made by VM-based consolidation.\n Instead, our system for database consolidation, named Kairos, uses novel techniques to measure the hardware requirements of database workloads, as well as models to predict the combined resource utilization of those workloads. We formalize the consolidation problem as a non-linear optimization program, aiming to minimize the number of servers and balance load, while achieving near-zero performance degradation. We compare Kairos against virtual machines, showing up to a factor of 12× higher throughput on a TPC-C-like benchmark. We also tested the effectiveness of our approach on real-world data collected from production servers at Wikia.com, Wikipedia, Second Life, and MIT CSAIL, showing absolute consolidation ratios ranging between 5.5:1 and 17:1.",
"title": ""
},
{
"docid": "c85c3ef7100714d6d08f726aa8768bb9",
"text": "An adaptive Kalman filter algorithm is adopted to estimate the state of charge (SOC) of a lithium-ion battery for application in electric vehicles (EVs). Generally, the Kalman filter algorithm is selected to dynamically estimate the SOC. However, it easily causes divergence due to the uncertainty of the battery model and system noise. To obtain a better convergent and robust result, an adaptive Kalman filter algorithm that can greatly improve the dependence of the traditional filter algorithm on the battery model is employed. In this paper, the typical characteristics of the lithium-ion battery are analyzed by experiment, such as hysteresis, polarization, Coulomb efficiency, etc. In addition, an improved Thevenin battery model is achieved by adding an extra RC branch to the Thevenin model, and model parameters are identified by using the extended Kalman filter (EKF) algorithm. Further, an adaptive EKF (AEKF) algorithm is adopted to the SOC estimation of the lithium-ion battery. Finally, the proposed method is evaluated by experiments with federal urban driving schedules. The proposed SOC estimation using AEKF is more accurate and reliable than that using EKF. The comparison shows that the maximum SOC estimation error decreases from 14.96% to 2.54% and that the mean SOC estimation error reduces from 3.19% to 1.06%.",
"title": ""
},
{
"docid": "cc7a9ea0641544182f2d56e7414617c3",
"text": "Findings showed that the nonconscious activation of a goal in memory led to increased positive implicit attitudes toward stimuli that could facilitate the goal. This evaluative readiness to pursue the nonconscious goal emerged even when participants were consciously unaware of the goal-relevant stimuli. The effect emerged the most strongly for those with some skill at the goal and for those for whom the goal was most currently important. The effect of implicit goal activation on implicit attitudes emerged in both an immediate condition as well as a delay condition, suggesting that a goal rather than a nonmotivational construct was activated. Participants' implicit attitudes toward a nonconscious goal also predicted their goal-relevant behavior. These findings suggest that people can become evaluatively ready to pursue a goal whenever it has been activated--a readiness that apparently does not require conscious awareness or deliberation about either the goal or the goal-relevant stimuli. Theoretical implications of this type of implicit goal readiness are discussed.",
"title": ""
},
{
"docid": "367782d15691c3c1dfd25220643752f0",
"text": "Music streaming services increasingly incorporate additional music taxonomies (i.e., mood, activity, and genre) to provide users different ways to browse through music collections. However, these additional taxonomies can distract the user from reaching their music goal, and influence choice satisfaction. We conducted an online user study with an application called \"Tune-A-Find,\" where we measured participants' music taxonomy choice (mood, activity, and genre). Among 297 participants, we found that the chosen taxonomy is related to personality traits. We found that openness to experience increased the choice for browsing music by mood, while conscientiousness increased the choice for browsing music by activity. In addition, those high in neuroticism were most likely to browse for music by activity or genre. Our findings can support music streaming services to further personalize user interfaces. By knowing the user's personality, the user interface can adapt to the user's preferred way of music browsing.",
"title": ""
},
{
"docid": "e13fc2c9f5aafc6c8eb1909592c07a70",
"text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped, when training with DropConnect we drop a randomly subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].",
"title": ""
},
{
"docid": "4451f35b38f0b3af0ff006d8995b0265",
"text": "Social media together with still growing social media communities has become a powerful and promising solution in crisis and emergency management. Previous crisis events have proved that social media and mobile technologies used by citizens (widely) and public services (to some extent) have contributed to the post-crisis relief efforts. The iSAR+ EU FP7 project aims at providing solutions empowering citizens and PPDR (Public Protection and Disaster Relief) organizations in online and mobile communications for the purpose of crisis management especially in search and rescue operations. This paper presents the results of survey aiming at identification of preliminary end-user requirements in the close interworking with end-users across Europe.",
"title": ""
},
{
"docid": "be7cc41f9e8d3c9e08c5c5ff1ea79f59",
"text": "A person’s emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: “The face is the portrait of the mind; the eyes, its informers.”. This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and",
"title": ""
},
{
"docid": "4a609cf0c9f862f1c20155b239629b90",
"text": "Intuitive access to information in habitual real-world environments is a challenge for information technology. An important question is how can we enhance established and well-functioning everyday environments rather than replace them by virtual environments (VEs)? Augmented reality (AR) technology has a lot of potential in this respect because it augments real-world environments with computer-generated imagery. Today, most AR systems use see-through head-mounted displays, which share most of the disadvantages of other head-attached display devices.",
"title": ""
},
{
"docid": "f174469e907b60cd481da6b42bafa5f9",
"text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.",
"title": ""
},
{
"docid": "17055a66f80354bf5a614a510a4ef689",
"text": "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for crossmodal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.",
"title": ""
},
{
"docid": "70f672268ae0b3e0e344a4f515057e6b",
"text": "Murder-suicide, homicide-suicide, and dyadic death all refer to an incident where a homicide is committed followed by the perpetrator's suicide almost immediately or soon after the homicide. Homicide-suicides are relatively uncommon and vary from region to region. In the selected literature that we reviewed, shooting was the common method of killing and suicide, and only 3 cases of homicidal hanging involving child victims were identified. We present a case of dyadic death where the method of killing and suicide was hanging, and the victim was a young woman.",
"title": ""
},
{
"docid": "019854be19420ba5e6badcd9adbb7dea",
"text": "We present a new shared-memory parallel algorithm and implementation called FASCIA for the problems of approximate sub graph counting and sub graph enumeration. The problem of sub graph counting refers to determining the frequency of occurrence of a given sub graph (or template) within a large network. This is a key graph analytic with applications in various domains. In bioinformatics, sub graph counting is used to detect and characterize local structure (motifs) in protein interaction networks. Exhaustive enumeration and exact counting is extremely compute-intensive, with running time growing exponentially with the number of vertices in the template. In this work, we apply the color coding technique to determine approximate counts of non-induced occurrences of the sub graph in the original network. Color coding gives a fixed-parameter algorithm for this problem, using a dynamic programming-based counting approach. Our new contributions are a multilevel shared-memory parallelization of the counting scheme and several optimizations to reduce the memory footprint. We show that approximate counts can be obtained for templates with up to 12 vertices, on networks with up to millions of vertices and edges. Prior work on this problem has only considered out-of-core parallelization on distributed platforms. With our new counting scheme, data layout optimizations, and multicore parallelism, we demonstrate a significant speedup over the current state-of-the-art for sub graph counting.",
"title": ""
},
{
"docid": "ff24e5e100d26c9de2bde8ae8cd7fec4",
"text": "The Global Positioning System (GPS) grows into a ubiquitous utility that provides positioning, navigation, and timing (PNT) services. As an essential element of the global information infrastructure, cyber security of GPS faces serious challenges. Some mission-critical systems even rely on GPS as a security measure. However, civilian GPS itself has no protection against malicious acts such as spoofing. GPS spoofing breaches authentication by forging satellite signals to mislead users with wrong location/timing data that threatens homeland security. In order to make civilian GPS secure and resilient for diverse applications, we must understand the nature of attacks. This paper proposes a novel attack modeling of GPS spoofing with event-driven simulation package. Simulation supplements usual experiments to limit incidental harms and to comprehend a surreptitious scenario. We also provide taxonomy of GPS spoofing through characterization. The work accelerates the development of defense technology against GPS-based attacks.",
"title": ""
},
{
"docid": "411d3048bd13f48f0c31259c41ff2903",
"text": "In computer vision, object detection is addressed as one of the most challenging problems as it is prone to localization and classification error. The current best-performing detectors are based on the technique of finding region proposals in order to localize objects. Despite having very good performance, these techniques are computationally expensive due to having large number of proposed regions. In this paper, we develop a high-confidence region-based object detection framework that boosts up the classification performance with less computational burden. In order to formulate our framework, we consider a deep network that activates the semantically meaningful regions in order to localize objects. These activated regions are used as input to a convolutional neural network (CNN) to extract deep features. With these features, we train a set of class-specific binary classifiers to predict the object labels. Our new region-based detection technique significantly reduces the computational complexity and improves the performance in object detection. We perform rigorous experiments on PASCAL, SUN, MIT-67 Indoor and MSRC datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in recognizing objects.",
"title": ""
},
{
"docid": "5bb98a6655f823b38c3866e6d95471e9",
"text": "This article describes the HR Management System in place at Sears. Key emphases of Sears' HR management infrastructure include : (1) formulating and communicating a corporate mission, vision, and goals, (2) employee education and development through the Sears University, (3) performance management and incentive compensation systems linked closely to the firm's strategy, (4) validated employee selection systems, and (5) delivering the \"HR Basics\" very competently. Key challenges for the future include : (1) maintaining momentum in the performance improvement process, (2) identifying barriers to success, and (3) clearly articulating HR's role in the change management process . © 1999 John Wiley & Sons, Inc .",
"title": ""
}
] |
scidocsrr
|
1e92c8fb3c8b1435d830daeb255d9f41
|
DISTRIBUTED TRAINING
|
[
{
"docid": "f2334ce1d717a8f6e91771f95a00b46e",
"text": "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn’t incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1.",
"title": ""
},
{
"docid": "6fdb3ae03e6443765c72197eb032f4a0",
"text": "This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of environmental variability are introduced by the use of desk-top microphones and different training and testing conditions: additive noise and spectral tilt introduced by linear filtering. An important attribute of the novel compensation algorithms described in this thesis is that they provide joint rather than independent compensation for these two types of degradation. Acoustical compensation is applied in our algorithms as an additive correction in the cepstral domain. This allows a higher degree of integration within SPHINX, the Carnegie Mellon speech recognition system, that uses the cepstrum as its feature vector. Therefore, these algorithms can be implemented very efficiently. Processing in many of these algorithms is based on instantaneous signal-to-noise ratio (SNR), as the appropriate compensation represents a form of noise suppression at low SNRs and spectral equalization at high SNRs. The compensation vectors for additive noise and spectral transformations are estimated by minimizing the differences between speech feature vectors obtained from a \"standard\" training corpus of speech and feature vectors that represent the current acoustical environment. In our work this is accomplished by a minimizing the distortion of vector-quantized cepstra that are produced by the feature extraction module in SPHINX. In this dissertation we describe several algorithms including the SNR-Dependent Cepstral Normalization, (SDCN) and the Codeword-Dependent Cepstral Normalization (CDCN). With CDCN, the accuracy of SPHINX when trained on speech recorded with a close-talking microphone and tested on speech recorded with a desk-top microphone is essentially the same obtained when the system is trained and tested on speech from the desk-top microphone. An algorithm for frequency normalization has also been proposed in which the parameter of the bilinear transformation that is used by the signal-processing stage to produce frequency warping is adjusted for each new speaker and acoustical environment. The optimum value of this parameter is again chosen to minimize the vector-quantization distortion between the standard environment and the current one. In preliminary studies, use of this frequency normalization produced a moderate additional decrease in the observed error rate.",
"title": ""
}
] |
[
{
"docid": "538f1b131a9803db07ab20f202ecc96e",
"text": "In this paper, we propose a direction-of-arrival (DOA) estimation method by combining multiple signal classification (MUSIC) of two decomposed linear arrays for the corresponding coprime array signal processing. The title “DECOM” means that, first, the nonlinear coprime array needs to be DECOMposed into two linear arrays, and second, Doa Estimation is obtained by COmbining the MUSIC results of the linear arrays, where the existence and uniqueness of the solution are proved. To reduce the computational complexity of DECOM, we design a two-phase adaptive spectrum search scheme, which includes a coarse spectrum search phase and then a fine spectrum search phase. Extensive simulations have been conducted and the results show that the DECOM can achieve accurate DOA estimation under different SNR conditions.",
"title": ""
},
{
"docid": "6df12ee53551f4a3bd03bca4ca545bf1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
},
{
"docid": "ecda448df7b28ea5e453c179206e91a4",
"text": "The cloud infrastructure provider (CIP) in a cloud computing platform must provide security and isolation guarantees to a service provider (SP), who builds the service(s) for such a platform. We identify last level cache (LLC) sharing as one of the impediments to finer grain isolation required by a service, and advocate two resource management approaches to provide performance and security isolation in the shared cloud infrastructure - cache hierarchy aware core assignment and page coloring based cache partitioning. Experimental results demonstrate that these approaches are effective in isolating cache interference impacts a VM may have on another VM. We also incorporate these approaches in the resource management (RM) framework of our example cloud infrastructure, which enables the deployment of VMs with isolation enhanced SLAs.",
"title": ""
},
{
"docid": "d130c6eed44a863e8c8e3bb9c392eb32",
"text": "This study presents narrow-band measurements of the mobile vehicle-to-vehicle propagation channel at 5.9 GHz, under realistic suburban driving conditions in Pittsburgh, Pennsylvania. Our system includes differential Global Positioning System (DGPS) receivers, thereby enabling dynamic measurements of how large-scale path loss, Doppler spectrum, and coherence time depend on vehicle location and separation. A Nakagami distribution is used for describing the fading statistics. The speed-separation diagram is introduced as a new tool for analyzing and understanding the vehicle-to-vehicle propagation environment. We show that this diagram can be used to model and predict channel Doppler spread and coherence time using vehicle speed and separation.",
"title": ""
},
{
"docid": "43c3d477fdadea837f74897facf496e4",
"text": "Aerial robots provide valuable support in several high-risk scenarios thanks to their capability to quickly fly to locations dangerous or even inaccessible to humans. In order to fully benefit from these features, aerial robots should be easy to transport and rapid to deploy. With this aim, this paper focuses on the development of a novel pocket sized quadrotor with foldable arms. The quadrotor can be packaged for transportation by folding its arms around the main frame. Before flight, the quadrotor's arms self-deploy in 0.3 seconds thanks to the torque generated by the propellers. The paper describes the design strategies used for developing lightweight, stiff and self-deployable foldable arms for miniature quadrotors. The arms are manufactured according to an origami technique with a foldable multi-layer material. A prototype of the quadrotor is presented as a proof of concept and performance of the system is assessed.",
"title": ""
},
{
"docid": "1ace2a8a8c6b4274ac0891e711d13190",
"text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.",
"title": ""
},
{
"docid": "e0edda10185fcf75428d371116f37213",
"text": "Building upon self-regulated learning theories, we examined the nature of student writing goals and the relationship of these writing goals to revision alone and in combination with two other important sources of students’ self-regulated revision—peer comments on their writing, and reflections for their own writing obtained from reviewing others’ writing. Data were obtained from a large introductory undergraduate class in the context of two 1000-word writing assignments involving online peer review and a required revision. We began with an investigation of students’ free response learning goals and a follow-up quantitative survey about the nature and structure of these writing goals. We found that: (a) students tended to create high-level substantive goals more often, (b) students change their writing goals across papers even for a very similar assignment, and (c) their writing goals divide into three dimensions: general writing goals, genre writing goals, and assignment goals. We then closely coded and analyzed the relative levels of association of revision changes with writing goals, peer comments, reflections from peer review, and combinations of these sources. Findings suggest that high-level revisions are commonly associated with writing goals, are especially likely to occur for combinations of the three sources, and peer comments alone appeared to make the largest contributions to revision.",
"title": ""
},
{
"docid": "341b6ae3f5cf08b89fb573522ceeaba1",
"text": "Neural parsers have benefited from automatically labeled data via dependencycontext word embeddings. We investigate training character embeddings on a word-based context in a similar way, showing that the simple method significantly improves state-of-the-art neural word segmentation models, beating tritraining baselines for leveraging autosegmented data.",
"title": ""
},
{
"docid": "f028bf7bbaa4d182013771e9079b5e21",
"text": "Hepatoblastoma (HB), a primary liver tumor in childhood, is often accompanied by alpha-fetoprotein (AFP) secretion, and sometimes by β-human chorionic gonadotropin hormone (β-hCG) secretion, and this can cause peripheral precocious puberty (PPP). We describe a case of PPP associated with HB. Laboratory tests showed an increase in AFP, β-hCG and testosterone values, and suppression of follicle-stimulating hormone and luteinizing hormone levels. After chemotherapy and surgery, AFP, β-hCG and testosterone levels normalized and signs of virilization did not progress further. The child did not show evidence for tumor recurrence after 16 months of follow-up. New therapeutic approaches and early diagnosis may ensure a better prognosis of virilizing HB, than reported in the past. Assessment of PPP should always take into account the possibility of a tumoral source.",
"title": ""
},
{
"docid": "18beb6ddcc1c8bb3e45dbd56b34a8776",
"text": "This paper discusses the minimization of the line- and motor-side harmonics in a high power current source drive system. The proposed control achieves speed regulation over the entire speed range with enhanced transient performance and minimal harmonic distortion of the line and motor currents while limiting the switching frequency of current source converters to a maximum of 540 Hz. To minimize the motor current harmonic distortion, space vector modulation (SVM) and selective harmonic elimination (SHE) schemes are optimally implemented according to different drive operating conditions. In order to suppress line- side resonant harmonics, an active damping method using a combination of a virtual harmonic resistor and a three-step modulation signal regulator is employed. The performance of the proposed current source drive is verified by simulation for a 1 MVA system and experiments on a 10 kVA gate-commutated thyristor (GCT) based laboratory drive system.",
"title": ""
},
{
"docid": "648a1ff0ad5b2742ff54460555287c84",
"text": "In the European academic and institutional debate, interoperability is predominantly seen as a means to enable public administrations to collaborate within Members State and across borders. The article presents a conceptual framework for ICT-enabled governance and analyses the role of interoperability in this regard. The article makes a specific reference to the exploratory research project carried out by the Information Society Unit of the Institute for Prospective Technological Studies (IPTS) of the European Commission’s Joint Research Centre on emerging ICT-enabled governance models in EU cities (EXPGOV). The aim of this project is to study the interplay between ICTs and governance processes at city level and formulate an interdisciplinary framework to assess the various dynamics emerging from the application of ICT-enabled service innovations in European cities. In this regard, the conceptual framework proposed in this article results from an action research perspective and investigation of e-governance experiences carried out in Europe. It aims to elicit the main value drivers that should orient how interoperable systems are implemented, considering the reciprocal influences that occur between these systems and different governance models in their specific context.",
"title": ""
},
{
"docid": "6379d5330037a774f9ceed4c51bda1f6",
"text": "Despite long-standing observations on diverse cytokinin actions, the discovery path to cytokinin signaling mechanisms was tortuous. Unyielding to conventional genetic screens, experimental innovations were paramount in unraveling the core cytokinin signaling circuitry, which employs a large repertoire of genes with overlapping and specific functions. The canonical two-component transcription circuitry involves His kinases that perceive cytokinin and initiate signaling, as well as His-to-Asp phosphorelay proteins that transfer phosphoryl groups to response regulators, transcriptional activators, or repressors. Recent advances have revealed the complex physiological functions of cytokinins, including interactions with auxin and other signal transduction pathways. This review begins by outlining the historical path to cytokinin discovery and then elucidates the diverse cytokinin functions and key signaling components. Highlights focus on the integration of cytokinin signaling components into regulatory networks in specific contexts, ranging from molecular, cellular, and developmental regulations in the embryo, root apical meristem, shoot apical meristem, stem and root vasculature, and nodule organogenesis to organismal responses underlying immunity, stress tolerance, and senescence.",
"title": ""
},
{
"docid": "6b97884f9bc253e1291d816d38608093",
"text": "The World Health Organization (WHO) is currently updating the tenth version of their diagnostic tool, the International Classification of Diseases (ICD, WHO, 1992). Changes have been proposed for the diagnosis of Transsexualism (ICD-10) with regard to terminology, placement and content. The aim of this study was to gather the opinions of transgender individuals (and their relatives/partners) and clinicians in the Netherlands, Flanders (Belgium) and the United Kingdom regarding the proposed changes and the clinical applicability and utility of the ICD-11 criteria of 'Gender Incongruence of Adolescence and Adulthood' (GIAA). A total of 628 participants were included in the study: 284 from the Netherlands (45.2%), 8 from Flanders (Belgium) (1.3%), and 336 (53.5%) from the UK. Most participants were transgender people (or their partners/relatives) (n = 522), 89 participants were healthcare providers (HCPs) and 17 were both healthcare providers and (partners/relatives of) transgender people. Participants completed an online survey developed for this study. Most participants were in favor of the proposed diagnostic term of 'Gender Incongruence' and thought that this was an improvement on the ICD-10 diagnostic term of 'Transsexualism'. Placement in a separate chapter dealing with Sexual- and Gender-related Health or as a Z-code was preferred by many and only a small number of participants stated that this diagnosis should be excluded from the ICD-11. In the UK, most transgender participants thought there should be a diagnosis related to being trans. However, if it were to be removed from the chapter on \"psychiatric disorders\", many transgender respondents indicated that they would prefer it to be removed from the ICD in its entirety. There were no large differences between the responses of the transgender participants (or their partners and relatives) and HCPs. HCPs were generally positive about the GIAA diagnosis; most thought the diagnosis was clearly defined and easy to use in their practice or work. The duration of gender incongruence (several months) was seen by many as too short and required a clearer definition. If the new diagnostic term of GIAA is retained, it should not be stigmatizing to individuals. Moving this diagnosis away from the mental and behavioral chapter was generally supported. Access to healthcare was one area where retaining a diagnosis seemed to be of benefit.",
"title": ""
},
{
"docid": "f7ba998d8f4eb51619673edb66f7b3e3",
"text": "We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation defined from inferred graph translations. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.",
"title": ""
},
{
"docid": "ac1d1bf198a178cb5655768392c3d224",
"text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.",
"title": ""
},
{
"docid": "b492c624d1593515d55b3d9b6ac127a7",
"text": "We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.",
"title": ""
},
{
"docid": "40dc2dc28dca47137b973757cdf3bf34",
"text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.",
"title": ""
},
{
"docid": "74e2fc764e93b5678a3d17cbca436c9f",
"text": "B cells have a fundamental role in the pathogenesis of various autoimmune neurological disorders, not only as precursors of antibody-producing cells, but also as important regulators of the T-cell activation process through their participation in antigen presentation, cytokine production, and formation of ectopic germinal centers in the intermeningeal spaces. Two B-cell trophic factors—BAFF (B-cell-activating factor) and APRIL (a proliferation-inducing ligand)—and their receptors are strongly upregulated in many immunological disorders of the CNS and PNS, and these molecules contribute to clonal expansion of B cells in situ. The availability of monoclonal antibodies or fusion proteins against B-cell surface molecules and trophic factors provides a rational approach to the treatment of autoimmune neurological diseases. This article reviews the role of B cells in autoimmune neurological disorders and summarizes the experience to date with rituximab, a B-cell-depleting monoclonal antibody against CD20, for the treatment of relapsing–remitting multiple sclerosis, autoimmune neuropathies, neuromyelitis optica, paraneoplastic neurological disorders, myasthenia gravis, and inflammatory myopathies. It is expected that ongoing controlled trials will establish the efficacy and long-term safety profile of anti-B-cell agents in several autoimmune neurological disorders, as well as exploring the possibility of a safe and synergistic effect with other immunosuppressants or immunomodulators.",
"title": ""
},
{
"docid": "55bdb8b6f4dd3dc836e9751ae8d721e3",
"text": "Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "bf079d5c13d37a57e835856df572a306",
"text": "Paraphrase Detection is the task of examining if two sentences convey the same meaning or not. Here, in this paper, we have chosen a sentence embedding by unsupervised RAE vectors for capturing syntactic as well as semantic information. The RAEs learn features from the nodes of the parse tree and chunk information along with unsupervised word embedding. These learnt features are used for measuring phrase wise similarity between two sentences. Since sentences are of varying length, we use dynamic pooling for getting a fixed sized representation for sentences. This fixed sized sentence representation is the input to the classifier. The DPIL (Detecting Paraphrases in Indian Languages) dataset is used for paraphrase identification here. Initially, paraphrase identification is defined as a 2-class problem and then later, it is extended to a 3-class problem. Word2vec and Glove embedding techniques producing 100, 200 and 300 dimensional vectors are used to check variation in accuracies. The baseline system accuracy obtained using word2vec for 2-class problem is 77.67% and the same for 3-class problem is 66.07%. Glove gave an accuracy of 77.33% for 2-class and 65.42% for 3-classproblem. The results are also compared with the existing open source word embedding and our system using Word2vec embedding is found to outperform better. This is a first attempt using chunking based approach for identification of Malayalam paraphrases.",
"title": ""
}
] |
scidocsrr
|
a3b97a7c122b6065d637951d9ce67691
|
Cross-scenario clothing retrieval and fine-grained style recognition
|
[
{
"docid": "c1f6052ecf802f1b4b2e9fd515d7ea15",
"text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.",
"title": ""
},
{
"docid": "ed5fad1aee50a98f16a6e6d2ced7fe2e",
"text": "We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.",
"title": ""
}
] |
[
{
"docid": "0acd46f97e516e5b6fc15a7716d4247b",
"text": "Proposing that the algorithms of social life are acquired as a domain-based process, the author offers distinctions between social domains preparing the individual for proximity-maintenance within a protective relationship (attachment domain), use and recognition of social dominance (hierarchical power domain), identification and maintenance of the lines dividing \"us\" and \"them\" (coalitional group domain), negotiation of matched benefits with functional equals (reciprocity domain), and selection and protection of access to sexual partners (mating domain). Flexibility in the implementation of domains occurs at 3 different levels: versatility at a bioecological level, variations in the cognitive representation of individual experience, and cultural and individual variations in the explicit management of social life. Empirical evidence for domain specificity was strongest for the attachment domain; supportive evidence was also found for the distinctiveness of the 4 other domains. Implications are considered at theoretical and applied levels.",
"title": ""
},
{
"docid": "eb2d29417686cc86a45c33694688801f",
"text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.",
"title": ""
},
{
"docid": "f773798785419625b8f283fc052d4ab2",
"text": "The increasing interest in energy storage for the grid can be attributed to multiple factors, including the capital costs of managing peak demands, the investments needed for grid reliability, and the integration of renewable energy sources. Although existing energy storage is dominated by pumped hydroelectric, there is the recognition that battery systems can offer a number of high-value opportunities, provided that lower costs can be obtained. The battery systems reviewed here include sodium-sulfur batteries that are commercially available for grid applications, redox-flow batteries that offer low cost, and lithium-ion batteries whose development for commercial electronics and electric vehicles is being applied to grid storage.",
"title": ""
},
{
"docid": "1feaf48291b7ea83d173b70c23a3b7c0",
"text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).",
"title": ""
},
{
"docid": "3c891452e416c5faa3da8b6e32a57b3f",
"text": "Linear support vector machines (svms) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based svms are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel svms. We generalise this model to locally finite dimensional kernel svm.",
"title": ""
},
{
"docid": "04e4c1b80bcf1a93cafefa73563ea4d3",
"text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.",
"title": ""
},
{
"docid": "a693eeae7abe600c11da8d5dedabbcf9",
"text": "Objectives: This study was designed to investigate psychometric properties of the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE), and to examine correlations between its scores and measures of overall satisfaction with physicians, personal trust, and indicators of patient compliance. Methods: Research participants included 535 out-patients (between 18-75 years old, 66% female). A survey was mailed to participants which included the JSPPPE (5-item), a scale for measuring overall satisfaction with the primary care physician (10-item), and demographic questions. Patients were also asked about compliance with their physician’s recommendation for preventive tests (colonoscopy, mammogram, and PSA for age and gender appropriate patients). Results: Factor analysis of the JSPPPE resulted in one prominent component. Corrected item-total score correlations ranged from .88 to .94. Correlation between scores of the JSPPPE and scores on the patient satisfaction scale was 0.93. Scores of the JSPPPE were highly correlated with measures of physician-patient trust (r >.73). Higher scores of the JSPPPE were significantly associated with physicians’ recommendations for preventive tests (colonoscopy, mammogram, and PSA) and with compliance rates which were > .80). Cronbach’s coefficient alpha for the JSPPPE ranged from .97 to .99 for the total sample and for patients in different gender and age groups. Conclusions: Empirical evidence supported the psychometrics of the JSPPPE, and confirmed significant links with patients’ satisfaction with their physicians, interpersonal trust, and compliance with physicians’ recommendations. Availability of this psychometrically sound instrument will facilitate empirical research on empathy in patient care in different countries.",
"title": ""
},
{
"docid": "3d7fabdd5f56c683de20640abccafc44",
"text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.",
"title": ""
},
{
"docid": "720778ca4d6d8eb0fa78eecb1ebbb527",
"text": "Address spoofing attacks like ARP spoofing and DDoS attacks are mostly launched in a networking environment to degrade the performance. These attacks sometimes break down the network services before the administrator comes to know about the attack condition. Software Defined Networking (SDN) has emerged as a novel network architecture in which date plane is isolated from the control plane. Control plane is implemented at a central device called controller. But, SDN paradigm is not commonly used due to some constraints like budget, limited skills to control SDN, the flexibility of traditional protocols. To get SDN benefits in a traditional network, a limited number of SDN devices can be deployed among legacy devices. This technique is called hybrid SDN. In this paper, we propose a new approach to automatically detect the attack condition and mitigate that attack in hybrid SDN. We represent the network topology in the form of a graph. A graph based traversal mechanism is adopted to indicate the location of the attacker. Simulation results show that our approach enhances the network efficiency and improves the network security Keywords—Communication system security; Network Security; ARP Spoofing Introduction",
"title": ""
},
{
"docid": "27cc510f79a4ed76da42046b49bbb9fd",
"text": "This article reports the orthodontic treatment ofa 25-year-old female patient whose chief complaint was the inclination of the maxillary occlusal plane in front view. The individualized vertical placement of brackets is described. This placement made possible a symmetrical occlusal plane to be achieved in a rather straightforward manner without the need for further technical resources.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "304b4cee4006e87fc4172a3e9de88ed1",
"text": "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs—a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5–10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.",
"title": ""
},
{
"docid": "6a55a097f27609ad50e94f0947d0e72c",
"text": "This study develops an antenatal care information system to assist women during pregnancy. We designed and implemented the system as both a web-based service and a multi-platform application for smartphones and tablets. The proposed system has three novel features: (1) web-based maternity records, which contains concise explanations of various antenatal screening and diagnostic tests; (2) self-care journals, which allow pregnant women to keep track of their gestational weight gains, blood pressure, fetal movements, and contractions; and (3) health education, which automatically presents detailed information on antenatal care and other pregnancy-related knowledge according to the women's gestational age. A survey was conducted among pregnant women to evaluate the usability and acceptance of the proposed system. In order to prove that the antenatal care was effective, clinical outcomes should be provided and the results are focused on a usability evaluation.",
"title": ""
},
{
"docid": "6fab26c4c8fa05390aa03998a748f87d",
"text": "Click prediction is one of the fundamental problems in sponsored search. Most of existing studies took advantage of machine learning approaches to predict ad click for each event of ad view independently. However, as observed in the real-world sponsored search system, user’s behaviors on ads yield high dependency on how the user behaved along with the past time, especially in terms of what queries she submitted, what ads she clicked or ignored, and how long she spent on the landing pages of clicked ads, etc. Inspired by these observations, we introduce a novel framework based on Recurrent Neural Networks (RNN). Compared to traditional methods, this framework directly models the dependency on user’s sequential behaviors into the click prediction process through the recurrent structure in RNN. Large scale evaluations on the click-through logs from a commercial search engine demonstrate that our approach can significantly improve the click prediction accuracy, compared to sequence-independent approaches.",
"title": ""
},
{
"docid": "55dfc0e1fae2ca1fed295bc9aa270157",
"text": "The rapid development of driver fatigue detection technology indicates important significance of traffic safety. The authors' main goals of this Letter are principally three: (i) A middleware architecture, defined as process unit (PU), which can communicate with personal electroencephalography (EEG) node (PEN) and cloud server (CS). The PU receives EEG signals from PEN, recognises the fatigue state of the driver, and transfer this information to CS. The CS sends notification messages to the surrounding vehicles. (ii) An android application for fatigue detection is built. The application can be used for the driver to detect the state of his/her fatigue based on EEG signals, and warn neighbourhood vehicles. (iii) The detection algorithm for driver fatigue is applied based on fuzzy entropy. The idea of 10-fold cross-validation and support vector machine are used for classified calculation. Experimental results show that the average accurate rate of detecting driver fatigue is about 95%, which implying that the algorithm is validity in detecting state of driver fatigue.",
"title": ""
},
{
"docid": "269e2f8bca42d5369f9337aea6191795",
"text": "Today, exposure to new and unfamiliar environments is a necessary part of daily life. Effective communication of location-based information through location-based services has become a key concern for cartographers, geographers, human-computer interaction and professional designers alike. Recently, much attention was directed towards Augmented Reality (AR) interfaces. Current research, however, focuses primarily on computer vision and tracking, or investigates the needs of urban residents, already familiar with their environment. Adopting a user-centred design approach, this paper reports findings from an empirical mobile study investigating how tourists acquire knowledge about an unfamiliar urban environment through AR browsers. Qualitative and quantitative data was used in the development of a framework that shifts the perspective towards a more thorough understanding of the overall design space for such interfaces. The authors analysis provides a frame of reference for the design and evaluation of mobile AR interfaces. The authors demonstrate the application of the framework with respect to optimization of current design of AR.",
"title": ""
},
{
"docid": "fcd9a80d35a24c7222392c11d3376c72",
"text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.",
"title": ""
},
{
"docid": "37e936c375d34f356e195f844125ae84",
"text": "LEARNING OBJECTIVES\nThe reader is presumed to have a basic understanding of facial anatomy and facial rejuvenation procedures. After reading this article, the reader should also be able to: 1. Identify the essential anatomy of the face as it relates to facelift surgery. 2. Describe the common types of facelift procedures, including their strengths and weaknesses. 3. Apply appropriate preoperative and postoperative management for facelift patients. 4. Describe common adjunctive procedures. Physicians may earn 1.0 AMA PRA Category 1 Credit by successfully completing the examination based on material covered in this article. This activity should take one hour to complete. The examination begins on page 464. As a measure of the success of the education we hope you will receive from this article, we encourage you to log on to the Aesthetic Society website and take the preexamination before reading this article. Once you have completed the article, you may then take the examination again for CME credit. The Aesthetic Society will be able to compare your answers and use these data for future reference as we attempt to continually improve the CME articles we offer. ASAPS members can complete this CME examination online by logging on to the ASAPS members-only website (http://www.surgery.org/members) and clicking on \"Clinical Education\" in the menu bar. Modern aesthetic surgery of the face began in the first part of the 20th century in the United States and Europe. Initial limited excisions gradually progressed to skin undermining and eventually to a variety of methods for contouring the subcutaneous facial tissue. This particular review focuses on the cheek and neck. While the lid-cheek junction, eyelids, and brow must also be considered to obtain a harmonious appearance, those elements are outside the scope of this article. Overall patient management, including patient selection, preoperative preparation, postoperative care, and potential complications are discussed.",
"title": ""
},
{
"docid": "cc05dca89bf1e3f53cf7995e547ac238",
"text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.",
"title": ""
},
{
"docid": "77335856af8b62ae2e1fcd10654ed9a1",
"text": "Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.",
"title": ""
}
] |
scidocsrr
|
c8f59650002f716fa244065bdee10466
|
A Sarcasm Extraction Method Based on Patterns of Evaluation Expressions
|
[
{
"docid": "65b34f78e3b8d54ad75d32cdef487dac",
"text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.",
"title": ""
},
{
"docid": "b485b27da4b17469a5c519538f4dcf1b",
"text": "The research described in this work focuses on identifying key components for the task of irony detection. By means of analyzing a set of customer reviews, which are considered as ironic both in social and mass media, we try to find hints about how to deal with this task from a computational point of view. Our objective is to gather a set of discriminating elements to represent irony. In particular, the kind of irony expressed in such reviews. To this end, we built a freely available data set with ironic reviews collected from Amazon. Such reviews were posted on the basis of an online viral effect; i.e. contents whose effect triggers a chain reaction on people. The findings were assessed employing three classifiers. The results show interesting hints regarding the patterns and, especially, regarding the implications for sentiment analysis.",
"title": ""
}
] |
[
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "b5215ddc7768f75fe72cdaaad9e3cdb8",
"text": "Visual saliency analysis detects salient regions/objects that attract human attention in natural scenes. It has attracted intensive research in different fields such as computer vision, computer graphics, and multimedia. While many such computational models exist, the focused study of what and how applications can be beneficial is still lacking. In this article, our ultimate goal is thus to provide a comprehensive review of the applications using saliency cues, the so-called attentive systems. We would like to provide a broad vision about saliency applications and what visual saliency can do. We categorize the vast amount of applications into different areas such as computer vision, computer graphics, and multimedia. Intensively covering 200+ publications we survey (1) key application trends, (2) the role of visual saliency, and (3) the usability of saliency into different tasks.",
"title": ""
},
{
"docid": "2833dbe3c3e576a3ba8f175a755b6964",
"text": "The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its important, traffic monitoring often introduces overhead to the network, thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction based algorithm that dynamically change the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules in the network wide. Using real-world data and three simple anomaly detectors, we show that the adaptive based counting can detect anomalies more accurately with less overhead.",
"title": ""
},
{
"docid": "2a76205b80c90ff9a4ca3ccb0434bb03",
"text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.",
"title": ""
},
{
"docid": "14724ca410a07d97857bf730624644a5",
"text": "We introduce a highly scalable approach for open-domain question answering with no dependence on any data set for surface form to logical form mapping or any linguistic analytic tool such as POS tagger or named entity recognizer. We define our approach under the Constrained Conditional Models framework which lets us scale up to a full knowledge graph with no limitation on the size. On a standard benchmark, we obtained near 4 percent improvement over the state-of-the-art in open-domain question answering task.",
"title": ""
},
{
"docid": "86f93e5facbcf5ac96ba68a8d91dda63",
"text": "Lawvere theories and monads have been the two main category theoretic formulations of universal algebra, Lawvere theories arising in 1963 and the connection with monads being established a few years later. Monads, although mathematically the less direct and less malleable formulation, rapidly gained precedence. A generation later, the definition of monad began to appear extensively in theoretical computer science in order to model computational effects, without reference to universal algebra. But since then, the relevance of universal algebra to computational effects has been recognised, leading to renewed prominence of the notion of Lawvere theory, now in a computational setting. This development has formed a major part of Gordon Plotkin’s mature work, and we study its history here, in particular asking why Lawvere theories were eclipsed by monads in the 1960’s, and how the renewed interest in them in a computer science setting might develop in future.",
"title": ""
},
{
"docid": "6224f4f3541e9cd340498e92a380ad3f",
"text": "A personal story: From philosophy to software.",
"title": ""
},
{
"docid": "5931169b6433d77496dfc638988399eb",
"text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.",
"title": ""
},
{
"docid": "58eebe0e55f038fea268b6a7a6960939",
"text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.",
"title": ""
},
{
"docid": "025932fa63b24d65f3b61e07864342b7",
"text": "The realization of the Internet of Things (IoT) paradigm relies on the implementation of systems of cooperative intelligent objects with key interoperability capabilities. One of these interoperability features concerns the cooperation among nodes towards a collaborative deployment of applications taking into account the available resources, such as electrical energy, memory, processing, and object capability to perform a given task, which are",
"title": ""
},
{
"docid": "c075c26fcfad81865c58a284013c0d33",
"text": "A novel pulse compression technique is developed that improves the axial resolution of an ultrasonic imaging system and provides a boost in the echo signal-to-noise ratio (eSNR). The new technique, called the resolution enhancement compression (REC) technique, was validated with simulations and experimental measurements. Image quality was examined in terms of three metrics: the cSNR, the bandwidth, and the axial resolution through the modulation transfer function (MTF). Simulations were conducted with a weakly-focused, single-element ultrasound source with a center frequency of 2.25 MHz. Experimental measurements were carried out with a single-element transducer (f/3) with a center frequency of 2.25 MHz from a planar reflector and wire targets. In simulations, axial resolution of the ultrasonic imaging system was almost doubled using the REC technique (0.29 mm) versus conventional pulsing techniques (0.60 mm). The -3 dB pulse/echo bandwidth was more than doubled from 48% to 97%, and maximum range sidelobes were -40 dB. Experimental measurements revealed an improvement in axial resolution using the REC technique (0.31 mm) versus conventional pulsing (0.44 mm). The -3 dB pulse/echo bandwidth was doubled from 56% to 113%, and maximum range sidelobes were observed at -45 dB. In addition, a significant gain in eSNR (9 to 16.2 dB) was achieved",
"title": ""
},
{
"docid": "405bae0d413aa4b5fef0ac8b8c639235",
"text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.",
"title": ""
},
{
"docid": "27b5e0594305a81c6fad15567ba1f3b9",
"text": "A novel approach to the design of series-fed antenna arrays has been presented, in which a modified three-way slot power divider is applied. In the proposed coupler, the power division is adjusted by changing the slot inclination with respect to the transmission line, whereas coupled transmission lines are perpendicular. The proposed modification reduces electrical length of the feeding line to <formula formulatype=\"inline\"><tex Notation=\"TeX\">$1 \\lambda$</tex></formula>, hence results in dissipation losses' reduction. The theoretical analysis and measurement results of the 2<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\, \\times \\,$</tex></formula>8 microstrip antenna array operating within 10.5-GHz frequency range are shown in the letter, proving the novel inclined-slot power divider's capability to provide appropriate power distribution and its potential application in the large antenna arrays.",
"title": ""
},
{
"docid": "491ad4b4ab179db2efd54f3149d08db5",
"text": "In robotics, Air Muscle is used as the analogy of the biological motor for locomotion or manipulation. It has advantages like the passive Damping, good power-weight ratio and usage in rough environments. An experimental test set up is designed to test both contraction and volume trapped in Air Muscle. This paper gives the characteristics of Air Muscle in terms of contraction of Air Muscle with variation of pressure at different loads and also in terms of volume of air trapped in it with variation in pressure at different loads. Braid structure of the Muscle has been described and its theoretical and experimental aspects of the characteristics of an Air Muscle are analysed.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "4f222d326bdbf006c3d8e54d2d97ba3f",
"text": "Designing autonomous vehicles for urban environments remains an unresolved problem. One major dilemma faced by autonomous cars is understanding the intention of other road users and communicating with them. To investigate one aspect of this, specifically pedestrian crossing behavior, we have collected a large dataset of pedestrian samples at crosswalks under various conditions (e.g., weather) and in different types of roads. Using the data, we analyzed pedestrian behavior from two different perspectives: the way they communicate with drivers prior to crossing and the factors that influence their behavior. Our study shows that changes in head orientation in the form of looking or glancing at the traffic is a strong indicator of crossing intention. We also found that context in the form of the properties of a crosswalk (e.g., its width), traffic dynamics (e.g., speed of the vehicles) as well as pedestrian demographics can alter pedestrian behavior after the initial intention of crossing has been displayed. Our findings suggest that the contextual elements can be interrelated, meaning that the presence of one factor may increase/decrease the influence of other factors. Overall, our work formulates the problem of pedestrian-driver interaction and sheds light on its complexity in typical traffic scenarios.",
"title": ""
},
{
"docid": "51e6db842735ae89419612bf831fce54",
"text": "In this work, we focus on automatically recognizing social conversational strategies that in human conversation contribute to building, maintaining or sometimes destroying a budding relationship. These conversational strategies include self-disclosure, reference to shared experience, praise and violation of social norms. By including rich contextual features drawn from verbal, visual and vocal modalities of the speaker and interlocutor in the current and previous turn, we can successfully recognize these dialog phenomena with an accuracy of over 80% and kappa ranging from 60-80%. Our findings have been successfully integrated into an end-to-end socially aware dialog system, with implications for virtual agents that can use rapport between user and system to improve task-oriented assistance.",
"title": ""
},
{
"docid": "71c7c98b55b2b2a9c475d4522310cfaa",
"text": "This paper studies an active underground economy which spec ializes in the commoditization of activities such as credit car d fraud, identity theft, spamming, phishing, online credential the ft, and the sale of compromised hosts. Using a seven month trace of logs c ollected from an active underground market operating on publi c Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal subs trate mature enough to steal wealth into the millions of dollars in less than one year.",
"title": ""
},
{
"docid": "f7f6f01e2858e03ae9a1313e0bb7b25f",
"text": "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.",
"title": ""
},
{
"docid": "fe11fc1282a7efc34a9efe0e81fb21d6",
"text": "Increased complexity in modern embedded systems has presented various important challenges with regard to side-channel attacks. In particular, it is common to deploy SoC-based target devices with high clock frequencies in security-critical scenarios; understanding how such features align with techniques more often deployed against simpler devices is vital from both destructive (i.e., attack) and constructive (i.e., evaluation and/or countermeasure) perspectives. In this paper, we investigate electromagnetic-based leakage from three different means of executing cryptographic workloads (including the general purpose ARM core, an on-chip co-processor, and the NEON core) on the AM335x SoC. Our conclusion is that addressing challenges of the type above is feasible, and that key recovery attacks can be conducted with modest resources.",
"title": ""
}
] |
scidocsrr
|
c693172c8adb20fab73f1efd786dbf8e
|
Being with virtual others: Neural correlates of social interaction
|
[
{
"docid": "d6f322f4dd7daa9525f778ead18c8b5e",
"text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.",
"title": ""
}
] |
[
{
"docid": "2366ab0736d4d88cd61a578b9287f9f5",
"text": "Scientific curiosity and fascination have played a key role in human research with psychedelics along with the hope that perceptual alterations and heightened insight could benefit well-being and play a role in the treatment of various neuropsychiatric disorders. These motivations need to be tempered by a realistic assessment of the hurdles to be cleared for therapeutic use. Development of a psychedelic drug for treatment of a serious psychiatric disorder presents substantial although not insurmountable challenges. While the varied psychedelic agents described in this chapter share some properties, they have a range of pharmacologic effects that are reflected in the gradation in intensity of hallucinogenic effects from the classical agents to DMT, MDMA, ketamine, dextromethorphan and new drugs with activity in the serotonergic system. The common link seems to be serotonergic effects modulated by NMDA and other neurotransmitter effects. The range of hallucinogens suggest that they are distinct pharmacologic agents and will not be equally safe or effective in therapeutic targets. Newly synthesized specific and selective agents modeled on the legacy agents may be worth considering. Defining therapeutic targets that represent unmet medical need, addressing market and commercial issues, and finding treatment settings to safely test and use such drugs make the human testing of psychedelics not only interesting but also very challenging. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.",
"title": ""
},
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "6f2162f883fce56eaa6bd8d0fbcedc0b",
"text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.",
"title": ""
},
{
"docid": "dd6b50a56b740d07f3d02139d16eeec4",
"text": "Mitochondria play a central role in the aging process. Studies in model organisms have started to integrate mitochondrial effects on aging with the maintenance of protein homeostasis. These findings center on the mitochondrial unfolded protein response (UPR(mt)), which has been implicated in lifespan extension in worms, flies, and mice, suggesting a conserved role in the long-term maintenance of cellular homeostasis. Here, we review current knowledge of the UPR(mt) and discuss its integration with cellular pathways known to regulate lifespan. We highlight how insight into the UPR(mt) is revolutionizing our understanding of mitochondrial lifespan extension and of the aging process.",
"title": ""
},
{
"docid": "95b9bed09e52824f74dd81d4b0cfcff2",
"text": "Short circuit current and transient recovery voltage arising in power systems under fault conditions can develop thermal and dielectric failures in the system and may create severe damage to the critical components. Therefore, main devices in our power system especially like circuit breaker extremely need to be tested first. Testing can be done by two ways; direct testing, and synthetic testing. For testing high voltage circuit breakers, direct testing is not economical because of high power generating capability requirement of laboratory, high installation cost, and more space. Synthetic testing is an economical method for testing of high voltage circuit breakers. In synthetic test circuit, it is quite complex to choose the circuit components value for a desired transient recovery voltage (TRV) envelope. It is because, modification of any component value may cause change in all parameters of output waveform. This paper proposes a synthesis process to design synthetic test circuit to generate four-parameter transient recovery voltage (TRV) envelope for circuit breaker testing. A synthetic test circuit has been simulated in PSCAD to generate four-parameter TRV envelope for 145kV rating of circuit breaker.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "ab9416aaed78f3b1d6706ecd59c83db8",
"text": "The ArchiMate modelling language provides a coherent and a holistic view of an enterprise in terms of its products, services, business processes, actors, business units, software applications and more. Yet, ArchiMate currently lacks (1) expressivity in modelling an enterprise from a value exchange perspective, and (2) rigour and guidelines in modelling business processes that realize the transactions relevant from a value perspective. To address these issues, we show how to connect e $$^{3}$$ value, a technique for value modelling, to ArchiMate via transaction patterns from the DEMO methodology. Using ontology alignment techniques, we show a transformation between the meta models underlying e $$^{3}$$ value, DEMO and ArchiMate. Furthermore, we present a step-wise approach that shows how this model transformation is achieved and, in doing so, we also show the of such a transformation. We exemplify the transformation of DEMO and e $$^{3}$$ value into ArchiMate by means of a case study in the insurance industry. As a proof of concept, we present a software tool supporting our transformation approach. Finally, we discuss the functionalities and limitations of our approach; thereby, we analyze its and practical applicability.",
"title": ""
},
{
"docid": "815355c0a4322fa15af3a1112e56fc50",
"text": "People believe that depth plays an important role in success of deep neural networks (DNN). However, this belief lacks solid theoretical justifications as far as we know. We investigate role of depth from perspective of margin bound. In margin bound, expected error is upper bounded by empirical margin error plus Rademacher Average (RA) based capacity term. First, we derive an upper bound for RA of DNN, and show that it increases with increasing depth. This indicates negative impact of depth on test performance. Second, we show that deeper networks tend to have larger representation power (measured by Betti numbers based complexity) than shallower networks in multi-class setting, and thus can lead to smaller empirical margin error. This implies positive impact of depth. The combination of these two results shows that for DNN with restricted number of hidden units, increasing depth is not always good since there is a tradeoff between positive and negative impacts. These results inspire us to seek alternative ways to achieve positive impact of depth, e.g., imposing margin-based penalty terms to cross entropy loss so as to reduce empirical margin error without increasing depth. Our experiments show that in this way, we achieve significantly better test performance.",
"title": ""
},
{
"docid": "aca04e624f1c3dcd3f0ab9f9be1ef384",
"text": "In this paper, a novel three-phase parallel grid-connected multilevel inverter topology with a novel switching strategy is proposed. This inverter is intended to feed a microgrid from renewable energy sources (RES) to overcome the problem of the polluted sinusoidal output in classical inverters and to reduce component count, particularly for generating a multilevel waveform with a large number of levels. The proposed power converter consists of <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula> two-level <inline-formula> <tex-math notation=\"LaTeX\">$(n+1)$</tex-math></inline-formula> phase inverters connected in parallel, where <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula> is the number of RES. The more the number of RES, the more the number of voltage levels, the more faithful is the output sinusoidal waveform. In the proposed topology, both voltage pulse width and height are modulated and precalculated by using a pulse width and height modulation so as to reduce the number of switching states (i.e., switching losses) and the total harmonic distortion. The topology is investigated through simulations and validated experimentally with a laboratory prototype. Compliance with the <inline-formula><tex-math notation=\"LaTeX\">$\\text{IEEE 519-1992}$</tex-math></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$\\text{IEC 61000-3-12}$</tex-math></inline-formula> standards is presented and an exhaustive comparison of the proposed topology is made against the classical cascaded H-bridge topology.",
"title": ""
},
{
"docid": "8ec871d495cf8d796654015896e2dcd2",
"text": "Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers’ actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology—traffic lights and stop signs. Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach.",
"title": ""
},
{
"docid": "fac03559daded831095dfc9e083b794d",
"text": "Multi-label classification is prevalent in many real-world applications, where each example can be associated with a set of multiple labels simultaneously. The key challenge of multi-label classification comes from the large space of all possible label sets, which is exponential to the number of candidate labels. Most previous work focuses on exploiting correlations among different labels to facilitate the learning process. It is usually assumed that the label correlations are given beforehand or can be derived directly from data samples by counting their label co-occurrences. However, in many real-world multi-label classification tasks, the label correlations are not given and can be hard to learn directly from data samples within a moderate-sized training set. Heterogeneous information networks can provide abundant knowledge about relationships among different types of entities including data samples and class labels. In this paper, we propose to use heterogeneous information networks to facilitate the multi-label classification process. By mining the linkage structure of heterogeneous information networks, multiple types of relationships among different class labels and data samples can be extracted. Then we can use these relationships to effectively infer the correlations among different class labels in general, as well as the dependencies among the label sets of data examples inter-connected in the network. Empirical studies on real-world tasks demonstrate that the performance of multi-label classification can be effectively boosted using heterogeneous information net- works.",
"title": ""
},
{
"docid": "15881d5448e348c6e1a63e195daa68eb",
"text": "Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior reconstructed images and yields higher values of pixel-wise measures of reconstruction quality (PSNR and SSIM) compared to bottleneck autoencoders. In addition, we find that using alternative metrics that correlate better with human perception, such as feature perceptual loss and the classification accuracy, sparse image compression scores up to 18.06% and 2.7% higher, respectively, compared to bottleneck autoencoders. Although computationally much more intensive, we find that sparse coding is otherwise superior to bottleneck autoencoders for the same degree of compression.",
"title": ""
},
{
"docid": "c618caa277af7a0a64dd676bffab9cd3",
"text": "Theoretical and empirical research documents a negative relation between the cross-section of stock returns and individual skewness. Individual skewness has been de
ned with coskewness, industry groups, predictive models, and even with options skewness. However, measures of skewness computed only from stock returns, such as historical skewness, do not con
rm this negative relation. In this paper, we propose a model-free measure of individual stock skewness directly obtained from high-frequency intraday prices, which we call realized skewness. We hypothesize that realized skewness predicts future stock returns. To test this hypothesis, we sort stocks every week according to realized skewness, form
ve portfolios and analyze subsequent weekly returns. We
nd a negative relation between realized skewness and stock returns in the cross section. A trading strategy that buys stocks in the lowest realized skewness quintile and sells stocks in the highest realized skewness quintile generates an average raw return of 38 basis points per week with a t-statistic of 9.15. This result is robust to di¤erent market periods, portfolio weightings,
rm characteristics and is not explained by linear factor models. Comments are welcome. We both want to thank IFM for
nancial support. Any remaining inadequacies are ours alone. Correspondence to: Aurelio Vasquez, Faculty of Management, McGill University, 1001 Sherbrooke Street West, Montreal, Quebec, Canada, H3A 1G5; Tel: (514) 398-4000 x.00231; E-mail: Aurelio.Vasquez@mcgill.ca.",
"title": ""
},
{
"docid": "2708052c26111d54ba2c235afa26f71f",
"text": "Reinforcement Learning (RL) has been an interesting research area in Machine Learning and AI. Hierarchical Reinforcement Learning (HRL) that decomposes the RL problem into sub-problems where solving each of which will be more powerful than solving the entire problem will be our concern in this paper. A review of the state-of-the-art of HRL has been investigated. Different HRL-based domains have been highlighted. Different problems in such different domains along with some proposed solutions have been addressed. It has been observed that HRL has not yet been surveyed in the current existing research; the reason that motivated us to work on this paper. Concluding remarks are presented. Some ideas have been emerged during the work on this research and have been proposed for pursuing a future research.",
"title": ""
},
{
"docid": "9e3a7ae57f7faf984bdf8559e7e49850",
"text": "In the late 1960s Brazil was experiencing a boom in its television and record industries, as part of the so-called “Economic Miracle” (1968 74) brought about by the military dictatorship’s opening up of the market to international capital. Censorship was introduced more or less simultaneously and responded in part to the military’s recognition of the potential power of the audio-visual media in a country in which over half of the population was illiterate or semi-literate. After the 1964 coup and until the infamous 5 Institutional Act (AI-5), introduced in 1968 to silence opposition to the regime, the left wing cultural production that had characterised the period under the government of the deposed populist president, João Goulart, had continued to flourish. Until 1968, the military had largely left the cultural scene alone to face up to the failure of its revolutionary political and cultural projects. Instead the generals focused on the brutal repression of student, trade union and grassroots activists who had collaborated with the cultural left, thus effectively depriving these artists of their public. Chico Buarque, one of the most censored performers of the period, maintains that at this moment he was saved from retreating into an introspective formalism in his songs and musical dramas by the emergence in 1965 of the televised music festivals, which became one of the most talked about events in the country (Buarque, 1979, 48). Sponsored by the television stations, which were themselves closely monitored and regulated by the government, the festivals still provided oppositional songwriters with an opportunity to re-",
"title": ""
},
{
"docid": "87f3c12df54f395b9a24ccfc4dd10aa8",
"text": "The ever increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state of the art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state of the art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "eae92d06d00d620791e6b247f8e63c36",
"text": "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.",
"title": ""
},
{
"docid": "34c1910dbd746368671b2b795114edfe",
"text": "Article history: Received: 4.7.2015. Received in revised form: 9.1.2016. Accepted: 29.1.2016. This paper presents a design of a distributed switched reluctance motor for an integrated motorfan system. Unlike a conventional compact motor structure, the rotor is distributed into the ends of the impeller blades. This distributed structure of motor makes more space for airflow to pass through so that the system efficiency is highly improved. Simultaneously, the distributed structure gives the motor a higher torque, better efficiency and heat dissipation. The paper first gives an initial design of a switched reluctance motor based on system structure constraints and output equations, then it predicts the machine performance and determines phase current and winding turns based on equivalent magnetic circuit analysis; finally it validates and refines the analytical design with 3D transient finite element analysis. It is found that the analytical performance prediction agrees well with finite element analysis results except for the weakness on core losses estimation. The results of the design shows that the distributed switched reluctance motor can produce a large torque of pretty high efficiency at specified speeds.",
"title": ""
},
{
"docid": "8a92594dbd75885002bad0dc2e658e10",
"text": "Exposure to some music, in particular classical music, has been reported to produce transient increases in cognitive performance. The authors investigated the effect of listening to an excerpt of Vivaldi's Four Seasons on category fluency in healthy older adult controls and Alzheimer's disease patients. In a counterbalanced repeated-measure design, participants completed two, 1-min category fluency tasks whilst listening to an excerpt of Vivaldi and two, 1-min category fluency tasks without music. The authors report a positive effect of music on category fluency, with performance in the music condition exceeding performance without music in both the healthy older adult control participants and the Alzheimer's disease patients. In keeping with previous reports, the authors conclude that music enhances attentional processes, and that this can be demonstrated in Alzheimer's disease.",
"title": ""
},
{
"docid": "51ecd734744b42a5fd770231d9e84785",
"text": "Within the last few years a lot of research has been done on large social and information networks. One of the principal challenges concerning complex networks is link prediction. Most link prediction algorithms are based on the underlying network structure in terms of traditional graph theory. In order to design efficient algorithms for large scale networks, researchers increasingly adapt methods from advanced matrix and tensor computations. This paper proposes a novel approach of link prediction for complex networks by means of multi-way tensors. In addition to structural data we furthermore consider temporal evolution of a network. Our approach applies the canonical Parafac decomposition to reduce tensor dimensionality and to retrieve latent trends. For the development and evaluation of our proposed link prediction algorithm we employed various popular datasets of online social networks like Facebook and Wikipedia. Our results show significant improvements for evolutionary networks in terms of prediction accuracy measured through mean average precision.",
"title": ""
}
] |
scidocsrr
|
a96fd40bc8fa60ddf253889bc2d2ab65
|
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding
|
[
{
"docid": "ea200dc100d77d8c156743bede4a965b",
"text": "We present a contextual spoken language understanding (contextual SLU) method using Recurrent Neural Networks (RNNs). Previous work has shown that context information, specifically the previously estimated domain assignment, is helpful for domain identification. We further show that other context information such as the previously estimated intent and slot labels are useful for both intent classification and slot filling tasks in SLU. We propose a step-n-gram model to extract sentence-level features from RNNs, which extract sequential features. The step-n-gram model is used together with a stack of Convolution Networks for training domain/intent classification. Our method therefore exploits possible correlations among domain/intent classification and slot filling and incorporates context information from the past predictions of domain/intent and slots. The proposed method obtains new state-of-the-art results on ATIS and improved performances over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.",
"title": ""
}
] |
[
{
"docid": "82acc0bf0fc3860255c77af5e45a31a0",
"text": "We propose a mobile food recognition system the poses of which are estimating calorie and nutritious of foods and recording a user's eating habits. Since all the processes on image recognition performed on a smart-phone, the system does not need to send images to a server and runs on an ordinary smartphone in a real-time way. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrubCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with linear SVM and fast 2 kernel. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, show it as an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved the 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained positive evaluation by user study compared to the food recording system without object recognition.",
"title": ""
},
{
"docid": "e189f36ba0fcb91d0608d0651c60516e",
"text": "In this paper, we describe the progressive design of the gesture recognition module of an automated food journaling system -- Annapurna. Annapurna runs on a smartwatch and utilises data from the inertial sensors to first identify eating gestures, and then captures food images which are presented to the user in the form of a food journal. We detail the lessons we learnt from multiple in-the-wild studies, and show how eating recognizer is refined to tackle challenges such as (i) high gestural diversity, and (ii) non-eating activities with similar gestural signatures. Annapurna is finally robust (identifying eating across a wide diversity in food content, eating styles and environments) and accurate (false-positive and false-negative rates of 6.5% and 3.3% respectively)",
"title": ""
},
{
"docid": "2e7a88fb1eef478393a99366ff7089c8",
"text": "Asbestos has been described as a physical carcinogen in that long thin fibers are generally more carcinogenic than shorter thicker ones. It has been hypothesized that long thin fibers disrupt chromosome behavior during mitosis, causing chromosome abnormalities which lead to cell transformation and neoplastic progression. Using high-resolution time lapse video-enhanced light microscopy and the uniquely suited lung epithelial cells of the newt Taricha granulosa, we have characterized for the first time the behavior of crocidolite asbestos fibers, and their interactions with chromosomes, during mitosis in living cells. We found that the keratin cage surrounding the mitotic spindle inhibited fiber migration, resulting in spindles with few fibers. As in interphase, fibers displayed microtubule-mediated saltatory movements. Fiber position was only slightly affected by the ejection forces of the spindle asters. Physical interactions between crocidolite fibers and chromosomes occurred randomly within the spindle and along its edge. Crocidolite fibers showed no affinity toward chromatin and most encounters ended with the fiber passively yielding to the chromosome. In a few encounters along the spindle edge the chromosome yielded to the fiber, which remained stationary as if anchored to the keratin cage. We suggest that fibers thin enough to be caught in the keratin cage and long enough to protrude into the spindle are those fibers with the ability to snag or block moving chromosomes.",
"title": ""
},
{
"docid": "09c5fdbd76b7e81ef95c8edcc367bce7",
"text": "Convolution Neural Networks (CNN), known as ConvNets are widely used in many visual imagery application, object classification, speech recognition. After the implementation and demonstration of the deep convolution neural network in Imagenet classification in 2012 by krizhevsky, the architecture of deep Convolution Neural Network is attracted many researchers. This has led to the major development in Deep learning frameworks such as Tensorflow, caffe, keras, theno. Though the implementation of deep learning is quite possible by employing deep learning frameworks, mathematical theory and concepts are harder to understand for new learners and practitioners. This article is intended to provide an overview of ConvNets architecture and to explain the mathematical theory behind it including activation function, loss function, feedforward and backward propagation. In this article, grey scale image is taken as input information image, ReLU and Sigmoid activation function are considered for developing the architecture and cross-entropy loss function is used for computing the difference between predicted value and actual value. The architecture is developed in such a way that it can contain one convolution layer, one pooling layer, and multiple dense layers.",
"title": ""
},
{
"docid": "8c0c7d6554f21b4cb5e155cf1e33a165",
"text": "Despite progress, early childhood development (ECD) remains a neglected issue, particularly in resource-poor countries. We analyse the challenges and opportunities that ECD proponents face in advancing global priority for the issue. We triangulated among several data sources, including 19 semi-structured interviews with individuals involved in global ECD leadership, practice, and advocacy, as well as peer-reviewed research, organisation reports, and grey literature. We undertook a thematic analysis of the collected data, drawing on social science scholarship on collective action and a policy framework that elucidates why some global initiatives are more successful in generating political priority than others. The analysis indicates that the ECD community faces two primary challenges in advancing global political priority. The first pertains to framing: generation of internal consensus on the definition of the problem and solutions, agreement that could facilitate the discovery of a public positioning of the issue that could generate political support. The second concerns governance: building of effective institutions to achieve collective goals. However, there are multiple opportunities to advance political priority for ECD, including an increasingly favourable political environment, advances in ECD metrics, and the existence of compelling arguments for investment in ECD. To advance global priority for ECD, proponents will need to surmount the framing and governance challenges and leverage these opportunities.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "7b526ab92e31c2677fd20022a8b46189",
"text": "Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.",
"title": ""
},
{
"docid": "cd0d425c8315a22ed9e52b8bdd489b52",
"text": "Data mining is an essential phase in knowledge discovery in database which is actually used to extract hidden patterns from large databases. Data mining concepts and methods can be applied in various fields like marketing, medicine, real estate, customer relationship management, engineering, web mining, etc. The main objective of this paper is to compare the performance accuracy of Multilayer perceptron (MLP) Artificial Neural Network and ID3 (Iterative Dichotomiser 3), C4.5 (also known as J48) Decision Trees algorithms Weka data mining software in predicting Typhoid fever. The data used is the patient’s dataset collected from a well known Nigerian Hospital. ID3, C4.5 Decision tree and MLP Artificial Neural Network WEKA Data mining software was used for the implementation. The data collected were transformed in a form that is acceptable to the data mining software and it was splitted into two sets: The training dataset and the testing dataset so that it can be imported into the system. The training set was used to enable the system to observe relationships between input data and the resulting outcomes in order to perform the prediction. The testing dataset contains data used to test the performance of the model. This model can be used by medical experts both in the private and public hospitals to make more timely and consistent diagnosis of typhoid fever cases which will reduce death rate in our country. The MLP ANN model exhibits good performance in the prediction of typhoid fever disease in general because of the low values generated in the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) and Relative Absolute Error (RAE) error performance measures. KeywordsID3, C4.5 , MLP, Decision Tree Artificial Neural Network, Typhoid fever African Journal of Computing & ICT Reference Format: O..O. Adeyemo, T. .O Adeyeye & D. Ogunbiyi (2015). Ccomparative Study of ID3/C4.5 Decision tree and Multilayer Perceptron Algorithms for the Prediction of Typhoid Fever. Afr J. of Comp & ICTs. Vol 8, No. 1. Pp 103-112.",
"title": ""
},
{
"docid": "2ea12a279b2a059399dcc62db2957ce5",
"text": "Alkaline pretreatment with NaOH under mild operating conditions was used to improve ethanol and biogas production from softwood spruce and hardwood birch. The pretreatments were carried out at different temperatures between minus 15 and 100oC with 7.0% w/w NaOH solution for 2 h. The pretreated materials were then enzymatically hydrolyzed and subsequently fermented to ethanol or anaerobically digested to biogas. In general, the pretreatment was more successful for both ethanol and biogas production from the hardwood birch than the softwood spruce. The pretreatment resulted in significant reduction of hemicellulose and the crystallinity of cellulose, which might be responsible for improved enzymatic hydrolyses of birch from 6.9% to 82.3% and spruce from 14.1% to 35.7%. These results were obtained with pretreatment at 100°C for birch and 5°C for spruce. Subsequently, the best ethanol yield obtained was 0.08 g/g of the spruce while pretreated at 100°C, and 0.17 g/g of the birch treated at 100°C. On the other hand, digestion of untreated birch and spruce resulted in methane yields of 250 and 30 l/kg VS of the wood species, respectively. The pretreatment of the wood species at the best conditions for enzymatic hydrolysis resulted in 83% and 74% improvement in methane production from birch and spruce.",
"title": ""
},
{
"docid": "d0b29493c64e787ed88ad8166d691c3d",
"text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps’ privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps’ compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps tha lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing such with third parties without disclosing so in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.",
"title": ""
},
{
"docid": "befc74d8dc478a67c009894c3ef963d3",
"text": "In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks.",
"title": ""
},
{
"docid": "d4b6be1c4d8dd37b71bf536441449ad5",
"text": "Why should wait for some days to get or receive the distributed computing fundamentals simulations and advanced topics book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This distributed computing fundamentals simulations and advanced topics is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "9df0df8eb4f71d8c6952e07a179b2ec4",
"text": "In interpersonal interactions, speech and body gesture channels are internally coordinated towards conveying communicative intentions. The speech-gesture relationship is influenced by the internal emotion state underlying the communication. In this paper, we focus on uncovering the emotional effect on the interrelation between speech and body gestures. We investigate acoustic features describing speech prosody (pitch and energy) and vocal tract configuration (MFCCs), as well as three types of body gestures, viz., head motion, lower and upper body motions. We employ mutual information to measure the coordination between the two communicative channels, and analyze the quantified speech-gesture link with respect to distinct levels of emotion attributes, i.e., activation and valence. The results reveal that the speech-gesture coupling is generally tighter for low-level activation and high-level valence, compared to high-level activation and low-level valence. We further propose a framework for modeling the dynamics of speech-gesture interaction. Experimental studies suggest that such quantified coupling representations can well discriminate different levels of activation and valence, reinforcing that emotions are encoded in the dynamics of the multimodal link. We also verify that the structures of the coupling representations are emotiondependent using subspace-based analysis.",
"title": ""
},
{
"docid": "97b7065942b53f2d873c80f32242cd00",
"text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, but not on DAG hierarchies. Moreover, it may lead to misleading predictions as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree-and DAG-structured label hierarchies. Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.",
"title": ""
},
{
"docid": "350d1717a5192873ef9e0ac9ed3efc7b",
"text": "OBJECTIVE\nTo describe the effects of percutaneously implanted valve-in-valve in the tricuspid position for patients with pre-existing transvalvular device leads.\n\n\nMETHODS\nIn this case series, we describe implantation of the Melody valve and SAPIEN XT valve within dysfunctional bioprosthetic tricuspid valves in three patients with transvalvular device leads.\n\n\nRESULTS\nIn all cases, the valve was successfully deployed and device lead function remained unchanged. In 1/3 cases with 6-month follow-up, device lead parameters remain unchanged and transcatheter valve-in-valve function remains satisfactory.\n\n\nCONCLUSIONS\nTranscatheter tricuspid valve-in-valve is feasible in patients with pre-existing transvalvular devices leads. Further study is required to determine the long-term clinical implications of this treatment approach.",
"title": ""
},
{
"docid": "fa22819c73c9f9cd2d0ee243a7450e76",
"text": "This dissertation describes a simulated autonomous car capable of driving on urbanstyle roads. The system is built around TORCS, an open source racing car simulator. Two real-time solutions are implemented; a reactive prototype using a neural network and a more complex deliberative approach using a sense, plan, act architecture. The deliberative system uses vision data fused with simulated laser range data to reliably detect road markings. The detected road markings are then used to plan a parabolic path and compute a safe speed for the vehicle. The vehicle uses a simulated global positioning/inertial measurement sensor to guide it along the desired path with the throttle, brakes, and steering being controlled using proportional controllers. The vehicle is able to reliably navigate the test track maintaining a safe road position at speeds of up to 40km/h.",
"title": ""
},
{
"docid": "b25b7100c035ad2953fb43087ede1625",
"text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.",
"title": ""
},
{
"docid": "9664431f0cfc22567e1e5c945f898595",
"text": "Anomaly detection aims to detect abnormal events by a model of normality. It plays an important role in many domains such as network intrusion detection, criminal activity identity and so on. With the rapidly growing size of accessible training data and high computation capacities, deep learning based anomaly detection has become more and more popular. In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to normal data distribution. Proper ensemble of anomaly scores is shown to improve the stability of discriminator effectively. The proposed method has achieved significant improvement than other anomaly detection methods on Cifar10 and UCI datasets.",
"title": ""
},
{
"docid": "281b0a108c1e8507f26381cc905ce9d1",
"text": "Extraction–Transform–Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.",
"title": ""
},
{
"docid": "a7b8986dbfde4a7ccc3a4ad6e07319a7",
"text": "This article tests expectations generated by the veto players theory with respect to the over time composition of budgets in a multidimensional policy space. The theory predicts that countries with many veto players (i.e., coalition governments, bicameral political systems, presidents with veto) will have difficulty altering the budget structures. In addition, countries that tend to make significant shifts in government composition will have commensurate modifications of the budget. Data collected from 19 advanced industrialized countries from 1973 to 1995 confirm these expectations, even when one introduces socioeconomic controls for budget adjustments like unemployment variations, size of retired population and types of government (minimum winning coalitions, minority or oversized governments). The methodological innovation of the article is the use of empirical indicators to operationalize the multidimensional policy spaces underlying the structure of budgets. The results are consistent with other analyses of macroeconomic outcomes like inflation, budget deficits and taxation that are changed at a slower pace by multiparty governments. The purpose of this article is to test empirically the expectations of the veto players theory in a multidimensional setting. The theory defines ‘veto players’ as individuals or institutions whose agreement is required for a change of the status quo. The basic prediction of the theory is that when the number of veto players and their ideological distances increase, policy stability also increases (only small departures from the status quo are possible) (Tsebelis 1995, 1999, 2000, 2002). The theory was designed for the study of unidimensional and multidimensional policy spaces. While no policy domain is strictly unidimensional, existing empirical tests have only focused on analyzing political economy issues in a single dimension. These studies have confirmed the veto players theory’s expectations (see Bawn (1999) on budgets; Hallerberg & Basinger (1998) on taxes; Tsebelis (1999) on labor legislation; Treisman (2000) on inflation; Franzese (1999) on budget deficits). This article is the first attempt to test whether the predictions of the veto players theory hold in multidimensional policy spaces. We will study a phenomenon that cannot be considered unidimensional: the ‘structure’ of budgets – that is, their percentage composition, and the change in this composition over © European Consortium for Political Research 2004 Published by Blackwell Publishing Ltd., 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA",
"title": ""
}
] |
scidocsrr
|
c210f30c1e3255ffe2487adf19bfd6b0
|
ICDAR 2003 robust reading competitions: entries, results, and future directions
|
[
{
"docid": "f3d86ca456bb9e97b090ea68a82be93b",
"text": "Many images—especially those used for page design on web pages—as well as videos contain visible text. If these text occurrences could be detected, segmented, and recognized automatically, they would be a valuable source of high-level semantics for indexing and retrieval. In this paper, we propose a novel method for localizing and segmenting text in complex images and videos. Text lines are identified by using a complex-valued multilayer feed-forward network trained to detect text at a fixed scale and position. The network’s output at all scales and positions is integrated into a single text-saliency map, serving as a starting point for candidate text lines. In the case of video, these candidate text lines are refined by exploiting the temporal redundancy of text in video. Localized text lines are then scaled to a fixed height of 100 pixels and segmented into a binary image with black characters on white background. For videos, temporal redundancy is exploited to improve segmentation performance. Input images and videos can be of any size due to a true multiresolution approach. Moreover, the system is not only able to locate and segment text occurrences into large binary images, but is also able to track each text line with sub-pixel accuracy over the entire occurrence in a video, so that one text bitmap is created for all instances of that text line. Therefore, our text segmentation results can also be used for object-based video encoding such as that enabled by MPEG-4.",
"title": ""
}
] |
[
{
"docid": "dddec8d72a4ed68ee47c0cc7f4f31dbd",
"text": "Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. In this tutorial we introduce a novel non-Bayesian approach, called Additive Regularization of Topic Models. ARTM is free of redundant probabilistic assumptions and provides a simple inference for many combined and multi-objective topic models.",
"title": ""
},
{
"docid": "8775af6029924a390cfb51aa17f99a2a",
"text": "Machine learning is increasingly used to make sense of the physical world yet may suffer from adversarial manipulation. We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that humans do not notice as faces yet the algorithm detects as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces yet no human would consider a face. Moreover, we show that it is possible to construct images that fool facial detection even when they are printed and then photographed.",
"title": ""
},
{
"docid": "44a84af55421c88347034d6dc14e4e30",
"text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.",
"title": ""
},
{
"docid": "e19d53b7ebccb3a1354bb6411182b1d3",
"text": "ERP implementation projects affect large parts of an implementing organization and lead to changes in the way an organization performs its tasks. The costs needed for the effort to implement these systems are hard to estimate. Research indicates that the size of an ERP project can be a useful measurement for predicting the effort required to complete an ERP implementation project. However, such a metric does not yet exist. Therefore research should be carried out to find a set of variables which can define the size of an ERP project. This paper describes a first step in such a project. It shows 21 logical clusters of ERP implementation project activities based on 405 ERP implementation project activities retrieved from literature. Logical clusters of ERP project activities can be used in further research to find variables for defining the size of an ERP project. IntroductIon Globalization has put pressure on organizations to perform as efficiently and effectively as possible in order to compete in the market. Structuring their internal processes and making them most efficient by integrated information systems is very important for that reason. In the 1990s, organizations started implementing ERP systems in order to replace their legacy systems and improve their business processes. This change is still being implemented. ERP is a key ingredient for gaining competitive advantage, streamlining operations, and having “lean” manufacturing (Mabert, Soni, & Venkataramanan, 2003). A study of Hendricks indicates that research shows some evidence of improvements in profitability after implementing ERP systems (Hendricks, Singhal, & Stratman, 2006). Forecasters predict a growth in the ERP market. 1847 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 1848 Sizing ERP Implementation Projects Several researchers also indicate that much research is still being carried out in this area ( Møller, Kræmmergaard, & Rikhardsson, 2004; Botta-Genoulaz, Millet, & Grabot, 2005). Although the research area is rather clearly defined, many topics still have to be researched and the usefulness of results for actual projects has to be designed. ERP projects are large and risky projects for organizations, because they affect great parts of the implementing organization and lead to changes in the way the organization performs its tasks. The costs needed for the effort to implement these systems are usually very high and also very hard to estimate. Many cases are documented where the actual required time and costs exceeded the budget, that is to say the estimated costs, many times. There are even cases where ERP implementation projects led to bankruptcy (Holland & Light, 1999; Scott, 1999). Francalanci states that software costs only represent a fraction of the overall cost of ERP projects within the total costs of the implementation project, that is to say, less than 10% over a 5-year period (Francalanci, 2001). In addition, Willis states that consultants alone can cost as much as or more than five times the cost of the software (Willis, Willis-Brown, & McMillan, 2001). This is confirmed by von Arb, who indicates that consultancy costs can be 2 to 4 times as much as software license costs (Arb, 1997). This indicates that the effort required for implementing an ERP system largely consists of effort-related costs. 
Von Arb also argues that license and hardware costs are fairly constant and predictable and that only a focus on reducing these effort-related costs is realistic. The conclusion is legitimate that the total effort is the most important and difficult factor to estimate in an ERP implementation project. Therefore, the main research of the authors only focuses on the estimation of the total effort required for implementing an ERP system. In every project there is a great uncertainty at the start, while at the end there is only a minor uncertainty (Meredith & Mantel, 2003). In the planning phase, the most important decisions are made that will affect the future of the organization as a whole. As described earlier, a failure to implement an ERP system can seriously affect the health of an organization and even lead to bankruptcy. This means that it would be of great help if a method would exist that could predict the effort required for implementing the ERP system within reasonable boundaries. The method should not be too complex and should be quick. Its outcomes should support the rough estimation of the project and serve as a starting point for the detailed planning in the set-up phase of the project phase and for the first allocation of the resources. Moreover, if conditions greatly change during a project, the method could be used to estimate the consequences for the remaining effort required for implementing the ERP system. The aim of this article is to answer which activities exist in ERP projects according to literature and how these can be clustered as a basis for defining the size of an ERP project. In the article, the approach and main goal of our research will first be described, followed by a literature review on ERP project activities. After that, we will present the clustering approach and results followed by conclusions and discussion.",
"title": ""
},
{
"docid": "b11a161588bd1a3d4d7cd78ecce4aa64",
"text": "This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up a VE into a configuration task, and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) based upon GERAM and created in the IMS GLOBEMEN project.",
"title": ""
},
{
"docid": "691cdea5cf3fae2713c721c1cfa8c132",
"text": "of the Dissertation Addressing the Challenges of Underspecification in Web Search",
"title": ""
},
{
"docid": "d40ac2e9a896e13ece11d7429fab3d80",
"text": "We present our recent work (ICS 2011) on dynamic environments in which computational nodes, or decision makers, follow simple and unsophisticated rules of behavior (e.g., repeatedly \"best replying\" to others' actions, and minimizing \"regret\") that have been extensively studied in game theory and economics. We aim to understand when convergence of the resulting dynamics to an equilibrium point is guaranteed if nodes' interaction is not synchronized (e.g., as in Internet protocols and large-scale markets). We take the first steps of this research agenda. We exhibit a general non-convergence result and consider its implications across a wide variety of interesting and timely applications: routing, congestion control, game theory, social networks and circuit design. We also consider the relationship between classical nontermination results in distributed computing theory and our result, explore the impact of scheduling on convergence, study the computational and communication complexity of asynchronous dynamics and present some basic observations regarding the effects of asynchrony on no-regret dynamics.",
"title": ""
},
{
"docid": "043306203de8365bd1930a9c0b4138c7",
"text": "In this paper, we compare two different methods for automatic Arabic speech recognition for isolated words and sentences. Isolated word/sentence recognition was performed using cepstral feature extraction by linear predictive coding, as well as Hidden Markov Models (HMM) for pattern training and classification. We implemented a new pattern classification method, where we used Neural Networks trained using the Al-Alaoui Algorithm. This new method gave comparable results to the already implemented HMM method for the recognition of words, and it has overcome HMM in the recognition of sentences. The speech recognition system implemented is part of the Teaching and Learning Using Information Technology (TLIT) project which would implement a set of reading lessons to assist adult illiterates in developing better reading capabilities.",
"title": ""
},
{
"docid": "a7f046dcc5e15ccfbe748fa2af400c98",
"text": "INTRODUCTION\nSmoking and alcohol use (beyond social norms) by health sciences students are behaviors contradictory to the social function they will perform as health promoters in their eventual professions.\n\n\nOBJECTIVES\nIdentify prevalence of tobacco and alcohol use in health sciences students in Mexico and Cuba, in order to support educational interventions to promote healthy lifestyles and development of professional competencies to help reduce the harmful impact of these legal drugs in both countries.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted using quantitative and qualitative techniques. Data were collected from health sciences students on a voluntary basis in both countries using the same anonymous self-administered questionnaire, followed by an in-depth interview.\n\n\nRESULTS\nPrevalence of tobacco use was 56.4% among Mexican students and 37% among Cuban. It was higher among men in both cases, but substantial levels were observed in women as well. The majority of both groups were regularly exposed to environmental tobacco smoke. Prevalence of alcohol use was 76.9% in Mexican students, among whom 44.4% were classified as at-risk users. Prevalence of alcohol use in Cuban students was 74.1%, with 3.7% classified as at risk.\n\n\nCONCLUSIONS\nThe high prevalence of tobacco and alcohol use in these health sciences students is cause for concern, with consequences not only for their individual health, but also for their professional effectiveness in helping reduce these drugs' impact in both countries.",
"title": ""
},
{
"docid": "c5731d7290f1ab073c12bf67101a386a",
"text": "Convolutional neural networks have emerged as the leading method for the classification and segmentation of images. In some cases, it is desirable to focus the attention of the net on a specific region in the image; one such case is the recognition of the contents of transparent vessels, where the vessel region in the image is already known. This work presents a valve filter approach for focusing the attention of the net on a region of interest (ROI). In this approach, the ROI is inserted into the net as a binary map. The net uses a different set of convolution filters for the ROI and background image regions, resulting in a different set of features being extracted from each region. More accurately, for each filter used on the image, a corresponding valve filter exists that acts on the ROI map and determines the regions in which the corresponding image filter will be used. This valve filter effectively acts as a valve that inhibits specific features in different image regions according to the ROI map. In addition, a new data set for images of materials in glassware vessels in a chemistry laboratory setting is presented. This data set contains a thousand images with pixel-wise annotation according to categories ranging from filled and empty to the exact phase of the material inside the vessel. The results of the valve filter approach and fully convolutional neural nets (FCN) with no ROI input are compared based on this data set.",
"title": ""
},
{
"docid": "e1e1fcc7a732e5b2835c5a137722b3ee",
"text": "Regular expression matching is a crucial task in several networking applications. Current implementations are based on one of two types of finite state machines. Non-deterministic finite automata (NFAs) have minimal storage demand but have high memory bandwidth requirements. Deterministic finite automata (DFAs) exhibit low and deterministic memory bandwidth requirements at the cost of increased memory space. It has already been shown how the presence of wildcards and repetitions of large character classes can render DFAs and NFAs impractical. Additionally, recent security-oriented rule-sets include patterns with advanced features, namely back-references, which add to the expressive power of traditional regular expressions and cannot therefore be supported through classical finite automata.\n In this work, we propose and evaluate an extended finite automaton designed to address these shortcomings. First, the automaton provides an alternative approach to handle character repetitions that limits memory space and bandwidth requirements. Second, it supports back-references without the need for back-tracking in the input string. In our discussion of this proposal, we address practical implementation issues and evaluate the automaton on real-world rule-sets. To our knowledge, this is the first high-speed automaton that can accommodate all the Perl-compatible regular expressions present in the Snort network intrusion and detection system.",
"title": ""
},
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3300e4e29d160fb28861ac58740834b5",
"text": "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene/P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction arruracy? To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene/P.",
"title": ""
},
{
"docid": "807b1a6a389788d598c5c0ec11b336ab",
"text": "One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. handengineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "42cf4bd800000aed5e0599cba52ba317",
"text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44 % of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.",
"title": ""
},
{
"docid": "aefa758e6b5681c213150ed674eae915",
"text": "This paper presents a solution to automatically recognize the correct left/right and upright/upside-down orientation of iris images. This solution can be used to counter spoofing attacks directed to generate fake identities by rotating an iris image or the iris sensor during the acquisition. Two approaches are compared on the same data, using the same evaluation protocol: 1) feature engineering, using hand-crafted features classified by a support vector machine (SVM) and 2) feature learning, using data-driven features learned and classified by a convolutional neural network (CNN). A data set of 20 750 iris images, acquired for 103 subjects using four sensors, was used for development. An additional subject-disjoint data set of 1,939 images, from 32 additional subjects, was used for testing purposes. Both same-sensor and cross-sensor tests were carried out to investigate how the classification approaches generalize to unknown hardware. The SVM-based approach achieved an average correct classification rate above 95% (89%) for recognition of left/right (upright/upside-down) orientation when tested on subject-disjoint data and camera-disjoint data, and 99% (97%) if the images were acquired by the same sensor. The CNN-based approach performed better for same-sensor experiments, and presented slightly worse generalization capabilities to unknown sensors when compared with the SVM. We are not aware of any other papers on the automatic recognition of upright/upside-down orientation of iris images, or studying both hand-crafted and data-driven features in same-sensor and cross-sensor subject-disjoint experiments. The data sets used in this paper, along with random splits of the data used in cross-validation, are being made available.",
"title": ""
},
{
"docid": "26db4ecbc2ad4b8db0805b06b55fe27d",
"text": "The advent of high voltage (HV) wide band-gap power semiconductor devices has enabled the medium voltage (MV) grid tied operation of non-cascaded neutral point clamped (NPC) converters. This results in increased power density, efficiency as well as lesser control complexity. The multi-chip 15 kV/40 A SiC IGBT and 15 kV/20 A SiC MOSFET are two such devices which have gained attention for MV grid interface applications. Such converters based on these devices find application in active power filters, STATCOM or as active front end converters for solid state transformers. This paper presents an experimental comparative evaluation of these two SiC devices for 3-phase grid connected applications using a 3-level NPC converter as reference. The IGBTs are generally used for high power applications due to their lower conduction loss while MOSFETs are used for high frequency applications due to their lower switching loss. The thermal performance of these devices are compared based on device loss characteristics, device heat-run tests, 3-level pole heat-run tests, PLECS thermal simulation based loss comparison and MV experiments on developed hardware prototypes. The impact of switching frequency on the harmonic control of the grid connected converter is also discussed and suitable device is selected for better grid current THD.",
"title": ""
},
{
"docid": "d9160f2cc337de729af34562d77a042e",
"text": "Ontologies proliferate with the progress of the Semantic Web. Ontology matching is an important way of establishing interoperability between (Semantic) Web applications that use different but related ontologies. Due to their sizes and monolithic nature, large ontologies regarding real world domains bring a new challenge to the state of the art ontology matching technology. In this paper, we propose a divide-and-conquer approach to matching large ontologies. We develop a structure-based partitioning algorithm, which partitions entities of each ontology into a set of small clusters and constructs blocks by assigning RDF Sentences to those clusters. Then, the blocks from different ontologies are matched based on precalculated anchors, and the block mappings holding high similarities are selected. Finally, two powerful matchers, V-DOC and GMO, are employed to discover alignments in the block mappings. Comprehensive evaluation on both synthetic and real world data sets demonstrates that our approach both solves the scalability problem and achieves good precision and recall with significant reduction of execution time. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4cff5279110ff2e45060f3ccec7d51ba",
"text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)",
"title": ""
}
] |
scidocsrr
|
9075dc1f6297ae56988ab18f77b78e9f
|
Activity Recognition using Actigraph Sensor
|
[
{
"docid": "d62bded822aff38333a212ed1853b53c",
"text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated",
"title": ""
}
] |
[
{
"docid": "ca8b1080c8e1d6d234d12370f47d7874",
"text": "Alcelaphine herpesvirus-1 (AlHV-1), a causative agent of malignant catarrhal fever in cattle, was detected in wildebeest (Connochaetes taurinus) placenta tissue for the first time. Although viral load was low, the finding of viral DNA in over 50% of 94 samples tested lends support to the possibility that placental tissue could play a role in disease transmission and that wildebeest calves are infected in utero. Two viral loci were sequenced to examine variation among virus samples obtained from wildebeest and cattle: the ORF50 gene, encoding the lytic cycle transactivator protein, and the A9.5 gene, encoding a novel polymorphic viral glycoprotein. ORF50 was well conserved with six newly discovered alleles differing at only one or two base positions. In contrast, while only three new A9.5 alleles were discovered, these differed by up to 13% at the nucleotide level and up to 20% at the amino acid level. Structural homology searching performed with the additional A9.5 sequences determined in this study adds power to recent analysis identifying the four-helix bundle cytokine interleukin-4 (IL4) as the major homologue. The majority of MCF virus samples obtained from Tanzanian cattle and wildebeest encoded A9.5 polypeptides identical to the previously characterized A9.5 allele present in the laboratory maintained AlHV-1 C500 strain. This supports the view that AlHV-1 C500 is suitable for the development of a vaccine for wildebeest-associated MCF.",
"title": ""
},
{
"docid": "2a487ff4b9218900e9a0e480c23e4c25",
"text": "5.1 CONVENTIONAL ACTUATORS, SHAPE MEMORY ALLOYS, AND ELECTRORHEOLOGICAL FLUIDS ............................................................................................................................................................. 1 5.1.",
"title": ""
},
{
"docid": "6da632d61dbda324da5f74b38f25b1b9",
"text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.",
"title": ""
},
{
"docid": "e1fb515f0f5bbec346098f1ee2aaefdc",
"text": "Observing failures and other – desired or undesired – behavior patterns in large scale software systems of specific domains (telecommunication systems, information systems, online web applications, etc.) is difficult. Very often, it is only possible by examining the runtime behavior of these systems through operational logs or traces. However, these systems can generate data in order of gigabytes every day, which makes a challenge to process in the course of predicting upcoming critical problems or identifying relevant behavior patterns. We can say that there is a gap between the amount of information we have and the amount of information we need to make a decision. Low level data has to be processed, correlated and synthesized in order to create high level, decision helping data. The actual value of this high level data lays in its availability at the time of decision making (e.g., do we face a virus attack?). In other words high level data has to be available real-time or near real-time. The research area of event processing deals with processing such data that are viewed as events and with making alerts to the administrators (users) of the systems about relevant behavior patterns based on the rules that are determined in advance. The rules or patterns describe the typical circumstances of the events which have been experienced by the administrators. Normally, these experts improve their observation capabilities over time as they experience more and more critical events and the circumstances preceding them. However, there is a way to aid this manual process by applying the results from a related (and from many aspects, overlapping) research area, predictive analytics, and thus improving the effectiveness of event processing. Predictive analytics deals with the prediction of future events based on previously observed historical data by applying sophisticated methods like machine learning, the historical data is often collected and transformed by using techniques similar to the ones of event processing, e.g., filtering, correlating the data, and so on. In this paper, we are going to examine both research areas and offer a survey on terminology, research achievements, existing solutions, and open issues. We discuss the applicability of the research areas to the telecommunication domain. We primarily base our survey on articles published in international conferences and journals, but we consider other sources of information as well, like technical reports, tools or web-logs.",
"title": ""
},
{
"docid": "7210c2e82441b142f722bcc01bfe9aca",
"text": "In the beginning of the last decade, agile methodologies emerged as a response to software development processes that were based on rigid approaches. In fact, the flexible characteristics of agile methods are expected to be suitable to the less-defined and uncertain nature of software development. However, many studies in this area lack empirical evaluation in order to provide more confident evidences about which contexts the claims are true. This paper reports an empirical study performed to analyze the impact of Scrum adoption on customer satisfaction as an external success perspective for software development projects in a software intensive organization. The study uses data from real-life projects executed in a major software intensive organization located in a nation wide software ecosystem. The empirical method applied was a cross-sectional survey using a sample of 19 real-life software development projects involving 156 developers. The survey aimed to determine whether there is any impact on customer satisfaction caused by the Scrum adoption. However, considering that sample, our results indicate that it was not possible to establish any evidence that using Scrum may help to achieve customer satisfaction and, consequently, increase the success rates in software projects, in contrary to general claims made by Scrum's advocates.",
"title": ""
},
{
"docid": "f0242a2a54b1c4538abdd374c74f69f6",
"text": "Background: An increasing research effort has devoted to just-in-time (JIT) defect prediction. A recent study by Yang et al. at FSE'16 leveraged individual change metrics to build unsupervised JIT defect prediction model. They found that many unsupervised models performed similarly to or better than the state-of-the-art supervised models in effort-aware JIT defect prediction. Goal: In Yang et al.'s study, code churn (i.e. the change size of a code change) was neglected when building unsupervised defect prediction models. In this study, we aim to investigate the effectiveness of code churn based unsupervised defect prediction model in effort-aware JIT defect prediction. Methods: Consistent with Yang et al.'s work, we first use code churn to build a code churn based unsupervised model (CCUM). Then, we evaluate the prediction performance of CCUM against the state-of-the-art supervised and unsupervised models under the following three prediction settings: cross-validation, time-wise cross-validation, and cross-project prediction. Results: In our experiment, we compare CCUM against the state-of-the-art supervised and unsupervised JIT defect prediction models. Based on six open-source projects, our experimental results show that CCUM performs better than all the prior supervised and unsupervised models. Conclusions: The result suggests that future JIT defect prediction studies should use CCUM as a baseline model for comparison when a novel model is proposed.",
"title": ""
},
{
"docid": "96c3c7f605f7ca763df0710629edd726",
"text": "This study underlines the importance of cinnamon, a widely-used food spice and flavoring material, and its metabolite sodium benzoate (NaB), a widely-used food preservative and a FDA-approved drug against urea cycle disorders in humans, in increasing the levels of neurotrophic factors [e.g., brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NT-3)] in the CNS. NaB, but not sodium formate (NaFO), dose-dependently induced the expression of BDNF and NT-3 in primary human neurons and astrocytes. Interestingly, oral administration of ground cinnamon increased the level of NaB in serum and brain and upregulated the levels of these neurotrophic factors in vivo in mouse CNS. Accordingly, oral feeding of NaB, but not NaFO, also increased the level of these neurotrophic factors in vivo in the CNS of mice. NaB induced the activation of protein kinase A (PKA), but not protein kinase C (PKC), and H-89, an inhibitor of PKA, abrogated NaB-induced increase in neurotrophic factors. Furthermore, activation of cAMP response element binding (CREB) protein, but not NF-κB, by NaB, abrogation of NaB-induced expression of neurotrophic factors by siRNA knockdown of CREB and the recruitment of CREB and CREB-binding protein to the BDNF promoter by NaB suggest that NaB exerts its neurotrophic effect through the activation of CREB. Accordingly, cinnamon feeding also increased the activity of PKA and the level of phospho-CREB in vivo in the CNS. These results highlight a novel neutrophic property of cinnamon and its metabolite NaB via PKA – CREB pathway, which may be of benefit for various neurodegenerative disorders.",
"title": ""
},
{
"docid": "20adf89d9301cdaf64d8bf684886de92",
"text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially 3 lead the way to developing a suite of other network related spatial analysis and modeling methods.",
"title": ""
},
{
"docid": "2d4c99f3ff7a19580f9f012da99a8348",
"text": "OBJECTIVES\nTo compare the effectiveness of a mixture of acacia fiber, psyllium fiber, and fructose (AFPFF) with polyethylene glycol 3350 combined with electrolytes (PEG+E) in the treatment of children with chronic functional constipation (CFC); and to evaluate the safety and effectiveness of AFPFF in the treatment of children with CFC.\n\n\nSTUDY DESIGN\nThis was a randomized, open label, prospective, controlled, parallel-group study involving 100 children (M/F: 38/62; mean age ± SD: 6.5 ± 2.7 years) who were diagnosed with CFC according to the Rome III Criteria. Children were randomly divided into 2 groups: 50 children received AFPFF (16.8 g daily) and 50 children received PEG+E (0.5 g/kg daily) for 8 weeks. Primary outcome measures were frequency of bowel movements, stool consistency, fecal incontinence, and improvement of other associated gastrointestinal symptoms. Safety was assessed with evaluation of clinical adverse effects and growth measurements.\n\n\nRESULTS\nCompliance rates were 72% for AFPFF and 96% for PEG+E. A significant improvement of constipation was seen in both groups. After 8 weeks, 77.8% of children treated with AFPFF and 83% of children treated with PEG+E had improved (P = .788). Neither PEG+E nor AFPFF caused any clinically significant side effects during the entire course of the study period.\n\n\nCONCLUSIONS\nIn this randomized study, we did not find any significant difference between the efficacy of AFPFF and PEG+E in the treatment of children with CFC. Both medications were proved to be safe for CFC treatment, but PEG+E was better accepted by children.",
"title": ""
},
{
"docid": "61096a0d1e94bb83f7bd067b06d69edd",
"text": "A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of “overfitting”, defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, “slow” convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the normalized weight matrix at each layer of a deep network converges to a minimum norm solution (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for predicting the generalization performance of different zero minimizers of the empirical loss. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. ar X iv :1 80 6. 11 37 9v 1 [ cs .L G ] 2 9 Ju n 20 18 Theory IIIb: Generalization in Deep Networks Tomaso Poggio ∗1, Qianli Liao1, Brando Miranda1, Andrzej Banburski1, Xavier Boix1, and Jack Hidary2 1Center for Brains, Minds and Machines, MIT 2Alphabet (Google) X",
"title": ""
},
{
"docid": "76ede41b63f6c960729228c505026851",
"text": "Although the hip musculature is found to be very important in connecting the core to the lower extremities and in transferring forces from and to the core, it is proposed to leave the hip musculature out of consideration when talking about the concept of core stability. A low level of co-contraction of the trunk muscles is important for core stability. It provides a level of stiffness, which gives sufficient stability against minor perturbations. Next to this stiffness, direction-specific muscle reflex responses are also important in providing core stability, particularly when encountering sudden perturbations. It appears that most trunk muscles, both the local and global stabilization system, must work coherently to achieve core stability. The contributions of the various trunk muscles depend on the task being performed. In the search for a precise balance between the amount of stability and mobility, the role of sensory-motor control is much more important than the role of strength or endurance of the trunk muscles. The CNS creates a stable foundation for movement of the extremities through co-contraction of particular muscles. Appropriate muscle recruitment and timing is extremely important in providing core stability. No clear evidence has been found for a positive relationship between core stability and physical performance and more research in this area is needed. On the other hand, with respect to the relationship between core stability and injury, several studies have found an association between a decreased stability and a higher risk of sustaining a low back or knee injury. Subjects with such injuries have been shown to demonstrate impaired postural control, delayed muscle reflex responses following sudden trunk unloading and abnormal trunk muscle recruitment patterns. In addition, various relationships have been demonstrated between core stability, balance performance and activation characteristics of the trunk muscles. Most importantly, a significant correlation was found between poor balance performance in a sitting balance task and delayed firing of the trunk muscles during sudden perturbation. It was suggested that both phenomena are caused by proprioceptive deficits. The importance of sensory-motor control has implications for the development of measurement and training protocols. It has been shown that challenging propriocepsis during training activities, for example, by making use of unstable surfaces, leads to increased demands on trunk muscles, thereby improving core stability and balance. Various tests to directly or indirectly measure neuromuscular control and coordination have been developed and are discussed in the present article. Sitting balance performance and trunk muscle response times may be good indicators of core stability. In light of this, it would be interesting to quantify core stability using a sitting balance task, for example by making use of accelerometry. Further research is required to develop training programmes and evaluation methods that are suitable for various target groups.",
"title": ""
},
{
"docid": "ce32b34898427802abd4cc9c99eac0bc",
"text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.",
"title": ""
},
{
"docid": "9504571e66ea9071c6c227f61dfba98f",
"text": "Recent research has shown that although Reinforcement Learning (RL) can benefit from expert demonstration, it usually takes considerable efforts to obtain enough demonstration. The efforts prevent training decent RL agents with expert demonstration in practice. In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training. Under the framework, we propose Active Deep Q-Network, a novel query strategy which adapts to the dynamically-changing distributions during the RL training process by estimating the uncertainty of recent states. The expert demonstration data within Active DQN are then utilized by optimizing supervised max-margin loss in addition to temporal difference loss within usual DQN training. We propose two methods of estimating the uncertainty based on two state-of-the-art DQN models, namely the divergence of bootstrapped DQN and the variance of noisy DQN. The empirical results validate that both methods not only learn faster than other passive expert demonstration methods with the same amount of demonstration and but also reach super-expert level of performance across four different tasks.",
"title": ""
},
{
"docid": "1c9eb6b002b36e2607cc63e08151ee65",
"text": "Qualitative trend analysis (QTA) is a process-history-based data-driven technique that works by extracting important features (trends) from the measured signals and evaluating the trends. QTA has been widely used for process fault detection and diagnosis. Recently, Dash et al. (2001, 2003) presented an intervalhalving-based algorithm for off-line automatic trend extraction from a record of data, a fuzzy-logic based methodology for trend-matching and a fuzzy-rule-based framework for fault diagnosis (FD). In this article, an algorithm for on-line extraction of qualitative trends is proposed. A framework for on-line fault diagnosis using QTA also has been presented. Some of the issues addressed are (i) development of a robust and computationally efficient QTA-knowledge-base, (ii) fault detection, (iii) estimation of the fault occurrence time, (iv) on-line trend-matching and (v) updating the QTA-knowledge-base when a novel fault is diagnosed manually. Some results for FD of the Tennessee Eastman (TE) process using the developed framework are presented. Copyright c 2003 IFAC.",
"title": ""
},
{
"docid": "490114176c31592da4cac2bcf75f31f3",
"text": "In this letter, we present a compact ultrawideband (UWB) antenna printed on a 50.8-μm Kapton polyimide substrate. The antenna is fed by a linearly tapered coplanar waveguide (CPW) that provides smooth transitional impedance for improved matching. The proposed design is tuned to cover the 2.2-14.3-GHz frequency range that encompasses both the 2.45-GHz Industrial, Scientific, Medical (ISM) band and the standard 3.1-10.6-GHz UWB band. Furthermore, the antenna is compared to a conventional CPW-fed antenna to demonstrate the significance of the proposed design. A parametric study is first performed on the feed of the proposed design to achieve the desired impedance matching. Next, a prototype is fabricated; measurement results show good agreement with the simulated model. Moreover, the antenna demonstrates a very low susceptibility to performance degradation due to bending effects in terms of impedance matching and far-field radiation patterns, which makes it suitable for integration within modern flexible electronic devices.",
"title": ""
},
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
},
{
"docid": "7419fa101c2471e225c976da196ed813",
"text": "A 4×40 Gb/s collaborative digital CDR is implemented in 28nm CMOS. The CDR is capable of recovering a low jitter clock from a partially-equalized or un-equalized eye by using a phase detection scheme that inherently filters out ISI edges. The CDR uses split feedback that simultaneously allows wider bandwidth and lower recovered clock jitter. A shared frequency tracking is also introduced that results in lower periodic jitter. Combining these techniques the CDR recovers a 10GHz clock from an eye containing 0.8UIpp DDJ and still achieves 1-10 MHz of tracking bandwidth while adding <; 300fs of jitter. Per lane CDR occupies only .06 mm2 and consumes 175 mW.",
"title": ""
},
{
"docid": "2f60e3d89966d4680796c1e4355de4bc",
"text": "This letter addresses the problem of energy detection of an unknown signal over a multipath channel. It starts with the no-diversity case, and presents some alternative closed-form expressions for the probability of detection to those recently reported in the literature. Detection capability is boosted by implementing both square-law combining and square-law selection diversity schemes",
"title": ""
},
{
"docid": "956ffd90cc922e77632b8f9f79f42a98",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
},
{
"docid": "d7ec0f978b066686edf9b930492dae71",
"text": "The association between MMORPG play (World of Warcraft) and psychological wellbeing was explored through a cross sectional, online questionnaire design testing the relationship between average hours playing per week and psychological wellbeing. Play motivation including achievement, social interaction and immersion as well as problematic use were tested as mediating variables. Participants (N = 565) completed online measures including demographics and play time, health, motivations to play and problematic use. Analysis revealed a negative correlation between playing time and psychological wellbeing. A Multiple Mediation Model showed the relationship specifically occurred where play was motivated by Immersion and/or where play was likely to have become problematic. No evidence of a direct effect of play on psychological wellbeing was found when taking these mediating pathways into account. Clinical and research implications are discussed.",
"title": ""
}
] |
scidocsrr
|
728b67b9387e9c182a914936ed0c9f88
|
Tree-based Bayesian Mixture Model for Competing Risks
|
[
{
"docid": "56db9e027eb9ca536a2ef8cec9b53beb",
"text": "Multiple hypothesis testing is concerned with controlling the rate of false positives when testing several hypotheses simultaneously. One multiple hypothesis testing error measure is the false discovery rate (FDR), which is loosely defined to be the expected proportion of false positives among all significant hypotheses. The FDR is especially appropriate for exploratory analyses in which one is interested in finding several significant results among many tests. In this work, we introduce a modified version of the FDR called the “positive false discovery rate” (pFDR). We discuss the advantages and disadvantages of the pFDR and investigate its statistical properties. When assuming the test statistics follow a mixture distribution, we show that the pFDR can be written as a Bayesian posterior probability and can be connected to classification theory. These properties remain asymptotically true under fairly general conditions, even under certain forms of dependence. Also, a new quantity called the “q-value” is introduced and investigated, which is a natural “Bayesian posterior p-value,” or rather the pFDR analogue of the p-value.",
"title": ""
}
] |
[
{
"docid": "271f6291ab2c97b5e561cf06b9131f9d",
"text": "Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4% in terms of the top-1 and 94.0% in terms of the top-5 accuracy on the validation set.",
"title": ""
},
{
"docid": "89318aa5769daa08a67ae7327c458e8e",
"text": "The present thesis is concerned with the development and evaluation (in terms of accuracy and utility) of systems using hand postures and hand gestures for enhanced Human-Computer Interaction (HCI). In our case, these systems are based on vision techniques, thus only requiring cameras, and no other specific sensors or devices. When dealing with hand movements, it is necessary to distinguish two aspects of these hand movements : the static aspect and the dynamic aspect. The static aspect is characterized by a pose or configuration of the hand in an image and is related to the Hand Posture Recognition (HPR) problem. The dynamic aspect is defined either by the trajectory of the hand, or by a series of hand postures in a sequence of images. This second aspect is related to the Hand Gesture Recognition (HGR) task. Given the recognized lack of common evaluation databases in the HGR field, a first contribution of this thesis was the collection and public distribution of two databases, containing both oneand two-handed gestures, which part of the results reported here will be based upon. On these databases, we compare two state-of-the-art models for the task of HGR. As a second contribution, we propose a HPR technique based on a new feature extraction. This method has the advantage of being faster than conventional methods while yielding good performances. In addition, we provide comparison results of this method with other state-of-the-art technique. Finally, the most important contribution of this thesis lies in the thorough study of the state-of-the-art not only in HGR and HPR but also more generally in the field of HCI. The first chapter of the thesis provides an extended study of the state-of-the-art. The second chapter of this thesis contributes to HPR. We propose to apply for HPR a technique employed with success for face detection. This method is based on the Modified Census Transform (MCT) to extract relevant features in images. We evaluate this technique on an existing benchmark database and provide comparison results with other state-of-the-art approaches. The third chapter is related to HGR. In this chapter we describe the first recorded database, containing both oneand two-handed gestures in the 3D space. We propose to compare two models used with success in HGR, namely Hidden Markov Models (HMM) and Input-Output Hidden Markov Model (IOHMM). The fourth chapter is also focused on HGR but more precisely on two-handed gesture recognition. For that purpose, a second database has been recorded using two cameras. The goal of these gestures is to manipulate virtual objects on a screen. We propose to investigate on this second database the state-of-the-art sequence processing techniques we used in the previous chapter. We then discuss the results obtained using different features, and using images of one or two cameras. In conclusion, we propose a method for HPR based on new feature extraction. For HGR, we provide two databases and comparison results of two major sequence processing techniques. Finally, we present a complete survey on recent state-of-the-art techniques for both HPR and HGR. We also present some possible applications of these techniques, applied to two-handed gesture interaction. We hope this research will open new directions in the field of hand posture and gesture recognition.",
"title": ""
},
{
"docid": "4f323f6591079882eed52a1549f6e66a",
"text": "General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, the results suggest that our proposed screen capture learning agent has the potential to learn many different games using only a single learning algorithm.",
"title": ""
},
{
"docid": "5b84008df77e2ff8929cd759ae92de7d",
"text": "Purpose – Organizations invest in enterprise systems (ESs) with an expectation to share digital information from disparate sources to improve organizational effectiveness. This study aims to examine how organizations realize digital business strategies using an ES. It does so by evaluating the ES data support activities for knowledge creation, particularly how ES data are transformed into corporate knowledge in relevance to business strategies sought. Further, how this knowledge leads to realization of the business benefits. The linkage between establishing digital business strategy, utilization of ES data in decision-making processes, and realized or unrealized benefits provides the reason for this study. Design/methodology/approach – This study develops and utilizes a transformational model of how ES data are transformed into knowledge and results to evaluate the role of digital business strategies in achieving benefits using an ES. Semi-structured interviews are first conducted with ES vendors, consultants and IT research firms to understand the process of ES data transformation for realizing business strategies from their perspective. This is followed by three in-depth cases (two large and one medium-sized organization) who have implemented ESs. The empirical data are analyzed using the condensation approach. This method condenses the data into multiple groups according to pre-defined categories, which follow the scope of the research questions. Findings – The key findings emphasize that strategic benefit realization from an ES implementation is a holistic process that not only includes the essential data and technology factors, but also includes factors such as digital business strategy deployment, people and process management, and skills and competency development. Although many companies are mature with their ES implementation, these firms have only recently started aligning their ES capabilities with digital business strategies correlating data, decisions, and actions to maximize business value from their ES investment. Research limitations/implications – The findings reflect the views of two large and one mediumsized organization in the manufacturing sector. Although the evidence of the benefit realization process success and its results is more prominent in larger organizations than medium-sized, it may not be generalized that smaller firms cannot achieve these results. Exploration of these aspects in smaller firms or a different industry sector such as retail/service would be of value. Practical implications – The paper highlights the importance of tools and practices for accessing relevant information through an integrated ES so that competent decisions can be established towards achieving digital business strategies, and optimizing organizational performance. Knowledge is a key factor in this process. Originality/value – The paper evaluates a holistic framework for utilization of ES data in realizing digital business strategies. Thus, it develops an enhanced transformational cycle model for ES data transformation into knowledge and results, which maintains to build up the transformational process success in the long term.",
"title": ""
},
{
"docid": "301fc0a18bec8128165ec73e15e66eb1",
"text": "data structure queries (A). Some queries check properties of abstract data struct [11][131] such as stacks, hash tables, trees, and so on. These queries are not domain because the data structures can hold data of any domain. These queries are also differ the programming construct queries, because they check the constraints of well-defined a data structures. For example, a query about a binary tree may find the number of its nod have only one child. On the other hand, programming construct queries usually span di data structures. Abstract data structure queries can usually be expressed as class invar could be packaged with the class that implements an ADT. However, the queries that p information rather than detect violations are best answered by dynamic queries. For ex monitoring B+ trees using queries may indicate whether this data structure is efficient f underlying problem. Program construct queries (P). Program construct queries verify object relationships that related to the program implementation and not directly to the problem domain. Such q verify and visualize groups of objects that have to conform to some constraints because lower level of program design and implementation. For example, in a graphical user int implementation, every window object has a parent window, and this window referenc children widgets through the widget_collection collection (section 5.2.2). Such construct is n",
"title": ""
},
{
"docid": "32775ba6d1a26274eaa6ce92513d9850",
"text": "Data reduction plays an important role in machine learning and pattern recognition with a high-dimensional data. In real-world applications data usually exists with hybrid formats, and a unified data reducing technique for hybrid data is desirable. In this paper, an information measure is proposed to computing discernibility power of a crisp equivalence relation or a fuzzy one, which is the key concept in classical rough set model and fuzzy-rough set model. Based on the information measure, a general definition of significance of nominal, numeric and fuzzy attributes is presented. We redefine the independence of hybrid attribute subset, reduct, and relative reduct. Then two greedy reduction algorithms for unsupervised and supervised data dimensionality reduction based on the proposed information measure are constructed. Experiments show the reducts found by the proposed algorithms get a better performance compared with classical rough set approaches. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "827c9d65c2c3a2a39d07c9df7a21cfe2",
"text": "A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the \"glue factor\" in complete solutions. This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research.",
"title": ""
},
{
"docid": "accad42ca98cd758fd1132e51942cba8",
"text": "The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.",
"title": ""
},
{
"docid": "ee0a1a7c7a8f2c42969b2beb09d7f94e",
"text": "Currently, electric vehicle technology is becoming more and more mature. Although the anti-lock braking system (ABS) has been commonly applied, most electric vehicles (EVs) still use traditional hydraulic-based disc braking, which has the drawbacks that vehicle wheels are is easy to skid in the rainy day, and easy to be abraded during emergency brake. As a novel method of braking, regenerative braking has the advantages of compact structure, sensitive response, reliability and controllable braking distance. In this research task, a regenerative driving and braking control system for EVs with satisfactory braking performance is proposed. When braking, a motor is converted into a generator-the acquired energy can be used to generate reverse magnetic braking torque with fast response. On this basis, an anti-lock braking controller is realize. A PID controller is also designed to drive the motor and a fuzzy slip ratio controller is designed and used to obtain the optimal slip ratio. Finally, real-world experiments are conducted to verify the proposed method.",
"title": ""
},
{
"docid": "b146013415b3ca19eee9ffef15155fe4",
"text": "48 nm pitch dual damascene interconnects are patterned and filled with ruthenium. Ru interconnect has comparable high yield for line and via macros. Electrical results show minimal impact for via resistance and around 2 times higher line resistance. Resistivity and cross section area of Ru interconnects are measured by temperature coefficient of resistivity method and the area was verified by TEM. Reliability results show non-failure in electromigration and longer time dependent dielectric breakdown. Based on the data collected, Ru could be a metallization contender at linewidth of 16 nm and below.",
"title": ""
},
{
"docid": "96ee31337d66b8ccd3876c1575f9b10c",
"text": "Although different modeling techniques have been proposed during the last 300 years, the differential equation formalism proposed by Newton and Leibniz has been the tool of choice for modeling and problem solving Taylor (1996); Wainer (2009). Differential equations provide a formal mathematical method (sometimes also called an analytical method) for studying the entity of interest. Computational methods based on differential equations could not be easily applied in studying human-made dynamic systems (e.g., traffic controllers, robotic arms, automated factories, production plants, computer networks, VLSI circuits). These systems are usually referred to as discrete event systems because their states do not change continuously but, rather, because of the occurrence of events. This makes them asynchronous, inherently concurrent, and highly nonlinear, rendering their modeling and simulation different from that used in traditional approaches. In order to improve the model definition for this class of systems, a number of techniques were introduced, including Petri Nets, Finite State Machines, min-max algebra, Timed Automata, etc. Banks & Nicol. (2005); Cassandras (1993); Cellier & Kofman. (2006); Fishwick (1995); Law & Kelton (2000); Toffoli & Margolus. (1987). Wireless Sensor Network (WSN) is a discrete event system which consists of a network of sensor nodes equipped with sensing, computing, power, and communication modules to monitor certain phenomenon such as environmental data or object tracking Zhao & Guibas (2004). Emerging applications of wireless sensor networks are comprised of asset and warehouse *madani@ciit.net.pk †jawhaikaz@ciit.net.pk ‡mahlknecht@ict.tuwien.ac.at 1",
"title": ""
},
{
"docid": "b91c387335e7f63b720525d0ee28dbd6",
"text": "Road condition acquisition and assessment are the key to guarantee their permanent availability. In order to maintain a country's whole road network, millions of high-resolution images have to be analyzed annually. Currently, this requires cost and time excessive manual labor. We aim to automate this process to a high degree by applying deep neural networks. Such networks need a lot of data to be trained successfully, which are not publicly available at the moment. In this paper, we present the GAPs dataset, which is the first freely available pavement distress dataset of a size, large enough to train high-performing deep neural networks. It provides high quality images, recorded by a standardized process fulfilling German federal regulations, and detailed distress annotations. For the first time, this enables a fair comparison of research in this field. Furthermore, we present a first evaluation of the state of the art in pavement distress detection and an analysis of the effectiveness of state of the art regularization techniques on this dataset.",
"title": ""
},
{
"docid": "8c95392ab3cc23a7aa4f621f474d27ba",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "29c91c8d6f7faed5d23126482a2f553b",
"text": "In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods.",
"title": ""
},
{
"docid": "415c43b39543f2889eca11cbc3669784",
"text": "The fabrication of electronic devices based on organic materials, known as ’printed electronics’, is an emerging technology due to its unprecedented advantages involving fl exibility, light weight, and portability, which will ultimately lead to future ubiquitous applications. [ 1 ] The solution processability of semiconducting and metallic polymers enables the cost-effective fabrication of optoelectronic devices via high-throughput printing techniques. [ 2 ] These techniques require high-performance fl exible and transparent electrodes (FTEs) fabricated on plastic substrates, but currently, they depend on indium tin oxide (ITO) coated on plastic substrates. However, its intrinsic mechanical brittleness and inferior physical properties arising from lowtemperature ( T ) processing below the melting T of the plastic substrates (i.e., typically below 150 °C) have increased the demand for alternative FTE materials. [ 3 ]",
"title": ""
},
{
"docid": "dd9f6ef9eafdef8b29c566bcea8ded57",
"text": "A recent trend in saliency algorithm development is large-scale benchmarking and algorithm ranking with ground truth provided by datasets of human fixations. In order to accommodate the strong bias humans have toward central fixations, it is common to replace traditional ROC metrics with a shuffled ROC metric which uses randomly sampled fixations from other images in the database as the negative set. However, the shuffled ROC introduces a number of problematic elements, including a fundamental assumption that it is possible to separate visual salience and image spatial arrangement. We argue that it is more informative to directly measure the effect of spatial bias on algorithm performance rather than try to correct for it. To capture and quantify these known sources of bias, we propose a novel metric for measuring saliency algorithm performance: the spatially binned ROC (spROC). This metric provides direct in-sight into the spatial biases of a saliency algorithm without sacrificing the intuitive raw performance evaluation of traditional ROC measurements. By quantitatively measuring the bias in saliency algorithms, researchers will be better equipped to select and optimize the most appropriate algorithm for a given task. We use a baseline measure of inherent algorithm bias to show that Adaptive Whitening Saliency (AWS) [14], Attention by Information Maximization (AIM) [8], and Dynamic Visual Attention (DVA) [20] provide the least spatially biased results, suiting them for tasks in which there is no information about the underlying spatial bias of the stimuli, whereas algorithms such as Graph Based Visual Saliency (GBVS) [18] and Context-Aware Saliency (CAS) [15] have a significant inherent central bias.",
"title": ""
},
{
"docid": "00bcce935ca2e4d443941b7e90d644c9",
"text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.",
"title": ""
},
{
"docid": "5fe036906302ab4131c7f9afc662df3f",
"text": "Plant peptide hormones play an important role in regulating plant developmental programs via cell-to-cell communication in a non-cell autonomous manner. To characterize the biological relevance of C-TERMINALLY ENCODED PEPTIDE (CEP) genes in rice, we performed a genome-wide search against public databases using a bioinformatics approach and identified six additional CEP members. Expression analysis revealed a spatial-temporal pattern of OsCEP6.1 gene in different tissues and at different developmental stages of panicle. Interestingly, the expression level of the OsCEP6.1 was also significantly up-regulated by exogenous cytokinin. Application of a chemically synthesized 15-amino acid OsCEP6.1 peptide showed that OsCEP6.1 had a negative role in regulating root and seedling growth, which was further confirmed by transgenic lines. Furthermore, the constitutive expression of OsCEP6.1 was sufficient to lead to panicle architecture and grain size variations. Scanning electron microscopy analysis revealed that the phenotypic variation of OsCEP6.1 overexpression lines resulted from decreased cell size but not reduced cell number. Moreover, starch accumulation was not significantly affected. Taken together, these data suggest that the OsCEP6.1 peptide might be involved in regulating the development of panicles and grains in rice.",
"title": ""
},
{
"docid": "af3e8e26ec6f56a8cd40e731894f5993",
"text": "Probiotic bacteria are sold mainly in fermented foods, and dairy products play a predominant role as carriers of probiotics. These foods are well suited to promoting the positive health image of probiotics for several reasons: 1) fermented foods, and dairy products in particular, already have a positive health image; 2) consumers are familiar with the fact that fermented foods contain living microorganisms (bacteria); and 3) probiotics used as starter organisms combine the positive images of fermentation and probiotic cultures. When probiotics are added to fermented foods, several factors must be considered that may influence the ability of the probiotics to survive in the product and become active when entering the consumer's gastrointestinal tract. These factors include 1) the physiologic state of the probiotic organisms added (whether the cells are from the logarithmic or the stationary growth phase), 2) the physical conditions of product storage (eg, temperature), 3) the chemical composition of the product to which the probiotics are added (eg, acidity, available carbohydrate content, nitrogen sources, mineral content, water activity, and oxygen content), and 4) possible interactions of the probiotics with the starter cultures (eg, bacteriocin production, antagonism, and synergism). The interactions of probiotics with either the food matrix or the starter culture may be even more intensive when probiotics are used as a component of the starter culture. Some of these aspects are discussed in this article, with an emphasis on dairy products such as milk, yogurt, and cheese.",
"title": ""
},
{
"docid": "6c31a285d3548bfb6cbe9ea72f0d5192",
"text": "PURPOSE\nTo compare the effects of a 10-week training program with two different exercises -- traditional hamstring curl (HC) and Nordic hamstrings (NH), a partner exercise focusing the eccentric phase -- on muscle strength among male soccer players.\n\n\nMETHODS\nSubjects were 21 well-trained players who were randomized to NH training (n = 11) or HC training (n = 10). The programs were similar, with a gradual increase in the number of repetitions from two sets of six reps to three sets of eight to 12 reps over 4 weeks, and then increasing load during the final 6 weeks of training. Strength was measured as maximal torque on a Cybex dynamometer before and after the training period.\n\n\nRESULTS\nIn the NH group, there was an 11% increase in eccentric hamstring torque measured at 60 degrees s(-1), as well as a 7% increase in isometric hamstring strength at 90 degrees, 60 degrees and 30 degrees of knee flexion. Since there was no effect on concentric quadriceps strength, there was a significant increase in the hamstrings:quadriceps ratio from 0.89 +/- 0.12 to 0.98 +/- 0.17 (11%) in the NH group. No changes were observed in the HC group.\n\n\nCONCLUSION\nNH training for 10 weeks more effectively develops maximal eccentric hamstring strength in well-trained soccer players than a comparable program based on traditional HC.",
"title": ""
}
] |
scidocsrr
|
551ac2dfdd05c6885d65f68b4039181b
|
Background subtraction and human detection in outdoor videos using fuzzy logic
|
[
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
}
] |
[
{
"docid": "e269585a133a138b2ba11c7fb2d025ec",
"text": "Concept and design of a low cost two-axes MEMS scanning mirror with an aperture size of 7 millimetres for a compact automotive LIDAR sensor is presented. Hermetic vacuum encapsulation and stacked vertical comb drives are the key features to enable a large tilt angle of 15 degrees. A tripod MEMS mirror design provides an advantageous ratio of mirror aperture and chip size and allows circular laser scanning.",
"title": ""
},
{
"docid": "e5abde9ecd6e50c60306411fc011db2d",
"text": "We present a user study for two different automatic strategies that simplify text content for people with dyslexia. The strategies considered are the standard one (replacing a complex word with the most simpler synonym) and a new one that presents several synonyms for a complex word if the user requests them. We compare texts transformed by both strategies with the original text and to a gold standard manually built. The study was undertook by 96 participants, 47 with dyslexia plus a control group of 49 people without dyslexia. To show device independence, for the new strategy we used three different reading devices. Overall, participants with dyslexia found texts presented with the new strategy significantly more readable and comprehensible. To the best of our knowledge, this is the largest user study of its kind.",
"title": ""
},
{
"docid": "b12947614198d639aef0d3a26b83a215",
"text": "In the era of mobile Internet, mobile operators are facing pressure on ever-increasing capital expenditures and operating expenses with much less growth of income. Cloud Radio Access Network (C-RAN) is expected to be a candidate of next generation access network techniques that can solve operators' puzzle. In this article, on the basis of a general survey of C-RAN, we present a novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane. Compared to traditional architecture, the proposed C-RAN architecture emphasizes the notion of service cloud, service-oriented resource scheduling and management, thus it facilitates the utilization of new communication and computer techniques. With the extensive computation resource offered by the cloud platform, a coordinated user scheduling algorithm and parallel optimum precoding scheme are proposed, which can achieve better performance. The proposed scheme opens another door to design new algorithms matching well with C-RAN architecture, instead of only migrating existing algorithms from traditional architecture to C-RAN.",
"title": ""
},
{
"docid": "e485aca373cf4543e1a8eeadfa0e6772",
"text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.",
"title": ""
},
{
"docid": "598dbf48c54bcea6e74d85a8393dada1",
"text": "With the fast development of social media, the information overload problem becomes increasingly severe and recommender systems play an important role in helping online users find relevant information by suggesting information of potential interests. Social activities for online users produce abundant social relations. Social relations provide an independent source for recommendation, presenting both opportunities and challenges for traditional recommender systems. Users are likely to seek suggestions from both their local friends and users with high global reputations, motivating us to exploit social relations from local and global perspectives for online recommender systems in this paper. We develop approaches to capture local and global social relations, and propose a novel framework LOCABAL taking advantage of both local and global social context for recommendation. Empirical results on real-world datasets demonstrate the effectiveness of our proposed framework and further experiments are conducted to understand how local and global social context work for the proposed framework.",
"title": ""
},
{
"docid": "e81f197acf7e3b7590d93481a4a4b5b3",
"text": "Naive T cells have long been regarded as a developmentally synchronized and fairly homogeneous and quiescent cell population, the size of which depends on age, thymic output and prior infections. However, there is increasing evidence that naive T cells are heterogeneous in phenotype, function, dynamics and differentiation status. Current strategies to identify naive T cells should be adjusted to take this heterogeneity into account. Here, we provide an integrated, revised view of the naive T cell compartment and discuss its implications for healthy ageing, neonatal immunity and T cell reconstitution following haematopoietic stem cell transplantation. Evidence is increasing that naive T cells are heterogeneous in phenotype, function, dynamics and differentiation status. Here, van den Broek et al. provide a revised view of the naive T cell compartment and then discuss the implications for ageing, neonatal immunity and T cell reconstitution following haematopoietic stem cell transplantation.",
"title": ""
},
{
"docid": "4fa0a60eb5ae8bd84e4a88c6eada4af4",
"text": "Image retrieval can be considered as a classification problem. Classification is usually based on some image features. In the feature extraction image segmentation is commonly used. In this paper we introduce a new feature for image classification for retrieval purposes. This feature is based on the gray level histogram of the image. The feature is called binary histogram and it can be used for image classification without segmentation. Binary histogram can be used for image retrieval as such by using similarity calculation. Another approach is to extract some features from it. In both cases indexing and retrieval do not require much computational time. We test the similarity measurement and the feature-based retrieval by making classification experiments. The proposed features are tested using a set of paper defect images, which are acquired from an industrial imaging application.",
"title": ""
},
{
"docid": "b1f348ff63eaa97f6eeda5fcd81330a9",
"text": "The recent expansion of the cloud computing paradigm has motivated educators to include cloud-related topics in computer science and computer engineering curricula. While programming and algorithm topics have been covered in different undergraduate and graduate courses, cloud architecture/system topics are still not usually studied in academic contexts. But design, deployment and management of datacenters, virtualization technologies for cloud, cloud management tools and similar issues should be addressed in current computer science and computer engineering programs. This work presents our approach and experiences in designing and implementing a curricular module covering all these topics. In this approach the utilization of a simulation tool, CloudSim, is essential to allow the students a practical approximation to the course contents.",
"title": ""
},
{
"docid": "4ca4ccd53064c7a9189fef3e801612a0",
"text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.",
"title": ""
},
{
"docid": "1ebaa8de358a160024c07470dd48943a",
"text": "This study introduces and evaluates the robustness of different volumetric, sentiment, and social network approaches to predict the elections in three Asian countries – Malaysia, India, and Pakistan from Twitter posts. We find that predictive power of social media performs well for India and Pakistan but is not effective for Malaysia. Overall, we find that it is useful to consider the recency of Twitter posts while using it to predict a real outcome, such as an election result. Sentiment information mined using machine learning models was the most accurate predictor of election outcomes. Social network information is stable despite sudden surges in political discussions, for e.g. around electionsrelated news events. Methods combining sentiment and volume information, or sentiment and social network information, are effective at predicting smaller vote shares, for e.g. vote shares in the case of independent candidates and regional parties. We conclude with a detailed discussion on the caveats of social media analysis for predicting real-world outcomes and recommendations for future work. ARTICLE HISTORY Received 1 August 2017 Revised 12 February 2018 Accepted 12 March 2018",
"title": ""
},
{
"docid": "7fe86801de04054ffca61eb1b3334872",
"text": "Images rendered with traditional computer graphics techniques, such as scanline rendering and ray tracing, appear focused at all depths. However, there are advantages to having blur, such as adding realism to a scene or drawing attention to a particular place in a scene. In this paper we describe the optics underlying camera models that have been used in computer graphics, and present object space techniques for rendering with those models. In our companion paper [3], we survey image space techniques to simulate these models. These techniques vary in both speed and accuracy.",
"title": ""
},
{
"docid": "6df55b88150f5d52aa30ab770f464546",
"text": "OBJECTIVES\nThe objective of this study has been to review the incidence of biological and technical complications in case of tooth-implant-supported fixed partial denture (FPD) treatments on the basis of survival data regarding clinical cases.\n\n\nMATERIAL AND METHODS\nBased on the treatment documentations of a Bundeswehr dental clinic (Cologne-Wahn German Air Force Garrison), the medical charts of 83 patients with tooth-implant-supported FPDs were completely recorded. The median follow-up time was 4.73 (time range: 2.2-8.3) years. In the process, survival curves according to Kaplan and Meier were applied in addition to frequency counts.\n\n\nRESULTS\nA total of 84 tooth-implant (83 patients) connected prostheses were followed (132 abutment teeth, 142 implant abutments (Branemark, Straumann). FPDs: the time-dependent illustration reveals that after 5 years, as many as 10% of the tooth-implant-supported FPDs already had to be subjected to a technical modification (renewal (n=2), reintegration (n=4), veneer fracture (n=5), fracture of frame (n=2)). In contrast to non-rigid connection of teeth and implants, technical modification measures were rarely required in case of tooth-implant-supported FPDs with a rigid connection. There was no statistical difference between technical complications and the used implant system. Abutment teeth and implants: during the observation period, none of the functionally loaded implants (n=142) had to be removed. Three of the overall 132 abutment teeth were lost because of periodontal inflammation. The time-dependent illustration reveals, that after 5 years as many as 8% of the abutment teeth already required corresponding therapeutic measures (periodontal treatment (5%), filling therapy (2.5%), endodontic treatment (0.5%)). After as few as 3 years, the connection related complications of implant abutments (abutment or occlusal screw loosening, loss of cementation) already had to be corrected in approximately 8% of the cases. In the utilization period there was no screw or abutment fracture.\n\n\nCONCLUSION\nTechnical complications of implant-supported FPDs are dependent on the different bridge configurations. When using rigid functional connections, similarly favourable values will be achieved as in case of solely implant-supported FPDs. In this study other characteristics like different fixation systems (screwed vs. cemented) or various implant systems had no significant effect to the rate of technical complications.",
"title": ""
},
{
"docid": "751bde322930a292e2ddc8ba06e24f17",
"text": "Machine Learning has been a big success story during the AI resurgence. One particular stand out success relates to learning from a massive amount of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition for utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex, (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.",
"title": ""
},
{
"docid": "768a8cfff3f127a61f12139466911a94",
"text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.",
"title": ""
},
{
"docid": "42d27f1a6ad81e13c449a08a6ada34d6",
"text": "Face detection of comic characters is a necessary step in most applications, such as comic character retrieval, automatic character classification and comic analysis. However, the existing methods were developed for simple cartoon images or small size comic datasets, and detection performance remains to be improved. In this paper, we propose a Faster R-CNN based method for face detection of comic characters. Our contribution is twofold. First, for the binary classification task of face detection, we empirically find that the sigmoid classifier shows a slightly better performance than the softmax classifier. Second, we build two comic datasets, JC2463 and AEC912, consisting of 3375 comic pages in total for characters face detection evaluation. Experimental results have demonstrated that the proposed method not only performs better than existing methods, but also works for comic images with different drawing styles.",
"title": ""
},
{
"docid": "0c8fb6cc1d252429c7e1dc5b01c14910",
"text": "We present a generative attribute controller (GAC), a novel functionality for generating or editing an image while intuitively controlling large variations of an attribute. This controller is based on a novel generative model called the conditional filtered generative adversarial network (CFGAN), which is an extension of the conventional conditional GAN (CGAN) that incorporates a filtering architecture into the generator input. Unlike the conventional CGAN, which represents an attribute directly using an observable variable (e.g., the binary indicator of attribute presence) so its controllability is restricted to attribute labeling (e.g., restricted to an ON or OFF control), the CFGAN has a filtering architecture that associates an attribute with a multi-dimensional latent variable, enabling latent variations of the attribute to be represented. We also define the filtering architecture and training scheme considering controllability, enabling the variations of the attribute to be intuitively controlled using typical controllers (radio buttons and slide bars). We evaluated our CFGAN on MNIST, CUB, and CelebA datasets and show that it enables large variations of an attribute to be not only represented but also intuitively controlled while retaining identity. We also show that the learned latent space has enough expressive power to conduct attribute transfer and attribute-based image retrieval.",
"title": ""
},
{
"docid": "6ad711fa60e05c8fb08b6f1c2c3a87d9",
"text": "An algorithm proposed by Dinic for finding maximum flows in networks and by Hopcroft and Karp for finding maximum bipartite matchings is applied to graph connectivity problems. It is shown that the algorithm requires 0(V<supscrpt>1/2</supscrpt>E) time to find a maximum set of node-disjoint paths in a graph, and 0(V<supscrpt>2/3</supscrpt>E) time to find a maximum set of edge disjoint paths. These bounds are tight. Thus the node connectivity of a graph may be tested in 0(V<supscrpt>5/2</supscrpt>E) time, and the edge connectivity of a graph may be tested in 0(V<supscrpt>5/3</supscrpt>E) time.",
"title": ""
},
{
"docid": "0102e5661220268902544401dedf70fc",
"text": "It was hypothesized that playfulness in adults relates positively to different indicators of subjective but also physical well-being. A sample of 255 adults completed subjective measures of playfulness along with self-ratings for different facets of well-being and the endorsement to enjoyable activities. Adult playfulness demonstrated robust positive relations with life satisfaction and an inclination to enjoyable activities and an active way of life. There were also minor positive relations with physical fitness. Leading an active way of life partially mediated the relation between playfulness and life satisfaction. The study provides further evidence on the contribution of adult playfulness to different aspects of well-being.",
"title": ""
},
{
"docid": "b16bb73155af7f141127617a7e9fdde1",
"text": "Organizing code into coherent programs and relating different programs to each other represents an underlying requirement for scaling genetic programming to more difficult task domains. Assuming a model in which policies are defined by teams of programs, in which team and program are represented using independent populations and coevolved, has previously been shown to support the development of variable sized teams. In this work, we generalize the approach to provide a complete framework for organizing multiple teams into arbitrarily deep/wide structures through a process of continuous evolution; hereafter the Tangled Program Graph (TPG). Benchmarking is conducted using a subset of 20 games from the Arcade Learning Environment (ALE), an Atari 2600 video game emulator. The games considered here correspond to those in which deep learning was unable to reach a threshold of play consistent with that of a human. Information provided to the learning agent is limited to that which a human would experience. That is, screen capture sensory input, Atari joystick actions, and game score. The performance of the proposed approach exceeds that of deep learning in 15 of the 20 games, with 7 of the 15 also exceeding that associated with a human level of competence. Moreover, in contrast to solutions from deep learning, solutions discovered by TPG are also very ‘sparse’. Rather than assuming that all of the state space contributes to every decision, each action in TPG is resolved following execution of a subset of an individual’s graph. This results in significantly lower computational requirements for model building than presently the case for deep learning.",
"title": ""
},
{
"docid": "78e4395a6bd6b4424813e20633d140b8",
"text": "This paper introduces a high-speed CMOS comparator. The comparator consists of a differential input stage, two regenerative flip-flops, and an S-R latch. No offset cancellation is exploited, which reduces the power consumption as well as the die area and increases the comparison speed. An experimental version of the comparator has been integrated in a standard double-poly double-metal 1.5-pm n-well process with a die area of only 140 x 100 pmz. This circuit, operating under a +2.5/– 2.5-V power supply, performs comparison to a precision of 8 b with a symmetrical input dynamic range of 2.5 V (therefore ~0.5 LSB resolution is equal to ~ 4.9 mV). input stage flip-flops S-R Iat",
"title": ""
}
] |
scidocsrr
|
2c554093795422c9e5d50673adcf88da
|
Information Retrieval as Statistical Translation
|
[
{
"docid": "6eb9d8f22237bdc49570e219150d50b4",
"text": "Researchers in both machine translation (e.g., Brown et a/, 1990) arm bilingual lexicography (e.g., Klavans and Tzoukermarm, 1990) have recently become interested in studying parallel texts (also known as bilingual corpora), bodies of text such as the Canadian Hansards (parliamentary debates) which are available in multiple languages (such as French and English). Much of the current excitement surrounding parallel texts was initiated by Brown et aL (1990), who outline a selforganizing method for using these parallel texts to build a machine translation system.",
"title": ""
}
] |
[
{
"docid": "44bd9d0b66cb8d4f2c4590b4cb724765",
"text": "AIM\nThis paper is a description of inductive and deductive content analysis.\n\n\nBACKGROUND\nContent analysis is a method that may be used with either qualitative or quantitative data and in an inductive or deductive way. Qualitative content analysis is commonly used in nursing studies but little has been published on the analysis process and many research books generally only provide a short description of this method.\n\n\nDISCUSSION\nWhen using content analysis, the aim was to build a model to describe the phenomenon in a conceptual form. Both inductive and deductive analysis processes are represented as three main phases: preparation, organizing and reporting. The preparation phase is similar in both approaches. The concepts are derived from the data in inductive content analysis. Deductive content analysis is used when the structure of analysis is operationalized on the basis of previous knowledge.\n\n\nCONCLUSION\nInductive content analysis is used in cases where there are no previous studies dealing with the phenomenon or when it is fragmented. A deductive approach is useful if the general aim was to test a previous theory in a different situation or to compare categories at different time periods.",
"title": ""
},
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "6620d6177ed14871321314f307746d85",
"text": "Global software engineering increases coordination, communication, and control challenges in software development. The testing phase in this context is not a widely researched subject. In this paper, we study the outsourcing of software testing in the Oulu area, research the ways in which it is used, and determine the observable benefits and obstacles. The companies that participated in this study were found to use the outsourcing possibility of software testing with good efficiency and their testing process was considered to be mature. The most common benefits, in addition to the companies' cost savings, included the utilization of time zone differences for around-the-clock productivity, a closer proximity to the market, an improved record of communication and the tools that record the audit materials. The most commonly realized difficulties consisted of teamwork challenges, a disparate tool infrastructure, tool expense, and often-elevated coordination costs. We utilized in our study two matrices that consist in one dimension of the three distances, control, coordination, and communication, and in another dimension of four distances, temporal, geographical, socio-cultural and technical. The technical distance was our extension to the matrix that has been used as the basis for many other studies about global software development and outsourcing efforts. Our observations justify the extension of matrices with respect to the technical distance.",
"title": ""
},
{
"docid": "97a2cc4cb07b0fbfb880984ca42d9553",
"text": "While today many online platforms employ complex algorithms to curate content, these algorithms are rarely highlighted in interfaces, preventing users from understanding these algorithms' operation or even existence. Here, we study how knowledgeable users are about these algorithms, showing that providing insight to users about an algorithm's existence or functionality through design facilitates rapid processing of the underlying algorithm models and increases users' engagement with the system. We also study algorithmic systems that might introduce bias to users' online experience to gain insight into users' behavior around biased algorithms. We will leverage these insights to build an algorithm-aware design that shapes a more informed interaction between users and algorithmic systems.",
"title": ""
},
{
"docid": "370c012ce6ebb22fe793a307b2a88abc",
"text": "In this paper, we present a novel approach to model arguments, their components and relations in persuasive essays in English. We propose an annotation scheme that includes the annotation of claims and premises as well as support and attack relations for capturing the structure of argumentative discourse. We further conduct a manual annotation study with three annotators on 90 persuasive essays. The obtained inter-rater agreement of αU = 0.72 for argument components and α = 0.81 for argumentative relations indicates that the proposed annotation scheme successfully guides annotators to substantial agreement. The final corpus and the annotation guidelines are freely available to encourage future research in argument recognition.",
"title": ""
},
{
"docid": "791314f5cee09fc8e27c236018a0927f",
"text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Oral presentations",
"title": ""
},
{
"docid": "0c24b767705b3a88acf9fe128c0e3477",
"text": "The studied camera is basically just a line of pixel sensors, which can be rotated on a full circle, describing a cylindrical surface this way. During a rotation we take individual shots, line by line. All these line images define a panoramic image on a cylindrical surface. This camera architecture (in contrast to the plane segment of the pinhole camera) comes with new challenges, and this report is about a classification of different models of such cameras and their calibration. Acknowledgment. The authors acknowledge comments, collaboration or support by various students and colleagues at CITR Auckland and DLR Berlin-Adlershof. report1_HWK.tex; 22/03/2006; 9:47; p.1",
"title": ""
},
{
"docid": "023dd7b74feead464f2e643c70aef43e",
"text": "Technological advances are bringing connected and autonomous vehicles (CAVs) to the everevolving transportation system. Anticipating the public acceptance and adoption of these technologies is important. A recent internet-based survey was conducted polling 347 Austinites to understand their opinions on smart-car technologies and strategies. Ordered-probit and other model results indicate that respondents perceive fewer crashes to be the primary benefit of autonomous vehicles (AVs), with equipment failure being their top concern. Their average willingness to pay (WTP) for adding full (Level 4) automation ($7,253) appears to be much higher than that for adding partial (Level 3) automation ($3,300) to their current vehicles. This study estimates the impact of demographics, built-environment variables, and travel characteristics on Austinites’ WTP for adding such automations and connectivity to their current and coming vehicles. It also estimates adoption rates of shared autonomous vehicles (SAVs) under different pricing scenarios ($1, $2, and $3 per mile), choice dependence on friends’ and neighbors’ adoption rates, home-location decisions after AVs and SAVs become a common mode of transport, and preferences regarding how congestion-toll revenues are used. Higherincome, technology-savvy males, living in urban areas, and those who have experienced more crashes have a greater interest in and higher WTP for the new technologies, with less dependence on others’ adoption rates. Such behavioral models are useful to simulate long-term adoption of CAV technologies under different vehicle pricing and demographic scenarios. These results can be used to develop smarter transportation systems for more efficient and sustainable travel.",
"title": ""
},
{
"docid": "8dfeae1304eb97bc8f7d872af7aaa795",
"text": "Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the \"perfect single frame detector\". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterise both localisation and background-versusforeground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitised training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitised set of training and test annotations.",
"title": ""
},
{
"docid": "4afdb551efb88711ffe3564763c3806a",
"text": "This article applied GARCH model instead AR or ARMA model to compare with the standard BP and SVM in forecasting of the four international including two Asian stock markets indices.These models were evaluated on five performance metrics or criteria. Our experimental results showed the superiority of SVM and GARCH models, compared to the standard BP in forecasting of the four international stock markets indices.",
"title": ""
},
{
"docid": "b719b861a5bb6cc349ccbcd260f45054",
"text": "Road accident analysis is very challenging task and investigating the dependencies between the attributes become complex because of many environmental and road related factors. In this research work we applied data mining classification techniques to carry out gender based classification of which RndTree and C4.5 using AdaBoost Meta classifier gives high accurate results. The training dataset used for the research work is obtained from Fatality Analysis Reporting System (FARS) which is provided by the University of Alabama's Critical Analysis Reporting Environment (CARE) system. The results reveal that AdaBoost used with RndTree improvised the classifier's accuracy.",
"title": ""
},
{
"docid": "04b7d1197e9e5d78e948e0c30cbdfcfe",
"text": "Context: Software development depends significantly on team performance, as does any process that involves human interaction. Objective: Most current development methods argue that teams should self-manage. Our objective is thus to provide a better understanding of the nature of self-managing agile teams, and the teamwork challenges that arise when introducing such teams. Method: We conducted extensive fieldwork for 9 months in a software development company that introduced Scrum. We focused on the human sensemaking, on how mechanisms of teamwork were understood by the people involved. Results: We describe a project through Dickinson and McIntyre’s teamwork model, focusing on the interrelations between essential teamwork components. Problems with team orientation, team leadership and coordination in addition to highly specialized skills and corresponding division of work were important barriers for achieving team effectiveness. Conclusion: Transitioning from individual work to self-managing teams requires a reorientation not only by developers but also by management. This transition takes time and resources, but should not be neglected. In addition to Dickinson and McIntyre’s teamwork components, we found trust and shared mental models to be of fundamental importance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fc9699b4382b1ddc6f60fc6ec883a6d3",
"text": "Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. Bisection bandwidth is low, severly constraining distributed computation Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components).\n Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more,monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.",
"title": ""
},
{
"docid": "44327eaaabf489d5deaf97a5bb041985",
"text": "Convolutional neural networks with deeply trained make a significant performance improvement in face detection. However, the major shortcomings, i.e. need of high computational cost and slow calculation, make the existing CNN-based face detectors impractical in many applications. In this paper, a real-time approach for face detection was proposed by utilizing a single end-to-end deep neural network with multi-scale feature maps, multi-scale prior aspect ratios as well as confidence rectification. Multi-scale feature maps overcome the difficulties of detecting small face, and meanwhile, multiscale prior aspect ratios reduce the computing cost and the confidence rectification, which is in line with the biological intuition and can further improve the detection rate. Evaluated on the public benchmark, FDDB, the proposed algorithm, gained a performance as good as the state-of-the-art CNNbased methods, however, with much faster speed.",
"title": ""
},
{
"docid": "853ac793e92b97d41e5ef6d1bc16d504",
"text": "We present a systematic study of parameters used in the construction of semantic vector space models. Evaluation is carried out on a variety of similarity tasks, including a compositionality dataset, using several source corpora. In addition to recommendations for optimal parameters, we present some novel findings, including a similarity metric that outperforms the alternatives on all tasks considered.",
"title": ""
},
{
"docid": "c5cfe386f6561eab1003d5572443612e",
"text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.",
"title": ""
},
{
"docid": "8e28f1561b3a362b2892d7afa8f2164c",
"text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.",
"title": ""
},
{
"docid": "9c92a9409cd3ce2b2c546f7ef156e1f3",
"text": "We describe a decorrelation network training method for improving the quality of regression learning in \\en-semble\" neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation to not only reproduce a desired output, but also to have their errors be linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performances of decorrelated network training on learning the \\3 Parity\" logic function, a noisy sine function, and a one dimensional nonlinear function, and compare the results with the ensemble networks composed of independently trained individual networks (without decorrelation training). Empirical results show that when individual networks are forced to be decorrelated with one another the resulting ensemble neural networks have lower mean squared errors than the ensemble networks having independently trained individual networks. This method is particularly applicable when there is insuucient data to train each individual network on disjoint subsets of training patterns.",
"title": ""
},
{
"docid": "c71b4a8d6d9ffc64c9e86aab40d9784f",
"text": "Voice impersonation is not the same as voice transformation, although the latter is an essential element of it. In voice impersonation, the resultant voice must convincingly convey the impression of having been naturally produced by the target speaker, mimicking not only the pitch and other perceivable signal qualities, but also the style of the target speaker. In this paper, we propose a novel neural-network based speech quality- and style-mimicry framework for the synthesis of impersonated voices. The framework is built upon a fast and accurate generative adversarial network model. Given spectrographic representations of source and target speakers' voices, the model learns to mimic the target speaker's voice quality and style, regardless of the linguistic content of either's voice, generating a synthetic spectrogram from which the time-domain signal is reconstructed using the Griffin-Lim method. In effect, this model reframes the well-known problem of style-transfer for images as the problem of style-transfer for speech signals, while intrinsically addressing the problem of durational variability of speech sounds. Experiments demonstrate that the model can generate extremely convincing samples of impersonated speech. It is even able to impersonate voices across different genders effectively. Results are qualitatively evaluated using standard procedures for evaluating synthesized voices.",
"title": ""
},
{
"docid": "8400fd3ffa3cdfd54e92370b8627c7e8",
"text": "A number of computer vision problems such as human age estimation, crowd density estimation and body/face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. Extensive experiments show that our cumulative attribute framework gains notable advantage on accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.",
"title": ""
}
] |
scidocsrr
|
a1ede71923b1a94dff46f1c8d67dfb20
|
Real-Time Bidding by Reinforcement Learning in Display Advertising
|
[
{
"docid": "d8982dd146a28c7d2779c781f7110ed5",
"text": "We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the wellknown Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.",
"title": ""
},
{
"docid": "e9eefe7d683a8b02a8456cc5ff0ebe9d",
"text": "The real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulking buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time; It uses per impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using the data sampled from both demand and supply side, we aim to provide first-hand insights into the emerging new impression selling infrastructure and its bidding behaviours, and help identifying research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggest time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact is accounted for 55.4% of total cost due to the arrangement of the soft floor price. As such, we argue that the setting of soft floor price in the current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis on the conversation rates shows that the current bidding strategy is far less optimal, indicating the significant needs for optimisation algorithms incorporating the facts such as the temporal behaviours, the frequency and recency of the ad displays, which have not been well considered in the past.",
"title": ""
}
] |
[
{
"docid": "548ca7ecd778bc64e4a3812acd73dcfb",
"text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.",
"title": ""
},
{
"docid": "759a4737f3774c1487670597f5e011d1",
"text": "Indoor positioning systems (IPS) based on Wi-Fi signals are gaining popularity recently. IPS based on Received Signal Strength Indicator (RSSI) could only achieve a precision of several meters due to the strong temporal and spatial variation of indoor environment. On the other hand, IPS based on Channel State Information (CSI) drive the precision into the sub-meter regime with several access points (AP). However, the performance degrades with fewer APs mainly due to the limit of bandwidth. In this paper, we propose a Wi-Fi-based time-reversal indoor positioning system (WiFi-TRIPS) using the location-specific fingerprints generated by CSIs with a total bandwidth of 1 GHz. WiFi-TRIPS consists of an offline phase and an online phase. In the offline phase, CSIs are collected in different 10 MHz bands from each location-of-interest and the timing and frequency synchronization errors are compensated. We perform a bandwidth concatenation to combine CSIs in different bands into a single fingerprint of 1 GHz. In the online phase, we evaluate the time-reversal resonating strength using the fingerprint from an unknown location and those in the database for location estimation. Extensive experiment results demonstrate a perfect 5cm precision in an 20cm × 70cm area in a non-line-of-sight office environment with one link measurement.",
"title": ""
},
{
"docid": "48544ec3225799c82732db7b3215833b",
"text": "Christian M Jones Laura Scholes Daniel Johnson Mary Katsikitis Michelle C. Carras University of the Sunshine Coast University of the Sunshine Coast Queensland University of Technology University of the Sunshine Coast Johns Hopkins University Queensland, Australia Queensland, Australia Queensland, Australia Queensland, Australia Baltimore, MD, USA cmjones@usc.edu.au l.scholes@usc.edu.au dm.johnson@qut.edu.au mkatsiki@usc.edu.au mcarras@jhsph.edu",
"title": ""
},
{
"docid": "65580dfc9bdf73ef72b6a133ab19ccdd",
"text": "A rotary piezoelectric motor design with simple structural components and the potential for miniaturization using a pretwisted beam stator is demonstrated in this paper. The beam acts as a vibration converter to transform axial vibration input from a piezoelectric element into combined axial-torsional vibration. The axial vibration of the stator modulates the torsional friction forces transmitted to the rotor. Prototype stators measuring 6.5 times 6.5 times 67.5 mm were constructed using aluminum (2024-T6) twisted beams with rectangular cross-section and multilayer piezoelectric actuators. The stall torque and no-load speed attained for a rectangular beam with an aspect ratio of 1.44 and pretwist helix angle of 17.7deg were 0.17 mNm and 840 rpm with inputs of 184.4 kHz and 149 mW, respectively. Operation in both clockwise and counterclockwise directions was obtained by choosing either 70.37 or 184.4 kHz for the operating frequency. The effects of rotor preload and power input on motor performance were investigated experimentally. The results suggest that motor efficiency is higher at low power input, and that efficiency increases with preload to a maximum beyond which it begins to drop.",
"title": ""
},
{
"docid": "610629d3891c10442fe5065e07d33736",
"text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.",
"title": ""
},
{
"docid": "b3a9ad04e7df1b2250f0a7b625509efd",
"text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.",
"title": ""
},
{
"docid": "1d5624ab9e2e69cd7a96619b25db3e1c",
"text": "Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and thus is more efficient, compared to previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, superior improvement in performance is registered for the task of detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, including FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on all these datasets in general and by a substantial margin on the most challenging among them (e.g. WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 × 480 with no assumption on the minimum detectable face size.",
"title": ""
},
{
"docid": "3f1161fa81b19a15b0d4ff882b99b60a",
"text": "INTRODUCTION\nDupilumab is a fully human IgG4 monoclonal antibody directed against the α subunit of the interleukin (IL)-4 receptor (IL-4Rα). Since the activation of IL-4Rα is utilized by both IL-4 and IL-13 to mediate their pathophysiological effects, dupilumab behaves as a dual antagonist of these two sister cytokines, which blocks IL-4/IL-13-dependent signal transduction. Areas covered: Herein, the authors review the cellular and molecular pathways activated by IL-4 and IL-13, which are relevant to asthma pathobiology. They also review: the mechanism of action of dupilumab, the phase I, II and III studies evaluating the pharmacokinetics as well as the safety, tolerability and clinical efficacy of dupilumab in asthma therapy. Expert opinion: Supported by a strategic mechanism of action, as well as by convincing preliminary clinical results, dupilumab currently appears to be a very promising biological drug for the treatment of severe uncontrolled asthma. It also may have benefits to comorbidities of asthma including atopic dermatitis, chronic sinusitis and nasal polyposis.",
"title": ""
},
{
"docid": "254f437f82e14d889fe6ba15df8369ad",
"text": "In academia, scientific research achievements would be inconceivable without academic collaboration and cooperation among researchers. Previous studies have discovered that productive scholars tend to be more collaborative. However, it is often difficult and time-consuming for researchers to find the most valuable collaborators (MVCs) from a large volume of big scholarly data. In this paper, we present MVCWalker, an innovative method that stands on the shoulders of random walk with restart (RWR) for recommending collaborators to scholars. Three academic factors, i.e., coauthor order, latest collaboration time, and times of collaboration, are exploited to define link importance in academic social networks for the sake of recommendation quality. We conducted extensive experiments on DBLP data set in order to compare MVCWalker to the basic model of RWR and the common neighbor-based model friend of friends in various aspects, including, e.g., the impact of critical parameters and academic factors. Our experimental results show that incorporating the above factors into random walk model can improve the precision, recall rate, and coverage rate of academic collaboration recommendations.",
"title": ""
},
{
"docid": "69ced55a44876f7cc4e57f597fcd5654",
"text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.",
"title": ""
},
{
"docid": "64f4ee1e5397b1a5dd35f7908ead0429",
"text": "Online user feedback is principally used as an information source for evaluating customers’ satisfaction for a given goods, service or software application. The increasing attitude of people towards sharing comments through the social media is making online user feedback a resource containing different types of valuable information. The huge amount of available user feedback has drawn the attention of researchers from different fields. For instance, data mining techniques have been developed to enable information extraction for different purposes, or the use of social techniques for involving users in the innovation of services and processes. Specifically, current research and technological efforts are put into the definition of platforms to gather and/or analyze multi-modal feedback. But we believe that the understanding of the type of concepts instantiated as information contained in user feedback would be beneficial to define new methods for its better exploitation. In our research, we focus on online explicit user feedback that can be considered as a powerful means for user-driven evolution of software services and applications. Up to our knowledge, a conceptualization of user feedback is still missing. With the purpose of contributing to fill up this gap we propose an ontology, for explicit online user feedback that is founded on a foundational ontology and has been proposed to describe artifacts and processes in software engineering. Our contribution in this paper concerns a novel user feedback ontology founded on a Unified Foundational Ontology (UFO) that supports the description of analysis processes of user feedback in software engineering. We describe the ontology together with an evaluation of its quality, and discuss some application scenarios.",
"title": ""
},
{
"docid": "5940949b1fd6f6b8ab2c45dcb1ece016",
"text": "Despite significant work on the problem of inferring a Twitter user’s gender from her online content, no systematic investigation has been made into leveraging the most obvious signal of a user’s gender: first name. In this paper, we perform a thorough investigation of the link between gender and first name in English tweets. Our work makes several important contributions. The first and most central contribution is two different strategies for incorporating the user’s self-reported name into a gender classifier. We find that this yields a 20% increase in accuracy over a standard baseline classifier. These classifiers are the most accurate gender inference methods for Twitter data developed to date. In order to evaluate our classifiers, we developed a novel way of obtaining gender-labels for Twitter users that does not require analysis of the user’s profile or textual content. This is our second contribution. Our approach eliminates the troubling issue of a label being somehow derived from the same text that a classifier will use to",
"title": ""
},
{
"docid": "27034289da290734ec5136656573ca11",
"text": "Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "ddb2fb53f0ead327d064d9b34af9b335",
"text": "We seek to automate the design of molecules based on specific chemical properties. In computational terms, this task involves continuous embedding and generation of molecular graphs. Our primary contribution is the direct realization of molecular graphs, a task previously approached by generating linear SMILES strings instead of graphs. Our junction tree variational autoencoder generates molecular graphs in two phases, by first generating a tree-structured scaffold over chemical substructures, and then combining them into a molecule with a graph message passing network. This approach allows us to incrementally expand molecules while maintaining chemical validity at every step. We evaluate our model on multiple tasks ranging from molecular generation to optimization. Across these tasks, our model outperforms previous state-of-the-art baselines by a significant margin.",
"title": ""
},
{
"docid": "87c33e325d074d8baefd56f6396f1c7a",
"text": "We present a recurrent model for semantic instance segmentation that sequentially generates binary masks and their associated class probabilities for every object in an image. Our proposed system is trainable end-to-end from an input image to a sequence of labeled masks and, compared to methods relying on object proposals, does not require postprocessing steps on its output. We study the suitability of our recurrent model on three different instance segmentation benchmarks, namely Pascal VOC 2012, CVPPP Plant Leaf Segmentation and Cityscapes. Further, we analyze the object sorting patterns generated by our model and observe that it learns to follow a consistent pattern, which correlates with the activations learned in the encoder part of our network.",
"title": ""
},
{
"docid": "1c576cf604526b448f0264f2c39f705a",
"text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.",
"title": ""
},
{
"docid": "c474df285da8106b211dc7fe62733423",
"text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.",
"title": ""
},
{
"docid": "9d175a211ec3b0ee7db667d39c240e1c",
"text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.",
"title": ""
},
{
"docid": "d464711e6e07b61896ba6efe2bbfa5e4",
"text": "This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.",
"title": ""
}
] |
scidocsrr
|
ae6eae748436bd9099d1b047c04e39c4
|
EDGE DETECTION TECHNIQUES FOR IMAGE SEGMENTATION
|
[
{
"docid": "68990d2cb2ed45e1c8d30b2d7cb45926",
"text": "Methods for histogram thresholding based on the minimization of a threshold-dependent criterion function might not work well for images having multimodal histograms. We propose an approach to threshold the histogram according to the similarity between gray levels. Such a similarity is assessed through a fuzzy measure. In this way, we overcome the local minima that affect most of the conventional methods. The experimental results demonstrate the effectiveness of the proposed approach for both bimodal and multimodal histograms.",
"title": ""
},
{
"docid": "e14234696124c47d1860301c873f6685",
"text": "We propose a novel image segmentation technique using the robust, adaptive least k-th order squares (ALKS) estimator which minimizes the k-th order statistics of the squared of residuals. The optimal value of k is determined from the data and the procedure detects the homogeneous surface patch representing the relative majority of the pixels. The ALKS shows a better tolerance to structured outliers than other recently proposed similar techniques: Minimize the Probability of Randomness (MINPRAN) and Residual Consensus (RESC). The performance of the new, fully autonomous, range image segmentation algorithm is compared to several other methods. Index Terms|robust methods, range image segmentation, surface tting",
"title": ""
},
{
"docid": "6a96e3680d3d25fc8bcffe3b7e70968f",
"text": "All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher shall not be liable in any event for incidental or consequential damages with, or arising out of, the furnishing, performance, or use of these programs. 1 1 Introduction Preview Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. In this chapter we outline how a theoretical base and state-of-the-art software can be integrated into a prototyping environment whose objective is to provide a set of well-supported tools for the solution of a broad class of problems in digital image processing. Background An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation. Little has been written in the way of instructional material to bridge the gap between theory and application in a well-supported software environment. The main objective of this book is to integrate under one cover a broad base of theoretical concepts with the knowledge required to implement those concepts using state-of-the-art image processing software tools. The theoretical underpinnings of the material in the following chapters are mainly from the leading textbook in the field: Digital Image Processing, by Gonzalez and Woods, published by Prentice Hall. The software code and supporting tools are based on the leading software package in the field: The MATLAB Image Processing Toolbox, † 1.1 † In the following discussion and in subsequent chapters we sometimes refer to Digital Image Processing by Gonzalez and Woods as \" the Gonzalez-Woods book, \" and to the Image Processing Toolbox as \" IPT \" or simply as the \" toolbox. \" 2 Chapter 1 I Introduction from The MathWorks, Inc. (see Section 1.3). The material in the present book shares the same design, notation, and style of presentation …",
"title": ""
}
] |
[
{
"docid": "6bb1914cbbaf0ba27a8ab52dbec2152a",
"text": "This paper presents a novel local feature for 3D range image data called `the line image'. It is designed to be highly viewpoint invariant by exploiting the range image to efficiently detect 3D occupancy, producing a representation of the surface, occlusions and empty spaces. We also propose a strategy for defining keypoints with stable orientations which define regions of interest in the scan for feature computation. The feature is applied to the task of object classification on sparse urban data taken with a Velodyne laser scanner, producing good results.",
"title": ""
},
{
"docid": "7be0d43664c4ebb3c66f58c485a517ce",
"text": "We consider problems requiring to allocate a set of rectangular items to larger rectangular standardized units by minimizing the waste. In two-dimensional bin packing problems these units are finite rectangles, and the objective is to pack all the items into the minimum number of units, while in two-dimensional strip packing problems there is a single standardized unit of given width, and the objective is to pack all the items within the minimum height. We discuss mathematical models, and survey lower bounds, classical approximation algorithms, recent heuristic and metaheuristic methods and exact enumerative approaches. The relevant special cases where the items have to be packed into rows forming levels are also discussed in detail. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "ed41127bf43b4f792f8cbe1ec652f7b2",
"text": "Today, more than 100 blockchain projects created to transform government systems are being conducted in more than 30 countries. What leads countries rapidly initiate blockchain projects? I argue that it is because blockchain is a technology directly related to social organization; Unlike other technologies, a consensus mechanism form the core of blockchain. Traditionally, consensus is not the domain of machines but rather humankind. However, blockchain operates through a consensus algorithm with human intervention; once that consensus is made, it cannot be modified or forged. Through utilization of Lawrence Lessig’s proposition that “Code is law,” I suggest that blockchain creates “absolute law” that cannot be violated. This characteristic of blockchain makes it possible to implement social technology that can replace existing social apparatuses including bureaucracy. In addition, there are three close similarities between blockchain and bureaucracy. First, both of them are defined by the rules and execute predetermined rules. Second, both of them work as information processing machines for society. Third, both of them work as trust machines for society. Therefore, I posit that it is possible and moreover unavoidable to replace bureaucracy with blockchain systems. In conclusion, I suggest five principles that should be adhered to when we replace bureaucracy with the blockchain system: 1) introducing Blockchain Statute law; 2) transparent disclosure of data and source code; 3) implementing autonomous executing administration; 4) building a governance system based on direct democracy and 5) making Distributed Autonomous Government(DAG).",
"title": ""
},
{
"docid": "7e8976250bd67e07fb71c6dd8b5be414",
"text": "With the rapid growth of product review forums, discussion groups, and Blogs, it is almost impossible for a customer to make an informed purchase decision. Different and possibly contradictory opinions written by different reviewers can even make customers more confused. In the last few years, mining customer reviews (opinion mining) has emerged as an interesting new research direction to address this need. One of the interesting problem in opinion mining is Opinion Question Answering (Opinion QA). While traditional QA can only answer factual questions, opinion QA aims to find the authors' sentimental opinions on a specific target. Current opinion QA systems suffers from several weaknesses. The main cause of these weaknesses is that these methods can only answer a question if they find a content similar to the given question in the given documents. As a result, they cannot answer majority questions like \"What is the best digital camera?\" nor comparative questions, e.g. \"Does SamsungY work better than CanonX?\". In this paper we address the problem of opinion question answering to answer opinion questions about products by using reviewers' opinions. Our proposed method, called Aspect-based Opinion Question Answering (AQA), support answering of opinion-based questions while improving the weaknesses of current techniques. AQA contains five phases: question analysis, question expansion, high quality review retrieval, subjective sentence extraction, and answer grouping. AQA adopts an opinion mining technique in the preprocessing phase to identify target aspects and estimate their quality. Target aspects are attributes or components of the target product that have been commented on in the review, e.g. 'zoom' and 'battery life' for a digital camera. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the AQA in terms of the accuracy of the retrieved answers.",
"title": ""
},
{
"docid": "85908a576c13755e792d52d02947f8b3",
"text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.",
"title": ""
},
{
"docid": "18b3328725661770be1f408f37c7eb64",
"text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.",
"title": ""
},
{
"docid": "511c4a62c32b32eb74761b0585564fe4",
"text": "In the previous chapters, we proposed several features for writer identification, historical manuscript dating and localization separately. In this chapter, we present a summarization of the proposed features for different applications by proposing a joint feature distribution (JFD) principle to design novel discriminative features which could be the joint distribution of features on adjacent positions or the joint distribution of different features on the same location. Following the proposed JFD principle, we introduce seventeen features, including twelve textural-based and five grapheme-based features. We evaluate these features for different applications from four different perspectives to understand handwritten documents beyond OCR, by writer identification, script recognition, historical manuscript dating and localization.",
"title": ""
},
{
"docid": "bd125a32cba00b4071c87aa42e7f3236",
"text": "With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (eg. Kinect) still comes with several challenges that result in noise or even incomplete shapes. Recent success in deep learning has shown how to learn complex shape distributions in a data-driven way from large scale 3D CAD Model collections and to utilize them for 3D processing on volumetric representations and thereby circumventing problems of topology and tessellation. Prior work has shown encouraging results on problems ranging from shape completion to recognition. We provide an analysis of such approaches and discover that training as well as the resulting representation are strongly and unnecessarily tied to the notion of object labels. Furthermore, deep learning research argues [1] that learning representation with over-complete model are more prone to overfitting compared to the approach that learns from noisy data. Thus, we investigate a full convolutional volumetric denoising auto encoder that is trained in a unsupervised fashion. It outperforms prior work on recognition as well as more challenging tasks like denoising and shape completion. In addition, our approach is atleast two order of magnitude faster at test time and thus, provides a path to scaling up 3D deep learning.",
"title": ""
},
{
"docid": "a6a2c027b809a98430ad80b837fa8090",
"text": "This paper presents a 60-GHz CMOS direct-conversion Doppler radar RF sensor with a clutter canceller for single-antenna noncontact human vital-signs detection. A high isolation quasi-circulator (QC) is designed to reduce the transmitting (Tx) power leakage (to the receiver). The clutter canceller performs cancellation for the Tx leakage power (from the QC) and the stationary background reflection clutter to enhance the detection sensitivity of weak vital signals. The integration of the 60-GHz RF sensor consists of the voltage-controlled oscillator, divided-by-2 frequency divider, power amplifier, QC, clutter canceller (consisting of variable-gain amplifier and 360 ° phase shifter), low-noise amplifier, in-phase/quadrature-phase sub-harmonic mixer, and three couplers. In the human vital-signs detection experimental measurement, at a distance of 75 cm, the detected heartbeat (1-1.3 Hz) and respiratory (0.35-0.45 Hz) signals can be clearly observed with a 60-GHz 17-dBi patch-array antenna. The RF sensor is fabricated in 90-nm CMOS technology with a chip size of 2 mm×2 mm and a consuming power of 217 mW.",
"title": ""
},
{
"docid": "78ccfdac121daaae3abe3f8f7c73482b",
"text": "We present a method for constructing smooth n-direction fields (line fields, cross fields, etc.) on surfaces that is an order of magnitude faster than state-of-the-art methods, while still producing fields of equal or better quality. Fields produced by the method are globally optimal in the sense that they minimize a simple, well-defined quadratic smoothness energy over all possible configurations of singularities (number, location, and index). The method is fully automatic and can optionally produce fields aligned with a given guidance field such as principal curvature directions. Computationally the smoothest field is found via a sparse eigenvalue problem involving a matrix similar to the cotan-Laplacian. When a guidance field is present, finding the optimal field amounts to solving a single linear system.",
"title": ""
},
{
"docid": "e3739a934ecd7b99f2d35a19f2aed5cf",
"text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.",
"title": ""
},
{
"docid": "bbb4f7b90ade0ffbf7ba3e598c18a78f",
"text": "In this paper, an analysis of the resistance of multi-track coils in printed circuit board (PCB) implementations, where the conductors have rectangular cross-section, for spiral planar coils is carried out. For this purpose, different analytical losses models for the mentioned conductors have been reviewed. From this review, we conclude that for the range of frequencies, the coil dimensions and the planar configuration typically used in domestic induction heating, the application in which we focus, these analysis are unsatisfactory. Therefore, in this work the resistance of multi-track winding has been calculated by means of finite element analysis (FEA) tool. These simulations provide us some design guidelines that allow us to optimize the design of multi-track coils for domestic induction heating. Furthermore, several prototypes are used to verify the simulated results, both single-turn coils and multi-turn coils.",
"title": ""
},
{
"docid": "96bd149346554dac9e3889f0b1569be7",
"text": "BACKGROUND\nFlight related low back pain (LBP) among helicopter pilots is frequent and may influence flight performance. Prolonged confined sitting during flights seems to weaken lumbar trunk (LT) muscles with associated secondary transient pain. Aim of the study was to investigate if structured training could improve muscular function and thus improve LBP related to flying.\n\n\nMETHODS\n39 helicopter pilots (35 men and 4 women), who reported flying related LBP on at least 1 of 3 missions last month, were allocated to two training programs over a 3-month period. Program A consisted of 10 exercises recommended for general LBP. Program B consisted of 4 exercises designed specifically to improve LT muscular endurance. The pilots were examined before and after the training using questionnaires for pain, function, quality of health and tests of LT muscular endurance as well as ultrasound measurements of the contractility of the lumbar multifidus muscle (LMM).\n\n\nRESULTS\nApproximately half of the participants performed the training per-protocol. Participants in this subset group had comparable baseline characteristics as the total study sample. Pre and post analysis of all pilots included, showed participants had marked improvement in endurance and contractility of the LMM following training. Similarly, participants had improvement in function and quality of health. Participants in program B had significant improvement in pain, function and quality of health.\n\n\nCONCLUSIONS\nThis study indicates that participants who performed a three months exercise program had improved muscle endurance at the end of the program. The helicopter pilots also experienced improved function and quality of health.\n\n\nTRIAL REGISTRATION\nIdentifier: NCT01788111 Registration date; February 5th, 2013, verified April 2016.",
"title": ""
},
{
"docid": "31dbedbcdb930ead1f8274ff2c181fcb",
"text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.",
"title": ""
},
{
"docid": "0048b244bd55a724f9bcf4dbf5e551a8",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "f83d8a69a4078baf4048b207324e505f",
"text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.",
"title": ""
},
{
"docid": "b16407fc67058110b334b047bcfea9ac",
"text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. 
Vygotsky’s Theory of Creativity Gunilla Lindqvist University of Karlstad Correspondence and requests for reprints should be sent to Gunilla Lindqvist, Department of Educational Sciences, University of Karlstad, 65188 Karlstad, Sweden. E-mail: gunilla.lindqvist@",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
},
{
"docid": "41cfa1840ef8b6f35865b220c087302b",
"text": "Ultra-high voltage (>10 kV) power devices based on SiC are gaining significant attentions since Si power devices are typically at lower voltage levels. In this paper, a world record 22kV Silicon Carbide (SiC) p-type ETO thyristor is developed and reported as a promising candidate for ultra-high voltage applications. The device is based on a 2cm2 22kV p type gate turn off thyristor (p-GTO) structure. Its static as well as dynamic performances are analyzed, including the anode to cathode blocking characteristics, forward conduction characteristics at different temperatures, turn-on and turn-off dynamic performances. The turn-off energy at 6kV, 7kV and 8kV respectively is also presented. In addition, theoretical boundary of the reverse biased safe operation area (RBSOA) of the 22kV SiC ETO is obtained by simulations and the experimental test also demonstrated a wide RBSOA.",
"title": ""
},
{
"docid": "945bf7690169b5f2e615324fb133bc19",
"text": "Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.",
"title": ""
}
] |
scidocsrr
|
0edabeebbf0365b18eeacd6d81e02853
|
A Stress Sensor Based on Galvanic Skin Response (GSR) Controlled by ZigBee
|
[
{
"docid": "1d51506f851a8b125edd7edcd8c6bd1b",
"text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.",
"title": ""
},
{
"docid": "963eb2a6225a1f320489a504f8010e94",
"text": "A method for recognizing the emotion states of subjects based on 30 features extracted from their Galvanic Skin Response (GSR) signals was proposed. GSR signals were acquired by means of experiments attended by those subjects. Next the data was normalized with the calm signal of the same subject after being de-noised. Then the normalized data were extracted features before the step of feature selection. Immune Hybrid Particle Swarm Optimization (IH-PSO) was proposed to select the feature subsets of different emotions. Classifier for feature selection was evaluated on the correct recognition as well as number of the selected features. At last, this paper verified the effectiveness of the feature subsets selected with another new data. All performed in this paper illustrate that IH-PSO can achieve much effective results, and further more, demonstrate that there is significant emotion information in GSR signal.",
"title": ""
}
] |
[
{
"docid": "3b07476ebb8b1d22949ec32fc42d2d05",
"text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.",
"title": ""
},
{
"docid": "dbfb89ae6abef4d3dd9fa7591f0c57b1",
"text": "While everyday document search is done by keyword-based queries to search engines, we have situations that need deep search of documents such as scrutinies of patents, legal documents, and so on. In such cases, using document queries, instead of keyword-based queries, can be more helpful because it exploits more information from the query document. This paper studies a scheme of document search based on document queries. In particular, it uses centrality vectors, instead of tf-idf vectors, to represent query documents, combined with the Word2vec method to capture the semantic similarity in contained words. This scheme improves the performance of document search and provides a way to find documents not only lexically, but semantically close to a query document.",
"title": ""
},
{
"docid": "01ea2d3c28382459aafa064e70e582d3",
"text": "* In recent decades, an intriguing view of human cognition has garnered increasing support. According to this view, which I will call 'the hypothesis of extended cognition' ('HEC', hereafter), human cognitive processing literally extends into the environment surrounding the organism, and human cognitive states literally comprise—as wholes do their proper parts— elements in that environment; in consequence, while the skin and scalp may encase the human organism, they do not delimit the thinking subject. 1 The hypothesis of extended cognition should provoke our critical interest. Acceptance of HEC would alter our approach to research and theorizing in cognitive science and, it would seem, significantly change our conception of persons. Thus, if HEC faces substantive difficulties, these should be brought to light; this paper is meant to do just that, exposing some of the problems HEC must overcome if it is to stand among leading views of the nature of human cognition. The essay unfolds as follows: The first section consists of preliminary remarks, mostly about the scope and content of HEC as I will construe it. Sections II and III clarify HEC by situating it with respect to related theses one finds in the literature—the hypothesis of embedded cognition Association. I would like to express my appreciation to members of all three audiences for their useful feedback (especially William Lycan at the Mountain-Plains and David Chalmers at the APA), as well as to my conference commentators, Robert Welshon and Tadeusz Zawidzki. I also benefited from discussing extended cognition with 2 and content-externalism. The remaining sections develop a series of objections to HEC and the arguments that have been offered in its support. The first objection appeals to common sense: HEC implies highly counterintuitive attributions of belief. Of course, HEC-theorists can take, and have taken, a naturalistic stand. They claim that HEC need not be responsive to commonsense objections, for HEC is being offered as a theoretical postulate of cognitive science; whether we should accept HEC depends, they say, on the value of the empirical work premised upon it. Thus, I consider a series of arguments meant to show that HEC is a promising causal-explanatory hypothesis, concluding that these arguments fail and that, ultimately, HEC appears to be of marginal interest as part of a philosophical foundation for cognitive science. If the cases canvassed here are any indication, adopting HEC results in a significant loss of explanatory power or, at the …",
"title": ""
},
{
"docid": "a4b57037235e306034211e07e8500399",
"text": "As wireless devices boom and bandwidth-hungry applications (e.g., video and cloud uploading) get popular, today's wireless local area networks (WLANs) become not only crowded but also stressed at throughput. Multiuser multiple-input-multiple-output (MU-MIMO), an advanced form of MIMO, has gained attention due to its huge potential in improving the performance of WLANs. This paper surveys random access-based medium access control (MAC) protocols for MU-MIMO-enabled WLANs. It first provides background information about the evolution and the fundamental MAC schemes of IEEE 802.11 Standards and Amendments, and then identifies the key requirements of designing MU-MIMO MAC protocols for WLANs. After this, the most representative MU-MIMO MAC proposals in the literature are overviewed by benchmarking their MAC procedures and examining the key components, such as the channel state information acquisition, decoding/precoding, and scheduling schemes. Classifications and discussions on important findings of the surveyed MAC protocols are provided, based on which, the research challenges for designing effective MU-MIMO MAC protocols, as well as the envisaged MAC's role in the future heterogeneous networks, are highlighted.",
"title": ""
},
{
"docid": "16db60e96604f65f8b6f4f70e79b8ae5",
"text": "Yahoo! Answers is currently one of the most popular question answering systems. We claim however that its user experience could be significantly improved if it could route the \"right question\" to the \"right user.\" Indeed, while some users would rush answering a question such as \"what should I wear at the prom?,\" others would be upset simply being exposed to it. We argue here that Community Question Answering sites in general and Yahoo! Answers in particular, need a mechanism that would expose users to questions they can relate to and possibly answer.\n We propose here to address this need via a multi-channel recommender system technology for associating questions with potential answerers on Yahoo! Answers. One novel aspect of our approach is exploiting a wide variety of content and social signals users regularly provide to the system and organizing them into channels. Content signals relate mostly to the text and categories of questions and associated answers, while social signals capture the various user interactions with questions, such as asking, answering, voting, etc. We fuse and generalize known recommendation approaches within a single symmetric framework, which incorporates and properly balances multiple types of signals according to channels. Tested on a large scale dataset, our model exhibits good performance, clearly outperforming standard baselines.",
"title": ""
},
{
"docid": "6a9e30fd08b568ef6607158cab4f82b2",
"text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.",
"title": ""
},
{
"docid": "1cde5c2c4e4fe5d791242da86d4dd06d",
"text": "Recent years have seen an increasing interest in micro aerial vehicles (MAVs) and flapping flight in connection to that. The Delft University of Technology has developed a flapping wing MAV, “DelFly II”, which relies on a flapping bi-plane wing configuration for thrust and lift. The ultimate aim of the present research is to improve the flight performance of the DelFly II from both an aerodynamic and constructional perspective. This is pursued by a parametric wing geometry study in combination with a detailed aerodynamic and aeroelastic investigation. In the geometry study an improved wing geometry was found, where stiffeners are placed more outboard for a more rigid in-flight wing shape. The improved wing shows a 10% increase in the thrust-to-power ratio. Investigations into the swirling strength around the DelFly wing in hovering flight show a leading edge vortex (LEV) during the inand out-stroke. The LEV appears to be less stable than in insect flight, since some shedding of LEV is present. Nomenclature Symbol Description Unit f Wing flapping frequency Hz P Power W R DelFly wing length (semi-span) mm T Thrust N λci Positive imaginary part of eigenvalue τ Dimensionless time Abbreviations LEV Leading Edge Vortex MAV Micro Aerial Vehicle UAV Unmanned Aerial Vehicle",
"title": ""
},
{
"docid": "358adb9e7fb3507d8cfe8af85e028686",
"text": "An under-recognized inflammatory dermatosis characterized by an evolution of distinctive clinicopathological features\" (2016).",
"title": ""
},
{
"docid": "968ea2dcfd30492a81a71be25f16e350",
"text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.",
"title": ""
},
{
"docid": "4c563b09a10ce0b444edb645ce411d42",
"text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic",
"title": ""
},
{
"docid": "1a819d090746e83676b0fc3ee94fd526",
"text": "Brain-computer interfaces (BCIs) use signals recorded from the brain to operate robotic or prosthetic devices. Both invasive and noninvasive approaches have proven effective. Achieving the speed, accuracy, and reliability necessary for real-world applications remains the major challenge for BCI-based robotic control.",
"title": ""
},
{
"docid": "b05fc1f939ff50dc07dbbc170cd28478",
"text": "A compact multiresonant antenna for octaband LTE/WWAN operation in the internal smartphone applications is proposed and discussed in this letter. With a small volume of 15×25×4 mm3, the presented antenna comprises two direct feeding strips and a chip-inductor-loaded two-branch shorted strip. The two direct feeding strips can provide two resonant modes at around 1750 and 2650 MHz, and the two-branch shorted strip can generate a double-resonance mode at about 725 and 812 MHz. Moreover, a three-element bandstop matching circuit is designed to generate an additional resonance for bandwidth enhancement of the lower band. Ultimately, up to five resonances are achieved to cover the desired 704-960- and 1710-2690-MHz bands. Simulated and measured results are presented to demonstrate the validity of the proposed antenna.",
"title": ""
},
{
"docid": "c497964a942cc4187ab5dd8c8ea1c6d4",
"text": "De novo sequencing is an important task in proteomics to identify novel peptide sequences. Traditionally, only one MS/MS spectrum is used for the sequencing of a peptide; however, the use of multiple spectra of the same peptide with different types of fragmentation has the potential to significantly increase the accuracy and practicality of de novo sequencing. Research into the use of multiple spectra is in a nascent stage. We propose a general framework to combine the two different types of MS/MS data. Experiments demonstrate that our method significantly improves the de novo sequencing of existing software.",
"title": ""
},
{
"docid": "f6826b5983bc4af466e42e149ac19ba8",
"text": "Automatic violence detection from video is a hot topic for many video surveillance applications. However, there has been little success in developing an algorithm that can detect violence in surveillance videos with high performance. In this paper, following our recently proposed idea of motion Weber local descriptor (WLD), we make two major improvements and propose a more effective and efficient algorithm for detecting violence from motion images. First, we propose an improved WLD (IWLD) to better depict low-level image appearance information, and then extend the spatial descriptor IWLD by adding a temporal component to capture local motion information and hence form the motion IWLD. Second, we propose a modified sparse-representation-based classification model to both control the reconstruction error of coding coefficients and minimize the classification error. Based on the proposed sparse model, a class-specific dictionary containing dictionary atoms corresponding to the class labels is learned using class labels of training samples. With this learned dictionary, not only the representation residual but also the representation coefficients become discriminative. A classification scheme integrating the modified sparse model is developed to exploit such discriminative information. The experimental results on three benchmark data sets have demonstrated the superior performance of the proposed approach over the state of the arts.",
"title": ""
},
{
"docid": "b850d522f3283e638a5995242ebe2b08",
"text": "Agile methods may produce software faster but we also need to know how they meet our quality requirements. In this paper we compare the waterfall model with agile processes to show how agile methods achieve software quality under time pressure and in an unstable requirements environment, i.e. we analyze agile software quality assurance. We present a detailed waterfall model showing its software quality support processes. We then show the quality practices that agile methods have integrated into their processes. This allows us to answer the question \"can agile methods ensure quality even though they develop software faster and can handle unstable requirements?\".",
"title": ""
},
{
"docid": "b23230f0386f185b7d5eb191034d58ec",
"text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.",
"title": ""
},
{
"docid": "b91204ac8a118fcde9a774e925f24a7e",
"text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.",
"title": ""
},
{
"docid": "95d8b83eadde6d6da202341c0b9238c8",
"text": "Numerous studies have demonstrated that water-based compost preparations, referred to as compost tea and compost-water extract, can suppress phytopathogens and plant diseases. Despite its potential, compost tea has generally been considered as inadequate for use as a biocontrol agent in conventional cropping systems but important to organic producers who have limited disease control options. The major impediments to the use of compost tea have been the lessthan-desirable and inconsistent levels of plant disease suppression as influenced by compost tea production and application factors including compost source and maturity, brewing time and aeration, dilution and application rate and application frequency. Although the mechanisms involved in disease suppression are not fully understood, sterilization of compost tea has generally resulted in a loss in disease suppressiveness. This indicates that the mechanisms of suppression are often, or predominantly, biological, although physico-chemical factors have also been implicated. Increasing the use of molecular approaches, such as metagenomics, metaproteomics, metatranscriptomics and metaproteogenomics should prove useful in better understanding the relationships between microbial abundance, diversity, functions and disease suppressive efficacy of compost tea. Such investigations are crucial in developing protocols for optimizing the compost tea production process so as to maximize disease suppressive effect without exposing the manufacturer or user to the risk of human pathogens. To this end, it is recommended that compost tea be used as part of an integrated disease management system.",
"title": ""
},
{
"docid": "72a86b52797d61bf631d75cd7109e9d9",
"text": "We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus’ open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.",
"title": ""
},
{
"docid": "3b302ce4b5b8b42a61c7c4c25c0f3cbf",
"text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.",
"title": ""
}
] |
scidocsrr
|
6f75cbc55edf5728ea099300c7dedca0
|
Summarization of Egocentric Videos: A Comprehensive Survey
|
[
{
"docid": "0ff159433ed8958109ba8006822a2d67",
"text": "In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text summaries written by humans. We show that our technique has higher agreement with human judgment than pixel-based distance metrics. We also release text annotations and ground-truth text summaries for a number of publicly available video datasets, for use by the computer vision community.",
"title": ""
},
{
"docid": "c2f1750b668ec7acdd53249773081927",
"text": "Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.",
"title": ""
}
] |
[
{
"docid": "79a16052e5e6a44ca6f9fef8ebac3c2d",
"text": "Plants are among the earth's most useful and beautiful products of nature. Plants have been crucial to mankind's survival. The urgent need is that many plants are at the risk of extinction. About 50% of ayurvedic medicines are prepared using plant leaves and many of these plant species belong to the endanger group. So it is indispensable to set up a database for plant protection. We believe that the first step is to teach a computer how to classify plants. Leaf /plant identification has been a challenge for many researchers. Several researchers have proposed various techniques. In this paper we have proposed a novel framework for recognizing and identifying plants using shape, vein, color, texture features which are combined with Zernike movements. Radial basis probabilistic neural network (RBPNN) has been used as a classifier. To train RBPNN we use a dual stage training algorithm which significantly enhances the performance of the classifier. Simulation results on the Flavia leaf dataset indicates that the proposed method for leaf recognition yields an accuracy rate of 93.82%",
"title": ""
},
{
"docid": "ef62b0e14f835a36c3157c1ae0f858e5",
"text": "Algorithms based on Convolutional Neural Network (CNN) have recently been applied to object detection applications, greatly improving their performance. However, many devices intended for these algorithms have limited computation resources and strict power consumption constraints, and are not suitable for algorithms designed for GPU workstations. This paper presents a novel method to optimise CNN-based object detection algorithms targeting embedded FPGA platforms. Given parameterised CNN hardware modules, an optimisation flow takes network architectures and resource constraints as input, and tunes hardware parameters with algorithm-specific information to explore the design space and achieve high performance. The evaluation shows that our design model accuracy is above 85% and, with optimised configuration, our design can achieve 49.6 times speed-up compared with software implementation.",
"title": ""
},
{
"docid": "b70032a5ca8382ac6853535b499f4937",
"text": "Centroid and spread are commonly used approaches in ranking fuzzy numbers. Some experts rank fuzzy numbers using centroid or spread alone while others tend to integrate them together. Although a lot of methods for ranking fuzzy numbers that are related to both approaches have been presented, there are still limitations whereby the ranking obtained is inconsistent with human intuition. This paper proposes a novel method for ranking fuzzy numbers that integrates the centroid point and the spread approaches and overcomes the limitations and weaknesses of most existing methods. Proves and justifications with regard to the proposed ranking method are also presented. 5",
"title": ""
},
{
"docid": "f8082d18f73bee4938ab81633ff02391",
"text": "Against the background of Moreno’s “cognitive-affective theory of learning with media” (CATLM) (Moreno, 2006), three papers on cognitive and affective processes in learning with multimedia are discussed in this commentary. The papers provide valuable insights in how cognitive processing and learning results can be affected by constructs such as “situational interest”, “positive emotions”, or “confusion”, and they suggest questions for further research in this field. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f45b7caf3c599a6de835330c39599570",
"text": "Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.",
"title": ""
},
{
"docid": "ff71838a3f8f44e30dc69ed2f9371bfc",
"text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.",
"title": ""
},
{
"docid": "5701585d5692b4b28da3132f4094fc9f",
"text": "We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.",
"title": ""
},
{
"docid": "d8f6f4bef57e26e9d2dc3684ea07a2f4",
"text": "Alzheimer's disease is a progressive neurodegenerative disease that typically manifests clinically as an isolated amnestic deficit that progresses to a characteristic dementia syndrome. Advances in neuroimaging research have enabled mapping of diverse molecular, functional, and structural aspects of Alzheimer's disease pathology in ever increasing temporal and regional detail. Accumulating evidence suggests that distinct types of imaging abnormalities related to Alzheimer's disease follow a consistent trajectory during pathogenesis of the disease, and that the first changes can be detected years before the disease manifests clinically. These findings have fuelled clinical interest in the use of specific imaging markers for Alzheimer's disease to predict future development of dementia in patients who are at risk. The potential clinical usefulness of single or multimodal imaging markers is being investigated in selected patient samples from clinical expert centres, but additional research is needed before these promising imaging markers can be successfully translated from research into clinical practice in routine care.",
"title": ""
},
{
"docid": "f11dc9f1978544823aeb61114d4f927f",
"text": "This paper presents a passive radar system using GSM as illuminator of opportunity. The new feature is the used high performance uniform linear antenna (ULA) for extracting both the reference and the echo signal in a software defined radar. The signal processing steps used by the proposed scheme are detailed and the feasibility of the whole system is proved by measurements.",
"title": ""
},
{
"docid": "ab4cada23ae2142e52c98a271c128c58",
"text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.",
"title": ""
},
{
"docid": "119dd2c7eb5533ece82cff7987f21dba",
"text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.",
"title": ""
},
{
"docid": "bb94ef2ab26fddd794a5b469f3b51728",
"text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "ee732b213767471c29f12e7d00f4ded3",
"text": "The increasing interest in scene text reading in multilingual environments raises the need to recognize and distinguish between different writing systems. In this paper, we propose a novel method for script identification in scene text using triplets of local convolutional features in combination with the traditional bag-of-visual-words model. Feature triplets are created by making combinations of descriptors extracted from local patches of the input images using a convolutional neural network. This approach allows us to generate a more descriptive codeword dictionary for the bag-of-visual-words model, as the low discriminative power of weak descriptors is enhanced by other descriptors in a triplet. The proposed method is evaluated on two public benchmark datasets for scene text script identification and a public dataset for script identification in video captions. The experiments demonstrate that our method outperforms the baseline and yields competitive results on all three datasets.",
"title": ""
},
{
"docid": "7f711c94920e0bfa8917ad1b5875813c",
"text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers infrastructures. The trend of Software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transaction towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.",
"title": ""
},
{
"docid": "d537214f407128585d6a4e6bab55a45b",
"text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.",
"title": ""
},
{
"docid": "fff9e38c618a6a644e3795bdefd74801",
"text": "Several code smell detection tools have been developed providing different results, because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform the largest experiment of applying machine learning algorithms to code smells to the best of our knowledge. We experiment 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performances in the cross-validation data set, yet the highest performances were obtained by J48 and Random Forest, while the worst performance were achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performances that need to be addressed in the future studies. We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.",
"title": ""
},
{
"docid": "04c8ed83fce5c5052a23d02082a11f00",
"text": "Usually, well-being has been measured by means of questionnaires or scales. Although most of these methods have a high level of reliability and validity, they present some limitations. In order to try to improve well-being assessment, in the present work, the authors propose a new complementary instrument: The Implicit Overall Well-Being Measure (IOWBM). The Implicit Association Test (IAT) was adapted to measure wellbeing by assessing associations of the self with well-being-related words. In the first study, the IOWBM showed good internal consistency and adequate temporal reliability. In the second study, it presented weak correlations with explicit well-being measures. The third study examined the validity of the measure, analyzing the effect of traumatic memories on implicit well-being. The results showed that people who remember a traumatic event presented low levels of implicit well-being compared with people in the control condition.",
"title": ""
},
{
"docid": "28fb1491be87cc850200eddd5011315d",
"text": "While Salsa and ChaCha are well known software oriented stream ciphers, since the work of Aumasson et al in FSE 2008 there aren’t many significant results against them. The basic model of their attack was to introduce differences in the IV bits, obtain biases after a few forward rounds, as well as to look at the Probabilistic Neutral Bits (PNBs) while reverting back. In this paper we first consider the biases in the forward rounds, and estimate an upper bound on the number of rounds till such biases can be observed. For this, we propose a hybrid model (under certain assumptions), where initially the nonlinear rounds as proposed by the designer are considered, and then we employ their linearized counterpart. The effect of reverting the rounds with the idea of PNBs is also considered. Based on the assumptions and analysis, we conclude that 12 rounds of Salsa and ChaCha should be considered sufficient for 256-bit keys under the current best known attack models.",
"title": ""
},
{
"docid": "53bed9c8e439ed9dcb64b8724a3fc389",
"text": "This paper presents the outcomes of research into an automatic classification system based on the lingual part of music. Two novel kinds of short features are extracted from lyrics using tf*idf and rhyme. Meta-learning algorithm is adapted to combine these two sets of features. Results show that our features promote the accuracy of classification and meta-learning algorithm is effective in fusing the two features.",
"title": ""
},
{
"docid": "45dfa7f6b1702942b5abfb8de920d1c2",
"text": "Loneliness is a common condition in older adults and is associated with increased morbidity and mortality, decreased sleep quality, and increased risk of cognitive decline. Assessing loneliness in older adults is challenging due to the negative desirability biases associated with being lonely. Thus, it is necessary to develop more objective techniques to assess loneliness in older adults. In this paper, we describe a system to measure loneliness by assessing in-home behavior using wireless motion and contact sensors, phone monitors, and computer software as well as algorithms developed to assess key behaviors of interest. We then present results showing the accuracy of the system in detecting loneliness in a longitudinal study of 16 older adults who agreed to have the sensor platform installed in their own homes for up to 8 months. We show that loneliness is significantly associated with both time out-of-home (β = -0.88 andp <; 0.01) and number of computer sessions (β = 0.78 and p <; 0.05). R2 for the model was 0.35. We also show the model's ability to predict out-of-sample loneliness, demonstrating that the correlation between true loneliness and predicted out-of-sample loneliness is 0.48. When compared with the University of California at Los Angeles loneliness score, the normalized mean absolute error of the predicted loneliness scores was 0.81 and the normalized root mean squared error was 0.91. These results represent first steps toward an unobtrusive, objective method for the prediction of loneliness among older adults, and mark the first time multiple objective behavioral measures that have been related to this key health outcome.",
"title": ""
}
] |
scidocsrr
|
b19be2d05a21a644912b86a5362899fa
|
Detecting Multipliers of Jihadism on Twitter
|
[
{
"docid": "d49d099d3f560584f2d080e7a1e2711f",
"text": "Dark Web forums are heavily used by extremist and terrorist groups for communication, recruiting, ideology sharing, and radicalization. These forums often have relevance to the Iraqi insurgency or Al-Qaeda and are of interest to security and intelligence organizations. This paper presents an automated approach to sentiment and affect analysis of selected radical international Ahadist Dark Web forums. The approach incorporates a rich textual feature representation and machine learning techniques to identify and measure the sentiment polarities and affect intensities expressed in forum communications. The results of sentiment and affect analysis performed on two large-scale Dark Web forums are presented, offering insight into the communities and participants.",
"title": ""
},
{
"docid": "fbcc3a5535d63e5a6dfb4e66bd5d7ad5",
"text": "Jihadist groups such as ISIS are spreading online propaganda using various forms of social media such as Twitter and YouTube. One of the most common approaches to stop these groups is to suspend accounts that spread propaganda when they are discovered. This approach requires that human analysts manually read and analyze an enormous amount of information on social media. In this work we make a first attempt to automatically detect messages released by jihadist groups on Twitter. We use a machine learning approach that classifies a tweet as containing material that is supporting jihadists groups or not. Even tough our results are preliminary and more tests needs to be carried out we believe that results indicate that an automated approach to aid analysts in their work with detecting radical content on social media is a promising way forward. It should be noted that an automatic approach to detect radical content should only be used as a support tool for human analysts in their work.",
"title": ""
}
] |
[
{
"docid": "c504800ce08654fb5bf49356d2f7fce3",
"text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.",
"title": ""
},
{
"docid": "c1d0497c80ffd6cf84b5ce5b09d841af",
"text": "Besides sensory characteristics of food, food-evoked emotion is a crucial factor in predicting consumer's food preference and therefore in developing new products. Many measures have been developed to assess food-evoked emotions. The aim of this literature review is (i) to give an exhaustive overview of measures used in current research and (ii) to categorize these methods along measurement level (physiological, behavioral, and cognitive) and emotional processing level (unconscious sensory, perceptual/early cognitive, and conscious/decision making) level. This 3 × 3 categorization may help researchers to compile a set of complementary measures (\"toolbox\") for their studies. We included 101 peer-reviewed articles that evaluate consumer's emotions and were published between 1997 and 2016, providing us with 59 different measures. More than 60% of these measures are based on self-reported, subjective ratings and questionnaires (cognitive measurement level) and assess the conscious/decision-making level of emotional processing. This multitude of measures and their overrepresentation in a single category hinders the comparison of results across studies and building a complete multi-faceted picture of food-evoked emotions. We recommend (1) to use widely applied, validated measures only, (2) to refrain from using (highly correlated) measures from the same category but use measures from different categories instead, preferably covering all three emotional processing levels, and (3) to acquire and share simultaneously collected physiological, behavioral, and cognitive datasets to improve the predictive power of food choice and other models.",
"title": ""
},
{
"docid": "0d23f763744f39614ecef498ed4c2c31",
"text": "Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that welltrained DNNs can be easily misled by adversarial examples (AE) – the maliciously crafted inputs by introducing small and imperceptible input perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from expensive retraining cost and demonstrate marginal robustness improvement against the stateof-the-art attacks like CW family adversarial examples. In this work, we propose a novel low-cost “feature distillation” strategy to purify the adversarial input perturbations of AEs by redesigning the popular image compression framework “JPEG”. The proposed “feature distillation” wisely maximizes the malicious feature loss of AE perturbations during image compression while suppressing the distortions of benign features essential for high accurate DNN classification. Experimental results show that our method can drastically reduce the success rate of various state-of-the-art AE attacks by ∼ 60% on average for both CIFAR-10 and ImageNet benchmarks without harming the testing accuracy, outperforming existing solutions like default JPEG compression and “feature squeezing”.",
"title": ""
},
{
"docid": "18316f4f3928fd49f852090e2396ff77",
"text": "OBJECTIVE\nTo provide a conceptual and clinical review of the physiology of the venous system as it is related to cardiac function in health and disease.\n\n\nDATA\nAn integration of venous and cardiac physiology under normal conditions, critical illness, and resuscitation.\n\n\nSUMMARY\nThe usual clinical teaching of cardiac physiology focuses on left ventricular pathophysiology and pathology. Due to the wide array of shock states dealt with by intensivists, an integrated approach that takes into account the function of the venous system and its interaction with the right heart may be more useful. In part II of this two-part review, we describe the physiology of venous return and its interaction with the right heart function as it relates to mechanical ventilation and various shock states including hypovolemic, cardiogenic, obstructive, and septic shock. In particular, we demonstrate how these shock states perturb venous return/right heart interactions. We also show how compensatory mechanisms and therapeutic interventions can tend to return venous return and cardiac output to appropriate values.\n\n\nCONCLUSION\nAn improved understanding of the role of the venous system in pathophysiologic conditions will allow intensivists to better appreciate the complex circulatory physiology of shock and related therapies. This should enable improved hemodynamic management of this disorder.",
"title": ""
},
{
"docid": "00828c9f8d8e0ef17505973d84f92dbf",
"text": "A new modeling approach for the design of planar multilayered meander-line polarizers is presented. For the first time a multielement equivalent circuit is adopted to characterize the meander-line unit cell. This equivalent circuit significantly improves the bandwidth performance with respect to the state-of-the-art. In addition to this, a polynomial interpolation matrix approach is employed to take into account the dependence on the meander-line geometrical parameters. This leads to an accuracy comparable to that of a full-wave analysis. At the same time, the computational cost is minimized so as to make this model suitable for real-time tuning and fast optimizations. A four-layer polarizer is designed to validate the presented modeling procedure. Comparison with full-wave simulations confirms its high accuracy over a wide frequency range.",
"title": ""
},
{
"docid": "f2a8396de66221e2a98d8e5fcb74d90d",
"text": "Clothoid splines are gaining popularity as a curve representation due to their intrinsically pleasing curvature, which varies piecewise linearly over arc length. However, constructing them from hand-drawn strokes remains difficult. Building on recent results, we describe a novel algorithm for approximating a sketched stroke with a fair (i.e., visually pleasing) clothoid spline. Fairness depends on proper segmentation of the stroke into curve primitives — lines, arcs, and clothoids. Our main idea is to cast the segmentation as a shortest path problem on a carefully constructed weighted graph. The nodes in our graph correspond to a vastly overcomplete set of curve primitives that are fit to every subsegment of the sketch, and edges correspond to transitions of a specified degree of continuity between curve primitives. The shortest path in the graph corresponds to a desirable segmentation of the input curve. Once the segmentation is found, the primitives are fit to the curve using non-linear constrained optimization. We demonstrate that the curves produced by our method have good curvature profiles, while staying close to the user sketch.",
"title": ""
},
{
"docid": "6abd94555aa69d5d27f75db272952a0e",
"text": "Text recognition in images is an active research area which attempts to develop a computer application with the ability to automatically read the text from images. Nowadays there is a huge demand of storing the information available on paper documents in to a computer readable form for later use. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. However to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved are: font characteristics of the characters in paper documents and quality of the images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus, there is a need of character recognition mechanisms to perform document image analysis which transforms documents in paper format to electronic format. In this paper, we have reviewed and analyzed different methods for text recognition from images. The objective of this review paper is to summarize the well-known methods for better understanding of the reader.",
"title": ""
},
{
"docid": "39bae837ee110a9ccb572ab50c91b624",
"text": "UNLABELLED\nCombined cup and stem anteversion in THA based on femoral anteversion has been suggested as a method to compensate for abnormal femoral anteversion. We investigated the combined anteversion technique using computer navigation. In 47 THAs, the surgeon first estimated the femoral broach anteversion and validated the position by computer navigation. The broach was then measured with navigation. The navigation screen was blocked while the surgeon estimated the anteversion of the broach. This provided two estimates of stem anteversion. The navigated stem anteversion was validated by postoperative CT scans. All cups were implanted using navigation alone. We determined precision (the reproducibility) and bias (how close the average test number is to the true value) of the stem position. Comparing the surgeon estimate to navigation anteversion, the precision of the surgeon was 16.8 degrees and bias was 0.2 degrees ; comparing the navigation of the stem to postoperative CT anteversion, the precision was 4.8 degrees and bias was 0.2 degrees , meaning navigation is accurate. Combined anteversion by postoperative CT scan was 37.6 degrees +/- 7 degrees (standard deviation) (range, 19 degrees -50 degrees ). The combined anteversion with computer navigation was within the safe zone of 25 degrees to 50 degrees for 45 of 47 (96%) hips. Femoral stem anteversion had a wide variability.\n\n\nLEVEL OF EVIDENCE\nLevel II, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence.",
"title": ""
},
{
"docid": "245d0644ff531177db0a09c1ba3f303d",
"text": "This paper presents, a new current mode four-quadrant CMOS analog multiplier/divider based on dual translinear loops. Compared with the previous works this circuit has a simpler structure resulting in lower power consumption and higher frequency response. Simulation results, performed using HSPICE with 0.25um technology, confirm performance of the proposed circuit.",
"title": ""
},
{
"docid": "fadabf5ba39d455ca59cc9dc0b37f79b",
"text": "We propose a speech enhancement algorithm based on single- and multi-microphone processing techniques. The core of the algorithm estimates a time-frequency mask which represents the target speech and use masking-based beamforming to enhance corrupted speech. Specifically, in single-microphone processing, the received signals of a microphone array are treated as individual signals and we estimate a mask for the signal of each microphone using a deep neural network (DNN). With these masks, in multi-microphone processing, we calculate a spatial covariance matrix of noise and steering vector for beamforming. In addition, we propose a masking-based post-filter to further suppress the noise in the output of beamforming. Then, the enhanced speech is sent back to DNN for mask re-estimation. When these steps are iterated for a few times, we obtain the final enhanced speech. The proposed algorithm is evaluated as a frontend for automatic speech recognition (ASR) and achieves a 5.05% average word error rate (WER) on the real environment test set of CHiME-3, outperforming the current best algorithm by 13.34%.",
"title": ""
},
{
"docid": "4ae0bb75493e5d430037ba03fcff4054",
"text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.",
"title": ""
},
{
"docid": "99b4a9cc7e579972d771783adcba149e",
"text": "This article reports on a generalizable system model design that analyzes the unstructured customer reviews inside the posts about electronic products on social networking websites. For the purposes of this study, posts on social networking websites have been mined and the keywords are extracted from such posts. The extracted keywords and the ontologies of electronic products and emotions form the base for the sentiment analysis model, which is used to understand online consumer behavior in the market. In order to enhance system accuracy, negating and enhancing terms are considered in the proposed model. As a result, it allows online businesses to use query to analyze the market trends of each product accurately based on the comments from user posts in social networking sites.",
"title": ""
},
{
"docid": "aa9a73ce240dd792ac815405b8ac3bc7",
"text": "This paper describes a real-time Speech Emotion Recognition (SER) task formulated as an image classification problem. The shift to an image classification paradigm provided the advantage of using an existing Deep Neural Network (AlexNet) pre-trained on a very large number of images, and thus eliminating the need for a lengthy network training process. Two alternative multi-class SER systems, AlexNet-SVM and FTAlexNet, were investigated. Both systems were shown to achieve state-of-the-art results when tested on a popular Berlin Emotional Speech (EMO-DB) database. Transformation from speech to image classification was achieved by creating RGB images depicting speech spectrograms. The ALEXNet-SVM method passes the spectrogram images as inputs to a pre-trained Convolutional Neural Network (AlexNet) to provide features for the Support Vector Machine (SVM) classifier, whereas the FTAlexNet method simply applies the images to a fine tuned AlexNet to provide emotional class labels. The FTAlexNet offers slightly higher accuracy compared to the AlexNet-SVM, while the AlexNet-SVM requires a lower number of computations due to the elimination of the neural network training procedure. A real-time demo is given on: https://www.youtube.com/watch?v=fuMpF3cUqDU&t=6s.",
"title": ""
},
{
"docid": "ac88402eb0ce5c4edc5b28655991e3da",
"text": "Reinforcement learning algorithms enable an agent to optimize its behavior from interacting with a specific environment. Although some very successful applications of reinforcement learning algorithms have been developed, it is still an open research question how to scale up to large dynamic environments. In this paper we will study the use of reinforcement learning on the popular arcade video game Ms. Pac-Man. In order to let Ms. Pac-Man quickly learn, we designed particular smart feature extraction algorithms that produce higher-order inputs from the game-state. These inputs are then given to a neural network that is trained using Q-learning. We constructed higher-order features which are relative to the action of Ms. Pac-Man. These relative inputs are then given to a single neural network which sequentially propagates the action-relative inputs to obtain the different Q-values of different actions. The experimental results show that this approach allows the use of only 7 input units in the neural network, while still quickly obtaining very good playing behavior. Furthermore, the experiments show that our approach enables Ms. Pac-Man to successfully transfer its learned policy to a different maze on which it was not trained before.",
"title": ""
},
{
"docid": "4c9313e27c290ccc41f3874108593bf6",
"text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.",
"title": ""
},
{
"docid": "20c2aea79b80c93783aa3f82a8aa2625",
"text": "The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.",
"title": ""
},
{
"docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79",
"text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.",
"title": ""
},
{
"docid": "5a912359338b6a6c011e0d0a498b3e8d",
"text": "Learning Granger causality for general point processes is a very challenging task. In this paper, we propose an effective method, learning Granger causality, for a special but significant type of point processes — Hawkes process. According to the relationship between Hawkes process’s impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions’ coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparsegroup-lasso (SGL) regularizer. Additionally, the flexibility of our model allows to incorporate the clustering structure event types into learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.",
"title": ""
},
{
"docid": "9f786e59441784d821da00d07d2fc42e",
"text": "Employees are the most important asset of the organization. It’s a major challenge for the organization to retain its workforce as a lot of cost is incurred on them directly or indirectly. In order to have competitive advantage over the other organizations, the focus has to be on the employees. As ultimately the employees are the face of the organization as they are the building blocks of the organization. Thus their retention is a major area of concern. So attempt has been made to reduce the turnover rate of the organization. Therefore this paper attempts to review the various antecedents of turnover which affect turnover intentions of the employees.",
"title": ""
},
{
"docid": "52fe696242f399d830d0a675bd766128",
"text": "Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an \"intentional stance\" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a \"teleological stance\" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.",
"title": ""
}
] |
scidocsrr
|
d329a8777725e85d84e5ef4d16d84a8c
|
Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes
|
[
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
}
] |
[
{
"docid": "bceb4e66638fba85a5b5d94e8546e4ee",
"text": "Data grows at the impressive rate of 50% per year, and 75% of the digital world is a copy! Although keeping multiple copies of data is necessary to guarantee their availability and long term durability, in many situations the amount of data redundancy is immoderate. By keeping a single copy of repeated data, data deduplication is considered as one of the most promising solutions to reduce the storage costs, and improve users experience by saving network bandwidth and reducing backup time. However, this solution must now solve many security issues to be completely satisfying. In this paper we target the attacks from malicious clients that are based on the manipulation of data identifiers and those based on backup time and network traffic observation. We present a deduplication scheme mixing an intraand an inter-user deduplication in order to build a storage system that is secure against the aforementioned type of attacks by controlling the correspondence between files and their identifiers, and making the inter-user deduplication unnoticeable to clients using deduplication proxies. Our method provides global storage space savings, per-client bandwidth network savings between clients and deduplication proxies, and global network bandwidth savings between deduplication proxies and the storage server. The evaluation of our solution compared to a classic system shows that the overhead introduced by our scheme is mostly due to data encryption which is necessary to ensure confidentiality.",
"title": ""
},
{
"docid": "f95e568513847369eba15e154461a3c1",
"text": "We address the problem of identifying the domain of onlinedatabases. More precisely, given a set F of Web forms automaticallygathered by a focused crawler and an online databasedomain D, our goal is to select from F only the formsthat are entry points to databases in D. Having a set ofWebforms that serve as entry points to similar online databasesis a requirement for many applications and techniques thataim to extract and integrate hidden-Web information, suchas meta-searchers, online database directories, hidden-Webcrawlers, and form-schema matching and merging.We propose a new strategy that automatically and accuratelyclassifies online databases based on features that canbe easily extracted from Web forms. By judiciously partitioningthe space of form features, this strategy allows theuse of simpler classifiers that can be constructed using learningtechniques that are better suited for the features of eachpartition. Experiments using real Web data in a representativeset of domains show that the use of different classifiersleads to high accuracy, precision and recall. This indicatesthat our modular classifier composition provides an effectiveand scalable solution for classifying online databases.",
"title": ""
},
{
"docid": "fb46f67ba94cb4d7dd7620e2bdf5f00e",
"text": "We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.\n Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.\n We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.",
"title": ""
},
{
"docid": "44368062de68f6faed57d43b8e691e35",
"text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.",
"title": ""
},
{
"docid": "b66a2ce976a145827b5b9a5dd2ad2495",
"text": "Compared to previous head-mounted displays, the compact and low-cost Oculus Rift has claimed to offer improved virtual reality experiences. However, how and what kinds of user experiences are encountered by people when using the Rift in actual gameplay has not been examined. We present an exploration of 10 participants' experiences of playing a first-person shooter game using the Rift. Despite cybersickness and a lack of control, participants experienced heightened experiences, a richer engagement with passive game elements, a higher degree of flow and a deeper immersion on the Rift than on a desktop setup. Overly demanding movements, such as the large range of head motion required to navigate the game environment were found to adversely affect gaming experiences. Based on these and other findings, we also present some insights for designing games for the Rift.",
"title": ""
},
{
"docid": "bb770a0cb686fbbb4ea1adb6b4194967",
"text": "Parental refusal of vaccines is a growing a concern for the increased occurrence of vaccine preventable diseases in children. A number of studies have looked into the reasons that parents refuse, delay, or are hesitant to vaccinate their child(ren). These reasons vary widely between parents, but they can be encompassed in 4 overarching categories. The 4 categories are religious reasons, personal beliefs or philosophical reasons, safety concerns, and a desire for more information from healthcare providers. Parental concerns about vaccines in each category lead to a wide spectrum of decisions varying from parents completely refusing all vaccinations to only delaying vaccinations so that they are more spread out. A large subset of parents admits to having concerns and questions about childhood vaccinations. For this reason, it can be helpful for pharmacists and other healthcare providers to understand the cited reasons for hesitancy so they are better prepared to educate their patients' families. Education is a key player in equipping parents with the necessary information so that they can make responsible immunization decisions for their children.",
"title": ""
},
{
"docid": "3b06ce783d353cff3cdbd9a60037162e",
"text": "The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the ‘rules’ for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.",
"title": ""
},
{
"docid": "d3e35963e85ade6e3e517ace58cb3911",
"text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.",
"title": ""
},
{
"docid": "96ee31337d66b8ccd3876c1575f9b10c",
"text": "Although different modeling techniques have been proposed during the last 300 years, the differential equation formalism proposed by Newton and Leibniz has been the tool of choice for modeling and problem solving Taylor (1996); Wainer (2009). Differential equations provide a formal mathematical method (sometimes also called an analytical method) for studying the entity of interest. Computational methods based on differential equations could not be easily applied in studying human-made dynamic systems (e.g., traffic controllers, robotic arms, automated factories, production plants, computer networks, VLSI circuits). These systems are usually referred to as discrete event systems because their states do not change continuously but, rather, because of the occurrence of events. This makes them asynchronous, inherently concurrent, and highly nonlinear, rendering their modeling and simulation different from that used in traditional approaches. In order to improve the model definition for this class of systems, a number of techniques were introduced, including Petri Nets, Finite State Machines, min-max algebra, Timed Automata, etc. Banks & Nicol. (2005); Cassandras (1993); Cellier & Kofman. (2006); Fishwick (1995); Law & Kelton (2000); Toffoli & Margolus. (1987). Wireless Sensor Network (WSN) is a discrete event system which consists of a network of sensor nodes equipped with sensing, computing, power, and communication modules to monitor certain phenomenon such as environmental data or object tracking Zhao & Guibas (2004). Emerging applications of wireless sensor networks are comprised of asset and warehouse *madani@ciit.net.pk †jawhaikaz@ciit.net.pk ‡mahlknecht@ict.tuwien.ac.at 1",
"title": ""
},
{
"docid": "a412f5facafdb2479521996c05143622",
"text": "A temperature and supply independent on-chip reference relaxation oscillator for low voltage design is described. The frequency of oscillation is mainly a function of a PVT robust biasing current. The comparator for the relaxation oscillator is replaced with a high speed common-source stage to eliminate the temperature dependency of the comparator delay. The current sources and voltages are biased by a PVT robust references derived from a bandgap circuitry. This oscillator is designed in TSMC 65 nm CMOS process to operate with a minimum supply voltage of 1.4 V and consumes 100 μW at 157 MHz frequency of oscillation. The oscillator exhibits frequency variation of 1.6% for supply changes from 1.4 V to 1.9 V, and ±1.2% for temperature changes from 20°C to 120°C.",
"title": ""
},
{
"docid": "dc33d2edcfb124af607bcb817589f6e9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "9734f4395c306763e6cc5bf13b0ca961",
"text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.",
"title": ""
},
{
"docid": "2575bad473ef55281db460617e0a37c8",
"text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.",
"title": ""
},
{
"docid": "f38554695eb3ca5b6d62b1445d8826b7",
"text": "Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. VINE works seamlessly with a breadth of neuroevolution algorithms, including ES and GA, and addresses the difficulty of observing the underlying dynamics of the learning process through an interactive visualization of the evolving agent's behavior characterizations over generations. As neuroevolution scales to neural networks with millions or more connections, visualization tools like VINE that offer fresh insight into the underlying dynamics of evolution become increasingly valuable and important for inspiring new innovations and applications.",
"title": ""
},
{
"docid": "ab2159730f00662ba29e25a0e27d1799",
"text": "This paper proposes a novel and efficient re-ranking technque to solve the person re-identification problem in the surveillance application. Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct matching may be not included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and other gallery person images, and revise the initial query result according to bidirectional ranking lists. The behind philosophy of our method is that images of the same person should not only have similar visual content, refer to content similarity, but also possess similar k-nearest neighbors, refer to context similarity. Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of computation load is accomplished by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which is especially suited to the real-time required video investigation task. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "5b57eb0b695a1c85d77db01e94904fb1",
"text": "Depth map super-resolution is an emerging topic due to the increasing needs and applications using RGB-D sensors. Together with the color image, the corresponding range data provides additional information and makes visual analysis tasks more tractable. However, since the depth maps captured by such sensors are typically with limited resolution, it is preferable to enhance its resolution for improved recognition. In this paper, we present a novel joint trilateral filtering (JTF) algorithm for solving depth map super-resolution (SR) problems. Inspired by bilateral filtering, our JTF utilizes and preserves edge information from the associated high-resolution (HR) image by taking spatial and range information of local pixels. Our proposed further integrates local gradient information of the depth map when synthesizing its HR output, which alleviates textural artifacts like edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.",
"title": ""
},
{
"docid": "61ba52f205c8b497062995498816b60f",
"text": "The past century experienced a proliferation of retail formats in the marketplace. However, as a new century begins, these retail formats are being threatened by the emergence of a new kind of store, the online or Internet store. From being almost a novelty in 1995, online retailing sales were expected to reach $7 billion by 2000 [9]. In this increasngly timeconstrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money [6]. Why is such counterintuitive phenomena prevailing? The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. Having said this, however, we must point out that consumers are buying goods on the Internet. This is reflected in the fact that total sales on the Internet are on the increase [8, 11]. Who are the consumers that are patronizing the Internet? Evidently, for them the perception of the risk associated with shopping on the Internet is low or is overshadowed by its relative convenience. This article attempts to determine why certain consumers are drawn to the Internet and why others are not. Since the pioneering research done by Becker [3], it has been accepted that the consumer maximizes his utility subject to not only income constraints but also time constraints. A consumer seeks out his best decision given that he has a limited budget of time and money. While purchasing a product from a store, a consumer has to expend both money and time. Therefore, the consumer patronizes the retail store where his total costs or the money and time spent in the entire process are the least. Since the util-",
"title": ""
},
{
"docid": "1c1988ae64bef3475f36eceaffda0b7d",
"text": "Home Office (Grant number: PTA-033-2005-00028). We gratefully acknowledge the three anonymous reviewers, whose comments and suggestions improved an earlier version of this paper. Criminologists have long contended that neighborhoods are important determinants of how individuals perceive their risk of criminal victimization. Yet, despite the theoretical importance and policy-relevance of these claims, the empirical evidence-base is surprisingly thin and inconsistent. Drawing on data from a national probability sample of individuals, linked to independent measures of neighborhood demographic characteristics, visual signs of physical disorder, and reported crime, we test four hypotheses about the mechanisms through which neighborhoods influence fear of crime. Our large sample size, analytical approach and the independence of our empirical measures enable us to overcome some of the limitations that have hampered much previous research into this question. We find that neighborhood structural characteristics, visual signs of disorder, and recorded crime all have direct and independent effects on individual level fear of crime. Additionally, we demonstrate that individual differences in fear of crime are strongly moderated by neighborhood socioeconomic characteristics; between group differences in expressed fear of crime are both exacerbated and ameliorated by the characteristics of the areas in which people live. interests include criminal statistics, neighborhood effects, missing data problems, and survey methodology. Methods at the University of Southampton. His research interests are in the areas of survey methodology, statistical methods, public opinion, and political behaviour.",
"title": ""
}
] |
scidocsrr
|
5df68dcfb86b34f85a01916e74852a7b
|
Attending to the present: mindfulness meditation reveals distinct neural modes of self-reference.
|
[
{
"docid": "c6e1c8aa6633ec4f05240de1a3793912",
"text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.",
"title": ""
},
{
"docid": "a55eed627afaf39ee308cc9e0e10a698",
"text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/ other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.",
"title": ""
},
{
"docid": "4b284736c51435f9ab6f52f174dc7def",
"text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.",
"title": ""
},
{
"docid": "34257e8924d8f9deec3171589b0b86f2",
"text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.",
"title": ""
}
] |
[
{
"docid": "5e453defd762bb4ecfae5dcd13182b4a",
"text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.",
"title": ""
},
{
"docid": "9ff76c8500a15d1c9b4a980b37bca505",
"text": "The thesis is about linear genetic programming (LGP), a machine learning approach that evolves computer programs as sequences of imperative instructions. Two fundamental differences to the more common tree-based variant (TGP) may be identified. These are the graph-based functional structure of linear genetic programs, on the one hand, and the existence of structurally noneffective code, on the other hand. The two major objectives of this work comprise (1) the development of more advanced methods and variation operators to produce better and more compact program solutions and (2) the analysis of general EA/GP phenomena in linear GP, including intron code, neutral variations, and code growth, among others. First, we introduce efficient algorithms for extracting features of the imperative and functional structure of linear genetic programs. In doing so, especially the detection and elimination of noneffective code during runtime will turn out as a powerful tool to accelerate the time-consuming step of fitness evaluation in GP. Variation operators are discussed systematically for the linear program representation. We will demonstrate that so called effective instruction mutations achieve the best performance in terms of solution quality. These mutations operate only on the (structurally) effective code and restrict the mutation step size to one instruction. One possibility to further improve their performance is to explicitly increase the probability of neutral variations. As a second, more time-efficient alternative we explicitly control the mutation step size on the effective code (effective step size). Minimum steps do not allow more than one effective instruction to change its effectiveness status. That is, only a single node may be connected to or disconnected from the effective graph component. It is an interesting phenomenon that, to some extent, the effective code becomes more robust against destructions over the generations already implicitly. A special concern of this thesis is to convince the reader that there are some serious arguments for using a linear representation. In a crossover-based comparison LGP has been found superior to TGP over a set of benchmark problems. Furthermore, linear solutions turned out to be more compact than tree solutions due to (1) multiple usage of subgraph results and (2) implicit parsimony pressure by structurally noneffective code. The phenomenon of code growth is analyzed for different linear genetic operators. When applying instruction mutations exclusively almost only neutral variations may be held responsible for the emergence and propagation of intron code. It is noteworthy that linear genetic programs may not grow if all neutral variation effects are rejected and if the variation step size is minimum. For the same reasons effective instruction mutations realize an implicit complexity control in linear GP which reduces a possible negative effect of code growth to a minimum. Another noteworthy result in this context is that program size is strongly increased by crossover while it is hardly influenced by mutation even if step sizes are not explicitly restricted.",
"title": ""
},
{
"docid": "664b9bb1f132a87e2f579945a31852b7",
"text": "Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object oriented programming and then tested with two textbooks of different domains—astronomy and molecular biology. Introduction",
"title": ""
},
{
"docid": "ddff0a3c6ed2dc036cf5d6b93d2da481",
"text": "Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).",
"title": ""
},
{
"docid": "89dbc16a2510e3b0e4a248f428a9ffc0",
"text": "Complex networks are ubiquitous in our daily life, with the World Wide Web, social networks, and academic citation networks being some of the common examples. It is well understood that modeling and understanding the network structure is of crucial importance to revealing the network functions. One important problem, known as community detection, is to detect and extract the community structure of networks. More recently, the focus in this research topic has been switched to the detection of overlapping communities. In this paper, based on the matrix factorization approach, we propose a method called bounded nonnegative matrix tri-factorization (BNMTF). Using three factors in the factorization, we can explicitly model and learn the community membership of each node as well as the interaction among communities. Based on a unified formulation for both directed and undirected networks, the optimization problem underlying BNMTF can use either the squared loss or the generalized KL-divergence as its loss function. In addition, to address the sparsity problem as a result of missing edges, we also propose another setting in which the loss function is defined only on the observed edges. We report some experiments on real-world datasets to demonstrate the superiority of BNMTF over other related matrix factorization methods.",
"title": ""
},
{
"docid": "fd0defe3aaabd2e27c7f9d3af47dd635",
"text": "A fast test for triangle-triangle intersection by computing signed vertex-plane distances (sufficient if one triangle is wholly to one side of the other) and signed line-line distances of selected edges (otherwise) is presented. This algorithm is faster than previously published algorithms and the code is available online.",
"title": ""
},
{
"docid": "0e600cedfbd143fe68165e20317c46d4",
"text": "We propose an efficient real-time automatic license plate recognition (ALPR) framework, particularly designed to work on CCTV video footage obtained from cameras that are not dedicated to the use in ALPR. At present, in license plate detection, tracking and recognition are reasonably well-tackled problems with many successful commercial solutions being available. However, the existing ALPR algorithms are based on the assumption that the input video will be obtained via a dedicated, high-resolution, high-speed camera and is/or supported by a controlled capture environment, with appropriate camera height, focus, exposure/shutter speed and lighting settings. However, typical video forensic applications may require searching for a vehicle having a particular number plate on noisy CCTV video footage obtained via non-dedicated, medium-to-low resolution cameras, working under poor illumination conditions. ALPR in such video content faces severe challenges in license plate localization, tracking and recognition stages. This paper proposes a novel approach for efficient localization of license plates in video sequence and the use of a revised version of an existing technique for tracking and recognition. A special feature of the proposed approach is that it is intelligent enough to automatically adjust for varying camera distances and diverse lighting conditions, a requirement for a video forensic tool that may operate on videos obtained by a diverse set of unspecified, distributed CCTV cameras.",
"title": ""
},
{
"docid": "75952b1d2c9c2f358c4c2e3401a00245",
"text": "This book is an outstanding contribution to the philosophical study of language and mind, by one of the most influential thinkers of our time. In a series of penetrating essays, Noam Chomsky cuts through the confusion and prejudice which has infected the study of language and mind, bringing new solutions to traditional philosophical puzzles and fresh perspectives on issues of general interest, ranging from the mind–body problem to the unification of science. Using a range of imaginative and deceptively simple linguistic analyses, Chomsky argues that there is no coherent notion of “language” external to the human mind, and that the study of language should take as its focus the mental construct which constitutes our knowledge of language. Human language is therefore a psychological, ultimately a “biological object,” and should be analysed using the methodology of the natural sciences. His examples and analyses come together in this book to give a unique and compelling perspective on language and the mind.",
"title": ""
},
{
"docid": "3bff3136e5e2823d0cca2f864fe9e512",
"text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.",
"title": ""
},
{
"docid": "7c3b5470398a219875ba1a6443119c8e",
"text": "Semantic role labeling (SRL) identifies the predicate-argument structure in text with semantic labels. It plays a key role in understanding natural language. In this paper, we present POLYGLOT, a multilingual semantic role labeling system capable of semantically parsing sentences in 9 different languages from 4 different language groups. The core of POLYGLOT are SRL models for individual languages trained with automatically generated Proposition Banks (Akbik et al., 2015). The key feature of the system is that it treats the semantic labels of the English Proposition Bank as “universal semantic labels”: Given a sentence in any of the supported languages, POLYGLOT applies the corresponding SRL and predicts English PropBank frame and role annotation. The results are then visualized to facilitate the understanding of multilingual SRL with this unified semantic representation.",
"title": ""
},
{
"docid": "5bca58cbd1ef80ebf040529578d2a72a",
"text": "In this letter, a printable chipless tag with electromagnetic code using split ring resonators is proposed. A 4 b chipless tag that can be applied to paper/plastic-based items such as ID cards, tickets, banknotes and security documents is designed. The chipless tag generates distinct electromagnetic characteristics by various combinations of a split ring resonator. Furthermore, a reader system is proposed to digitize electromagnetic characteristics and convert chipless tag to electromagnetic code.",
"title": ""
},
{
"docid": "b2c03d8e54a2a6840f6688ab9682e24b",
"text": "Path following and follow-the-leader motion is particularly desirable for minimally-invasive surgery in confined spaces which can only be reached using tortuous paths, e.g. through natural orifices. While path following and followthe- leader motion can be achieved by hyper-redundant snake robots, their size is usually not applicable for medical applications. Continuum robots, such as tendon-driven or concentric tube mechanisms, fulfill the size requirements for minimally invasive surgery, but yet follow-the-leader motion is not inherently provided. In fact, parameters of the manipulator's section curvatures and translation have to be chosen wisely a priori. In this paper, we consider a tendon-driven continuum robot with extensible sections. After reformulating the forward kinematics model, we formulate prerequisites for follow-the-leader motion and present a general approach to determine a sequence of robot configurations to achieve follow-the-leader motion along a given 3D path. We evaluate our approach in a series of simulations with 3D paths composed of constant curvature arcs and general 3D paths described by B-spline curves. Our results show that mean path errors <;0.4mm and mean tip errors <;1.6mm can theoretically be achieved for constant curvature paths and <;2mm and <;3.1mm for general B-spline curves respectively.",
"title": ""
},
{
"docid": "25bcbb44c843d71b7422905e9dbe1340",
"text": "INTRODUCTION\nThe purpose of this study was to evaluate the effect of using the transverse analysis developed at Case Western Reserve University (CWRU) in Cleveland, Ohio. The hypotheses were based on the following: (1) Does following CWRU's transverse analysis improve the orthodontic results? (2) Does following CWRU's transverse analysis minimize the active treatment duration?\n\n\nMETHODS\nA retrospective cohort research study was conducted on a randomly selected sample of 100 subjects. The sample had CWRU's analysis performed retrospectively, and the sample was divided according to whether the subjects followed what CWRU's transverse analysis would have suggested. The American Board of Orthodontics discrepancy index was used to assess the pretreatment records, and quality of the result was evaluated using the American Board of Orthodontics cast/radiograph evaluation. The Mann-Whitney test was used for the comparison.\n\n\nRESULTS\nCWRU's transverse analysis significantly improved the total cast/radiograph evaluation scores (P = 0.041), especially the buccolingual inclination component (P = 0.001). However, it did not significantly affect treatment duration (P = 0.106).\n\n\nCONCLUSIONS\nCWRU's transverse analysis significantly improves the orthodontic results but does not have significant effects on treatment duration.",
"title": ""
},
{
"docid": "e81f1caa398de7f56a70cc4db18d58db",
"text": "UNLABELLED\nThis study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts.\n\n\nIN CONCLUSION\n1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.",
"title": ""
},
{
"docid": "31cd031708856490f756d4399d7709d5",
"text": "Inspecting objects in the industry aims to guarantee product quality allowing problems to be corrected and damaged products to be discarded. Inspection is also widely used in railway maintenance, where wagon components need to be checked due to efficiency and safety concerns. In some organizations, hundreds of wagons are inspected visually by a human inspector, which leads to quality issues and safety risks for the inspectors. This paper describes a wagon component inspection approach using Deep Learning techniques to detect a particular damaged component: the shear pad. We compared our approach for convolutional neural networks with the state of art classification methods to distinguish among three shear pads conditions: absent, damaged, and undamaged shear pad. Our results are very encouraging showing empirical evidence that our approach has better performance than other classification techniques.",
"title": ""
},
{
"docid": "a697f85ad09699ddb38994bd69b11103",
"text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.",
"title": ""
},
{
"docid": "d8f8931af18f3e0a6424916dfac717ee",
"text": "Twitter data have brought new opportunities to know what happens in the world in real-time, and conduct studies on the human subjectivity on a diversity of issues and topics at large scale, which would not be feasible using traditional methods. However, as well as these data represent a valuable source, a vast amount of noise can be found in them. Because of the brevity of texts and the widespread use of mobile devices, non-standard word forms abound in tweets, which degrade the performance of Natural Language Processing tools. In this paper, a lexical normalization system of tweets written in Spanish is presented. The system suggests normalization candidates for out-of-vocabulary (OOV) words based on similarity of graphemes or phonemes. Using contextual information, the best correction candidate for a word is selected. Experimental results show that the system correctly detects OOV words and the most of cases suggests the proper corrections. Together with this, results indicate a room for improvement in the correction candidate selection. Compared with other methods, the overall performance of the system is above-average and competitive to different approaches in the literature.",
"title": ""
},
{
"docid": "da5c1445453853e23477bfea79fd4605",
"text": "This paper presents an 8-bit column-driver IC with improved deviation of voltage output (DVO) for thin-film-transistor (TFT) liquid crystal displays (LCDs). The various DVO results contributed by the output buffer of a column driver are predicted by using Monte Carlo simulation under different variation conditions. Relying on this prediction, a better compromise can be achieved between DVO and chip size. This work was implemented using 0.35-μm CMOS technology and the measured maximum DVO is only 6.2 mV.",
"title": ""
},
{
"docid": "f598677e19789c92c31936440e709c4d",
"text": "Temporal datasets, in which data evolves continuously, exist in a wide variety of applications, and identifying anomalous or outlying objects from temporal datasets is an important and challenging task. Different from traditional outlier detection, which detects objects that have quite different behavior compared with the other objects, temporal outlier detection tries to identify objects that have different evolutionary behavior compared with other objects. Usually objects form multiple communities, and most of the objects belonging to the same community follow similar patterns of evolution. However, there are some objects which evolve in a very different way relative to other community members, and we define such objects as evolutionary community outliers. This definition represents a novel type of outliers considering both temporal dimension and community patterns. We investigate the problem of identifying evolutionary community outliers given the discovered communities from two snapshots of an evolving dataset. To tackle the challenges of community evolution and outlier detection, we propose an integrated optimization framework which conducts outlier-aware community matching across snapshots and identification of evolutionary outliers in a tightly coupled way. A coordinate descent algorithm is proposed to improve community matching and outlier detection performance iteratively. Experimental results on both synthetic and real datasets show that the proposed approach is highly effective in discovering interesting evolutionary community outliers.",
"title": ""
},
{
"docid": "04271124470c613da4dd4136ceb61a18",
"text": "In this paper, we propose the deep reinforcement relevance network (DRRN), a novel deep architecture, for handling an unbounded action space with applications to language understanding for text-based games. For a particular class of games, a user must choose among a variable number of actions described by text, with the goal of maximizing long-term reward. In these games, the best action is typically that which fits the best to the current situation (modeled as a state in the DRRN), also described by text. Because of the exponential complexity of natural language with respect to sentence length, there is typically an unbounded set of unique actions. Therefore, it is very difficult to pre-define the action set as in the deep Q-network (DQN). To address this challenge, the DRRN extracts high-level embedding vectors from the texts that describe states and actions, respectively, and computes the inner products between the state and action embedding vectors to approximate the Q-function. We evaluate the DRRN on two popular text games, showing superior performance over the DQN.",
"title": ""
}
] |
scidocsrr
|
5a3ca7db556a984972d8a3c90fc4ba34
|
A 7-µW 2.4-GHz wake-up receiver with -80 dBm sensitivity and high co-channel interferer tolerance
|
[
{
"docid": "e30cedcb4cb99c4c3b2743c5359cf823",
"text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.",
"title": ""
}
] |
[
{
"docid": "caa30379a2d0b8be2e1b4ddf6e6602c2",
"text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).",
"title": ""
},
{
"docid": "0ef6e54d7190dde80ee7a30c5ecae0c3",
"text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.",
"title": ""
},
{
"docid": "8b0e62dd3a6241eaaa64c40728c2c259",
"text": "This thesis discusses aspects of a novel solar concentrating photovoltaic / thermal (PV/T) collector that has been designed to produce both electricity and hot water. The motivation for the development of the Combined Heat and Power Solar (CHAPS) collector is twofold: in the short term, to produce photovoltaic power and solar hot water at a cost which is competitive with other renewable energy technologies, and in the longer term, at a cost which is lower than possible with current technologies. To the author’ s knowledge, the CHAPS collector is the first PV/T system using a reflective linear concentrator with a concentration ratio in the range 20-40x. The work contained in this thesis is a thorough study of all facets of the CHAPS collector, through a combination of theoretical and experimental investigation. A theoretical discussion of the concept of ‘energy value’ is presented, with the aim of developing methodologies that could be used in optimisation studies to compare the value of electrical and thermal energy. Three approaches are discussed; thermodynamic methods, using second law concepts of energy usefulness; economic valuation of the hot water and electricity through levelised energy costs; and environmental valuation, based on the greenhouse gas emissions associated with the generation of hot water and electricity. It is proposed that the value of electrical energy and thermal energy is best compared using a simple ratio. Experimental measurement of the thermal and electrical efficiency of a CHAPS receiver was carried out for a range of operating temperatures and fluid flow rates. The effectiveness of internal fins incorporated to augment heat transfer was examined. The glass surface temperature was measured using an infrared camera, to assist in the calculation of thermal losses, and to help determine the extent of radiation absorbed in the cover materials. FEA analysis, using the software package Strand7, examines the conductive heat transfer within the receiver body to obtain a temperature profile under operating conditions. Electrical efficiency is not only affected by temperature, but by non-uniformities in the radiation flux profile. Highly non-uniform illumination across the cells was found to reduce the efficiency by about 10% relative. The radiation flux profile longitudinal to the receivers was measured by a custom-built flux scanning device. The results show significant fluctuations in the flux profile and, at worst, the minimum flux intensity is as much as 27% lower than the median. A single cell with low flux intensity limits the current and performance of all cells in series, causing a significant drop in overall output. Therefore, a detailed understanding of the causes of flux non-uniformities is essential for the design of a single-axis tracking PV trough concentrator. Simulation of the flux profile was carried out",
"title": ""
},
{
"docid": "035f780309fc777ece17cbfe4aabc01b",
"text": "The phenolic composition and antibacterial and antioxidant activities of the green alga Ulva rigida collected monthly for 12 months were investigated. Significant differences in antibacterial activity were observed during the year with the highest inhibitory effect in samples collected during spring and summer. The highest free radical scavenging activity and phenolic content were detected in U. rigida extracts collected in late winter (February) and early spring (March). The investigation of the biological properties of U. rigida fractions collected in spring (April) revealed strong antimicrobial and antioxidant activities. Ethyl acetate and n-hexane fractions exhibited substantial acetylcholinesterase inhibitory capacity with EC50 of 6.08 and 7.6 μg mL−1, respectively. The total lipid, protein, ash, and individual fatty acid contents of U. rigida were investigated. The four most abundant fatty acids were palmitic, oleic, linolenic, and eicosenoic acids.",
"title": ""
},
{
"docid": "53007a9a03b7db2d64dd03973717dc0f",
"text": "We present two children with hypoplasia of the left trapezius muscle and a history of ipsilateral transient neonatal brachial plexus palsy without documented trapezius weakness. Magnetic resonance imaging in these patients with unilateral left hypoplasia of the trapezius revealed decreased muscles in the left side of the neck and left supraclavicular region on coronal views, decreased muscle mass between the left splenius capitis muscle and the subcutaneous tissue at the level of the neck on axial views, and decreased size of the left paraspinal region on sagittal views. Three possibilities can explain the association of hypoplasia of the trapezius and obstetric brachial plexus palsy: increased vulnerability of the brachial plexus to stretch injury during delivery because of intrauterine trapezius weakness, a casual association of these two conditions, or an erroneous diagnosis of brachial plexus palsy in patients with trapezial weakness. Careful documentation of neck and shoulder movements can distinguish among shoulder weakness because of trapezius hypoplasia, brachial plexus palsy, or brachial plexus palsy with trapezius hypoplasia. Hence, we recommend precise documentation of neck movements in the initial description of patients with suspected neonatal brachial plexus palsy.",
"title": ""
},
{
"docid": "78cda62ca882bb09efc08f7d4ea1801e",
"text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven",
"title": ""
},
{
"docid": "d4fb664caa02b81909bc51291d3fafd7",
"text": "This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence. We formulate non-rigid structure from motion (nrsfm) as a global variational energy minimization problem to estimate dense low-rank smooth 3D shapes for every frame along with the camera motion matrices, given dense 2D correspondences. Unlike traditional factorization based approaches to nrsfm, which model the low-rank non-rigid shape using a fixed number of basis shapes and corresponding coefficients, we minimize the rank of the matrix of time-varying shapes directly via trace norm minimization. In conjunction with this low-rank constraint, we use an edge preserving total-variation regularization term to obtain spatially smooth shapes for every frame. Thanks to proximal splitting techniques the optimization problem can be decomposed into many point-wise sub-problems and simple linear systems which can be easily solved on GPU hardware. We show results on real sequences of different objects (face, torso, beating heart) where, despite challenges in tracking, illumination changes and occlusions, our method reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.",
"title": ""
},
{
"docid": "d32a9b0b4f470f99cdd6a57d18395582",
"text": "Information technology (IT) has a tremendous impact on the discipline of accounting by introducing new ways of retrieving and processing information a bout performance deviations and control effectiveness. This paper explores the role of IT f or managing organizational controls by analyzing value drivers for particular accounting information systems that commonly run under the label of Governance, Risk Management, and Compliance (GRC IS ). We apply a grounded theory approach to structure the value drivers of GRC IS into a resear ch f amework. In order to understand the impact of IT, we relate the GRC IS value drivers to control t heories. Practical implications include understanding GRC IS benefits beyond compliance and providing clear strategic reasoning for GRC IS depending on the individual company’s situation. Research implications include the fact that integrating IT into the context of accounting leave s several unsolved yet promising issues in theory which future research might address. This paper is the first to use the lens of organizational control theories on Governance, Risk Management, and Compli ance information systems and establishes a potentially fruitful research agenda for GRC IS as a highly relevant topic for information systems research.",
"title": ""
},
{
"docid": "3eb419ef59ad59e60bf357cfb2e69fba",
"text": "Heterogeneous information network (HIN) has been widely adopted in recommender systems due to its excellence in modeling complex context information. Although existing HIN based recommendation methods have achieved performance improvement to some extent, they have two major shortcomings. First, these models seldom learn an explicit representation for path or meta-path in the recommendation task. Second, they do not consider the mutual effect between the meta-path and the involved user-item pair in an interaction. To address these issues, we develop a novel deep neural network with the co-attention mechanism for leveraging rich meta-path based context for top-N recommendation. We elaborately design a three-way neural interaction model by explicitly incorporating meta-path based context. To construct the meta-path based context, we propose to use a priority based sampling technique to select high-quality path instances. Our model is able to learn effective representations for users, items and meta-path based context for implementing a powerful interaction function. The co-attention mechanism improves the representations for meta-path based con- text, users and items in a mutual enhancement way. Extensive experiments on three real-world datasets have demonstrated the effectiveness of the proposed model. In particular, the proposed model performs well in the cold-start scenario and has potentially good interpretability for the recommendation results.",
"title": ""
},
{
"docid": "c44420fbcf9e6da8e22c616a14707f45",
"text": "This article discusses the impact of artificially intelligent computers to the process of design, play and educational activities. A computational process which has the necessary intelligence and creativity to take a proactive role in such activities can not only support human creativity but also foster it and prompt lateral thinking. The argument is made both from the perspective of human creativity, where the computational input is treated as an external stimulus which triggers re-framing of humans’ routines and mental associations, but also from the perspective of computational creativity where human input and initiative constrains the search space of the algorithm, enabling it to focus on specific possible solutions to a problem rather than globally search for the optimal. The article reviews four mixed-initiative tools (for design and educational play) based on how they contribute to human-machine co-creativity. These paradigms serve different purposes, afford different human interaction methods and incorporate different computationally creative processes. Assessing how co-creativity is facilitated on a per-paradigm basis strengthens the theoretical argument and provides an initial seed for future work in the burgeoning domain of mixed-initiative interaction.",
"title": ""
},
{
"docid": "4fb6e2a74562e0442fb7bce743ccd95a",
"text": "Multiple-group confirmatory factor analysis (MG-CFA) is among the most productive extensions of structural equation modeling. Many researchers conducting cross-cultural or longitudinal studies are interested in testing for measurement and structural invariance. The aim of the present paper is to provide a tutorial in MG-CFA using the freely available R-packages lavaan, semTools, and semPlot. The combination of these packages enable a highly efficient analysis of the measurement models both for normally distributed as well as ordinal data. Data from two freely available datasets – the first with continuous the second with ordered indicators will be used to provide a walk-through the individual steps.",
"title": ""
},
{
"docid": "2595b6e8c505ae7c2799c2e5272d9e22",
"text": "High resolution imaging modalities. combined with advances in computer technology has prompted renewed interest and led to significant progress in volumetric reconstruction of medical images. Clinical assessment of this technique and whether it can provide enhanced diagnostic interpretation is currently under investigation by various medical and scientific groups. The purpose of this panel is to evaluate the clinical utility of two major 3D rendering techniques that allow the user to “fly through” and around medical data-sets.",
"title": ""
},
{
"docid": "4e7ce0c3696838f77bffd4ddeb1574a9",
"text": "Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. For practical use in clinical routine, such an algorithm should be fast, automatic and robust to contrast-agent enhancement and fields of view. By combining and refining state-of-the-art techniques (random forests and template deformation), we demonstrate the possibility of building an algorithm that meets these requirements. Kidneys are localized with random forests following a coarse-to-fine strategy. Their initial positions detected with global contextual information are refined with a cascade of local regression forests. A classification forest is then used to obtain a probabilistic segmentation of both kidneys. The final segmentation is performed with an implicit template deformation algorithm driven by these kidney probability maps. Our method has been validated on a highly heterogeneous database of 233 CT scans from 89 patients. 80% of the kidneys were accurately detected and segmented (Dice coefficient > 0.90) in a few seconds per volume.",
"title": ""
},
{
"docid": "1fb8701f0ad0a9e894e4195bc02d5c25",
"text": "As graphics processing units (GPUs) are broadly adopted, running multiple applications on a GPU at the same time is beginning to attract wide attention. Recent proposals on multitasking GPUs have focused on either spatial multitasking, which partitions GPU resource at a streaming multiprocessor (SM) granularity, or simultaneous multikernel (SMK), which runs multiple kernels on the same SM. However, multitasking performance varies heavily depending on the resource partitions within each scheme, and the application mixes. In this paper, we propose GPU Maestro that performs dynamic resource management for efficient utilization of multitasking GPUs. GPU Maestro can discover the best performing GPU resource partition exploiting both spatial multitasking and SMK. Furthermore, dynamism within a kernel and interference between the kernels are automatically considered because GPU Maestro finds the best performing partition through direct measurements. Evaluations show that GPU Maestro can improve average system throughput by 20.2% and 13.9% over the baseline spatial multitasking and SMK, respectively.",
"title": ""
},
{
"docid": "ee96b4c7d15008f4b8831ecf2d337b1d",
"text": "This paper proposes the identification of regions of interest in biospeckle patterns using unsupervised neural networks of the type Self-Organizing Maps. Segmented images are obtained from the acquisition and processing of laser speckle sequences. The dynamic speckle is a phenomenon that occurs when a beam of coherent light illuminates a sample in which there is some type of activity, not visible, which results in a variable pattern over time. In this particular case the method is applied to the evaluation of bacterial chemotaxis. Image stacks provided by a set of experiments are processed to extract features of the intensity dynamics. A Self-Organizing Map is trained and its cells are colored according to a criterion of similarity. During the recall stage the features of patterns belonging to a new biospeckle sample impact on the map, generating a new image using the color of the map cells impacted by the sample patterns. It is considered that this method has shown better performance to identify regions of interest than those that use a single descriptor. To test the method a chemotaxis assay experiment was performed, where regions were differentiated according to the bacterial motility within the sample.",
"title": ""
},
{
"docid": "e1c927d7fbe826b741433c99fff868d0",
"text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.",
"title": ""
},
{
"docid": "a48622ff46323acf1c40345d3e61b636",
"text": "In this paper we present a novel dataset for a critical aspect of autonomous driving, the joint attention that must occur between drivers and of pedestrians, cyclists or other drivers. This dataset is produced with the intention of demonstrating the behavioral variability of traffic participants. We also show how visual complexity of the behaviors and scene understanding is affected by various factors such as different weather conditions, geographical locations, traffic and demographics of the people involved. The ground truth data conveys information regarding the location of participants (bounding boxes), the physical conditions (e.g. lighting and speed) and the behavior of the parties involved.",
"title": ""
},
{
"docid": "d2e6aa2ab48cdd1907f3f373e0627fa8",
"text": "We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way to share information between different threads inspired by gossip algorithms and showing good consensus convergence properties. Our method called GoSGD has the advantage to be fully asynchronous and decentralized. We compared our method to the recent EASGD in [17] on CIFAR-10 show encouraging results.",
"title": ""
},
{
"docid": "d53726710ce73fbcf903a1537f149419",
"text": "We treat in this paper Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever is the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereas an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem via solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.",
"title": ""
},
{
"docid": "007f741a718d0c4a4f181676a39ed54a",
"text": "Following the development of computing and communication technologies, the idea of Internet of Things (IoT) has been realized not only at research level but also at application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices, then conduct face recognition using the extracted key-frames on the Cloud back-end. With our key-frame extraction engine, we are able to reduce the data volume hence dramatically relief the processing pressure of the cloud back-end. Our experimental results show with IoT edge device acceleration, it is possible to implement face in video recognition application without introducing the middle-ware or cloud-let layer, while still achieving real-time processing speed.",
"title": ""
}
] |
scidocsrr
|
62987e20e97911c7286ff5be3aae3f28
|
Learning to Train a Binary Neural Network
|
[
{
"docid": "40a87654ac33c46f948204fd5c7ef4c1",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
}
] |
[
{
"docid": "571009136d227f8df3b8caa125322b61",
"text": "Need an excellent electronic book? fuzzy graphs and fuzzy hypergraphs by , the most effective one! Wan na get it? Discover this outstanding e-book by here currently. Download and install or review online is readily available. Why we are the most effective website for downloading this fuzzy graphs and fuzzy hypergraphs Of course, you could select guide in numerous file types and media. Look for ppt, txt, pdf, word, rar, zip, and also kindle? Why not? Get them below, currently!",
"title": ""
},
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f",
"text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.",
"title": ""
},
{
"docid": "e13b4b92c639a5b697356466e00e05c3",
"text": "In fashion retailing, the display of product inventory at the store is important to capture consumers’ attention. Higher inventory levels might allow more attractive displays and thus increase sales, in addition to avoiding stock-outs. We develop a choice model where product demand is indeed affected by inventory, and controls for product and store heterogeneity, seasonality, promotions and potential unobservable shocks in each market. We empirically test the model with daily traffic, inventory and sales data from a large retailer, at the store-day-product level. We find that the impact of inventory level on sales is positive and highly significant, even in situations of extremely high service level. The magnitude of this effect is large: each 1% increase in product-level inventory at the store increases sales of 0.58% on average. This supports the idea that inventory has a strong role in helping customers choose a particular product within the assortment. We finally describe how a retailer should optimally decide its inventory levels within a category and describe the properties of the optimal solution. Applying such optimization to our data set yields consistent and significant revenue improvements, of more than 10% for any date and store compared to current practices. Submitted: April 6, 2016. Revised: May 17, 2017",
"title": ""
},
{
"docid": "838a79ec0376a23ac24a462a00d140dc",
"text": "Bounding the generalization error of learning algorithms has a long history, which yet falls short in explaining various generalization successes including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, (ii) exploiting the dependence between the algorithm’s input and output. Progress on the first point was made with the chaining method, originating from the work of Kolmogorov, and used in the VC-dimension bound. More recently, progress on the second point was made with the mutual information method by Russo and Zou ’15. Yet, these two methods are currently disjoint. In this paper, we introduce a technique to combine chaining and mutual information methods, to obtain a generalization bound that is both algorithm-dependent and that exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley’s inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.",
"title": ""
},
{
"docid": "aaf81989a3d1081baff7aea34b0b97f1",
"text": "Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory---the optimal co-clustering maximizes the mutual information between the clustered random variables subject to constraints on the number of row and column clusters. We present an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages. Using the practical example of simultaneous word-document clustering, we demonstrate that our algorithm works well in practice, especially in the presence of sparsity and high-dimensionality.",
"title": ""
},
{
"docid": "2b30506690acbae9240ef867e961bc6c",
"text": "Background Breast milk can turn pink with Serratia marcescens colonization, this bacterium has been associated with several diseases and even death. It is seen most commonly in the intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marsescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both the cases. Conclusions Pink breast milk is caused by S. marsescens colonization. In such cases,early recognition and treatment before the development of infection is recommended to return to breastfeeding.",
"title": ""
},
{
"docid": "5ed1f4c5f554a29de926f6d4980cda89",
"text": "Capsule Networks (CapsNet) are recently proposed multi-stage computational models specialized for entity representation and discovery in image data. CapsNet employs iterative routing that shapes how the information cascades through different levels of interpretations. In this work, we investigate i) how the routing affects the CapsNet model fitting, ii) how the representation by capsules helps discover global structures in data distribution and iii) how learned data representation adapts and generalizes to new tasks. Our investigation shows: i) routing operation determines the certainty with which one layer of capsules pass information to the layer above, and the appropriate level of certainty is related to the model fitness, ii) in a designed experiment using data with a known 2D structure, capsule representations allow more meaningful 2D manifold embedding than neurons in a standard CNN do and iii) compared to neurons of standard CNN, capsules of successive layers are less coupled and more adaptive to new data distribution.",
"title": ""
},
{
"docid": "631cd44345606641454e9353e071f2c5",
"text": "Microblogs are rich sources of information because they provide platforms for users to share their thoughts, news, information, activities, and so on. Twitter is one of the most popular microblogs. Twitter users often use hashtags to mark specific topics and to link them with related tweets. In this study, we investigate the relationship between the music listening behaviors of Twitter users and a popular music ranking service by comparing information extracted from tweets with music-related hashtags and the Billboard chart. We collect users' music listening behavior from Twitter using music-related hashtags (e.g., #nowplaying). We then build a predictive model to forecast the Billboard rankings and hit music. The results show that the numbers of daily tweets about a specific song and artist can be effectively used to predict Billboard rankings and hits. This research suggests that users' music listening behavior on Twitter is highly correlated with general music trends and could play an important role in understanding consumers' music consumption patterns. In addition, we believe that Twitter users' music listening behavior can be applied in the field of Music Information Retrieval (MIR).",
"title": ""
},
{
"docid": "d3214d24911a5e42855fd1a53516d30b",
"text": "This paper extends the face detection framework proposed by Viola and Jones 2001 to handle profile views and rotated faces. As in the work of Rowley et al 1998. and Schneiderman et al. 2000, we build different detectors for different views of the face. A decision tree is then trained to determine the viewpoint class (such as right profile or rotated 60 degrees) for a given window of the image being examined. This is similar to the approach of Rowley et al. 1998. The appropriate detector for that viewpoint can then be run instead of running all detectors on all windows. This technique yields good results and maintains the speed advantage of the Viola-Jones detector. Shown as a demo at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 18, 2003 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2003 201 Broadway, Cambridge, Massachusetts 02139 Publication History:– 1. First printing, TR2003-96, July 2003 Fast Multi-view Face Detection Michael J. Jones Paul Viola mjones@merl.com viola@microsoft.com Mitsubishi Electric Research Laboratory Microsoft Research 201 Broadway One Microsoft Way Cambridge, MA 02139 Redmond, WA 98052",
"title": ""
},
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "1ec1fc8aabb8f7880bfa970ccbc45913",
"text": "Several isolates of Gram-positive, acidophilic, moderately thermophilic, ferrous-iron- and mineral-sulphide-oxidizing bacteria were examined to establish unequivocally the characteristics of Sulfobacillus-like bacteria. Two species were evident: Sulfobacillus thermosulfidooxidans with 48-50 mol% G+C and Sulfobacillus acidophilus sp. nov. with 55-57 mol% G+C. Both species grew autotrophically and mixotrophically on ferrous iron, on elemental sulphur in the presence of yeast extract, and heterotrophically on yeast extract. Autotrophic growth on sulphur was consistently obtained only with S. acidophilus.",
"title": ""
},
{
"docid": "a8aa8c24c794bc6187257d264e2586a0",
"text": "Bayesian optimization is a powerful framework for minimizing expensive objective functions while using very few function evaluations. It has been successfully applied to a variety of problems, including hyperparameter tuning and experimental design. However, this framework has not been extended to the inequality-constrained optimization setting, particularly the setting in which evaluating feasibility is just as expensive as evaluating the objective. Here we present constrained Bayesian optimization, which places a prior distribution on both the objective and the constraint functions. We evaluate our method on simulated and real data, demonstrating that constrained Bayesian optimization can quickly find optimal and feasible points, even when small feasible regions cause standard methods to fail.",
"title": ""
},
{
"docid": "dbcef163643232313207cd45402158de",
"text": "Every industry has significant data output as a product of their working process, and with the recent advent of big data mining and integrated data warehousing it is the case for a robust methodology for assessing the quality for sustainable and consistent processing. In this paper a review is conducted on Data Quality (DQ) in multiple domains in order to propose connections between their methodologies. This critical review suggests that within the process of DQ assessment of heterogeneous data sets, not often are they treated as separate types of data in need of an alternate data quality assessment framework. We discuss the need for such a directed DQ framework and the opportunities that are foreseen in this research area and propose to address it through degrees of heterogeneity.",
"title": ""
},
{
"docid": "03aec14861b2b1b4e6f091dc77913a5b",
"text": "Taxonomy is indispensable in understanding natural language. A variety of large scale, usage-based, data-driven lexical taxonomies have been constructed in recent years. Hypernym-hyponym relationship, which is considered as the backbone of lexical taxonomies can not only be used to categorize the data but also enables generalization. In particular, we focus on one of the most prominent properties of the hypernym-hyponym relationship, namely, transitivity, which has a significant implication for many applications. We show that, unlike human crafted ontologies and taxonomies, transitivity does not always hold in data-driven lexical taxonomies. We introduce a supervised approach to detect whether transitivity holds for any given pair of hypernym-hyponym relationships. Besides solving the inferencing problem, we also use the transitivity to derive new hypernym-hyponym relationships for data-driven lexical taxonomies. We conduct extensive experiments to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "0bbfd07d0686fc563f156d75d3672c7b",
"text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.",
"title": ""
},
{
"docid": "719458301e92f1c5141971ea8a21342b",
"text": "In the 65 years since its formal specification, information theory has become an established statistical paradigm, providing powerful tools for quantifying probabilistic relationships. Behavior analysis has begun to adopt these tools as a novel means of measuring the interrelations between behavior, stimuli, and contingent outcomes. This approach holds great promise for making more precise determinations about the causes of behavior and the forms in which conditioning may be encoded by organisms. In addition to providing an introduction to the basics of information theory, we review some of the ways that information theory has informed the studies of Pavlovian conditioning, operant conditioning, and behavioral neuroscience. In addition to enriching each of these empirical domains, information theory has the potential to act as a common statistical framework by which results from different domains may be integrated, compared, and ultimately unified.",
"title": ""
},
{
"docid": "72d0731d0fc4f32b116afa207c9aefdd",
"text": "Internet of Things (IoT) is based on a wireless network that connects a huge number of smart objects, products, smart devices, and people. It has another name which is Web of Things (WoT). IoT uses standards and protocols that are proposed by different standardization organizations in message passing within session layer. Most of the IoT applications protocols use TCP or UDP for transport. XMPP, CoAP, DDS, MQTT, and AMQP are grouped of the widely used application protocols. Each one of these protocols have specific functions and are used in specific way to handle some issues. This paper provides an overview for one of the most popular application layer protocols that is MQTT, including its architecture, message format, MQTT scope, and Quality of Service (QoS) for the MQTT levels. MQTT works mainly as a pipe for binary data and provides a flexibility in communication patterns. It is designed to provide a publish-subscribe messaging protocol with most possible minimal bandwidth requirements. MQTT uses Transmission Control Protocol (TCP) for transport. MQTT is an open standard, giving a mechanisms to asynchronous communication, have a range of implementations, and it is working on IP.",
"title": ""
},
{
"docid": "42d3f666325c3c9e2d61fcbad3c6659a",
"text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.",
"title": ""
},
{
"docid": "8a73a42bed30751cbb6798398b81571d",
"text": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject.",
"title": ""
}
] |
scidocsrr
|
ff57c158d0058d8f5b16f4049ec0210d
|
Supply Chain Contracting Under Competition: Bilateral Bargaining vs. Stackelberg
|
[
{
"docid": "6559d77de48d153153ce77b0e2969793",
"text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.",
"title": ""
}
] |
[
{
"docid": "d0c5d24a5f68eb5448b45feeca098b87",
"text": "Age estimation has wide applications in video surveillance, social networking, and human-computer interaction. Many of the published approaches simply treat age estimation as an exact age regression problem, and thus do not leverage a distribution's robustness in representing labels with ambiguity such as ages. In this paper, we propose a new loss function, called mean-variance loss, for robust age estimation via distribution learning. Specifically, the mean-variance loss consists of a mean loss, which penalizes difference between the mean of the estimated age distribution and the ground-truth age, and a variance loss, which penalizes the variance of the estimated age distribution to ensure a concentrated distribution. The proposed mean-variance loss and softmax loss are jointly embedded into Convolutional Neural Networks (CNNs) for age estimation. Experimental results on the FG-NET, MORPH Album II, CLAP2016, and AADB databases show that the proposed approach outperforms the state-of-the-art age estimation methods by a large margin, and generalizes well to image aesthetics assessment.",
"title": ""
},
{
"docid": "211b858db72c962efaedf66f2ed9479d",
"text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.",
"title": ""
},
{
"docid": "0241cef84d46b942ee32fc7345874b90",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "f3f4cb6e7e33f54fca58c14ce82d6b46",
"text": "In this letter, a novel slot array antenna with a substrate-integrated coaxial line (SICL) technique is proposed. The proposed antenna has radiation slots etched homolaterally along the mean line in the top metallic layer of SICL and achieves a compact transverse dimension. A prototype with 5 <inline-formula><tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> 10 longitudinal slots is designed and fabricated with a multilayer liquid crystal polymer (LCP) process. A maximum gain of 15.0 dBi is measured at 35.25 GHz with sidelobe levels of <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 28.2 dB (<italic>E</italic>-plane) and <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 33.1 dB (<italic>H</italic>-plane). The close correspondence between experimental results and designed predictions on radiation patterns has validated the proposed excogitation in the end.",
"title": ""
},
{
"docid": "dea6ad0e1985260dbe7b70cef1c5da54",
"text": "The commonest mitochondrial diseases are probably those impairing the function of complex I of the respiratory electron transport chain. Such complex I impairment may contribute to various neurodegenerative disorders e.g. Parkinson's disease. In the following, using hepatocytes as a model cell, we have shown for the first time that the cytotoxicity caused by complex I inhibition by rotenone but not that caused by complex III inhibition by antimycin can be prevented by coenzyme Q (CoQ1) or menadione. Furthermore, complex I inhibitor cytotoxicity was associated with the collapse of the mitochondrial membrane potential and reactive oxygen species (ROS) formation. ROS scavengers or inhibitors of the mitochondrial permeability transition prevented cytotoxicity. The CoQ1 cytoprotective mechanism required CoQ1 reduction by DT-diaphorase (NQO1). Furthermore, the mitochondrial membrane potential and ATP levels were restored at low CoQ1 concentrations (5 microM). This suggests that the CoQ1H2 formed by NQO1 reduced complex III and acted as an electron bypass of the rotenone block. However cytoprotection still occurred at higher CoQ1 concentrations (>10 microM), which were less effective at restoring ATP levels but readily restored the cellular cytosolic redox potential (i.e. lactate: pyruvate ratio) and prevented ROS formation. This suggests that CoQ1 or menadione cytoprotection also involves the NQO1 catalysed reoxidation of NADH that accumulates as a result of complex I inhibition. The CoQ1H2 formed would then also act as a ROS scavenger.",
"title": ""
},
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "793d41551a918a113f52481ff3df087e",
"text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.",
"title": ""
},
{
"docid": "ba75caedb1c9e65f14c2764157682bdf",
"text": "Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image crop, is limited since it might introduce much uncontrolled background noise. In this paper, we propose WeaklySupervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object’s discriminative parts by weakly supervised Learning. Next, we randomly choose one attention map to augment this image, including attention crop and attention drop. Weakly-supervised data augmentation network improves the classification accuracy in two folds. On the one hand, images can be seen better since multiple object parts can be activated. On the other hand, attention regions provide spatial information of objects, which can make images be looked closer to further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our method surpasses the state-of-the-art methods by a large margin, which demonstrated the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5ca75490c015685a1fc670b2ee5103ff",
"text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.",
"title": ""
},
{
"docid": "c3b6d46a9e1490c720056682328586d5",
"text": "BACKGROUND\nBirth preparedness and complication preparedness (BPACR) is a key component of globally accepted safe motherhood programs, which helps ensure women to reach professional delivery care when labor begins and to reduce delays that occur when mothers in labor experience obstetric complications.\n\n\nOBJECTIVE\nThis study was conducted to assess practice and factors associated with BPACR among pregnant women in Aleta Wondo district in Sidama Zone, South Ethiopia.\n\n\nMETHODS\nA community based cross sectional study was conducted in 2007, on a sample of 812 pregnant women. Data were collected using pre-tested and structured questionnaire. The collected data were analyzed by SPSS for windows version 12.0.1. The women were asked whether they followed the desired five steps while pregnant: identified a trained birth attendant, identified a health facility, arranged for transport, identified blood donor and saved money for emergency. Taking at least two steps was considered being well-prepared.\n\n\nRESULTS\nAmong 743 pregnant women only a quarter (20.5%) of pregnant women identified skilled provider. Only 8.1% identified health facility for delivery and/or for obstetric emergencies. Preparedness for transportation was found to be very low (7.7%). Considerable (34.5%) number of families saved money for incurred costs of delivery and emergency if needed. Only few (2.3%) identified potential blood donor in case of emergency. Majority (87.9%) of the respondents reported that they intended to deliver at home, and only 60(8%) planned to deliver at health facilities. Overall only 17% of pregnant women were well prepared. The adjusted multivariate model showed that significant predictors for being well-prepared were maternal availing of antenatal services (OR = 1.91 95% CI; 1.21-3.01) and being pregnant for the first time (OR = 6.82, 95% CI; 1.27-36.55).\n\n\nCONCLUSION\nBPACR practice in the study area was found to be low. Effort to increase BPACR should focus on availing antenatal care services.",
"title": ""
},
{
"docid": "d8b2294b650274fc0269545296504432",
"text": "The multidisciplinary nature of information privacy research poses great challenges, since many concepts of information privacy have only been considered and developed through the lens of a particular discipline. It was our goal to conduct a multidisciplinary literature review. Following the three-stage approach proposed by Webster and Watson (2002), our methodology for identifying information privacy publications proceeded in three stages.",
"title": ""
},
{
"docid": "52ebf28afd8ae56816fb81c19e8890b6",
"text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. ∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. 
(2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?",
"title": ""
},
{
"docid": "40479536efec6311cd735f2bd34605d7",
"text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.",
"title": ""
},
{
"docid": "68d8834770c34450adc96ed96299ae48",
"text": "This thesis presents a current-mode CMOS image sensor using lateral bipolar phototransistors (LPTs). The objective of this design is to improve the photosensitivity of the image sensor, and to provide photocurrent amplification at the circuit level. Lateral bipolar phototransistors can be implemented using a standard CMOS technology with no process modification. Under illumination, photogenerated carriers contribute to the base current, and the output emitter current is amplified through the transistor action of the bipolar device. Our analysis and simulation results suggest that the LPT output characteristics are strongly dependent on process parameters including base and emitter doping concentrations, as well as the device geometry such as the base width. For high current gain, a minimized base width is desired. The 2D effect of current crowding has also been discussed. Photocurrent can be further increased using amplifying current mirrors in the pixel and column structures. A prototype image sensor has been designed and fabricated in a standard 0.18μm CMOS technology. This design includes a photodiode image array and a LPT image array, each 70× 48 in dimension. For both arrays, amplifying current mirrors are included in the pixel readout structure and at the column level. Test results show improvements in both photosensitivity and conversion efficiency. The LPT also exhibits a better spectral response in the red region of the spectrum, because of the nwell/p-substrate depletion region. On the other hand, dark current, fixed pattern noise (FPN), and power consumption also increase due to current amplification. This thesis has demonstrated that the use of lateral bipolar phototransistors and amplifying current mirrors can help to overcome low photosensitivity and other deterioration imposed by technology scaling. The current-mode readout scheme with LPT-based photodetectors can be used as a front end to additional image processing circuits.",
"title": ""
},
{
"docid": "335220bbad7798a19403d393bcbbf7fb",
"text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.",
"title": ""
},
{
"docid": "d8eab1f244bd5f9e05eb706bb814d299",
"text": "Private participation in road projects is increasing around the world. The most popular franchising mechanism is a concession contract, which allows a private firm to charge tolls to road users during a pre-determined period in order to recover its investments. Concessionaires are usually selected through auctions at which candidates submit bids for tolls, payments to the government, or minimum term to hold the contract. This paper discusses, in the context of road franchising, how this mechanism does not generally yield optimal outcomes and it induces the frequent contract renegotiations observed in road projects. A new franchising mechanism is proposed, based on flexible-term contracts and auctions with bids for total net revenue and maintenance costs. This new mechanism improves outcomes compared to fixed-term concessions, by eliminating traffic risk and promoting the selection of efficient concessionaires.",
"title": ""
},
{
"docid": "155de33977b33d2f785fd86af0aa334f",
"text": "Model-based analysis tools, built on assumptions and simplifications, are difficult to handle smart grids with data characterized by volume, velocity, variety, and veracity (i.e., 4Vs data). This paper, using random matrix theory (RMT), motivates data-driven tools to perceive the complex grids in high-dimension; meanwhile, an architecture with detailed procedures is proposed. In algorithm perspective, the architecture performs a high-dimensional analysis and compares the findings with RMT predictions to conduct anomaly detections. Mean spectral radius (MSR), as a statistical indicator, is defined to reflect the correlations of system data in different dimensions. In management mode perspective, a group-work mode is discussed for smart grids operation. This mode breaks through regional limitations for energy flows and data flows, and makes advanced big data analyses possible. For a specific large-scale zone-dividing system with multiple connected utilities, each site, operating under the group-work mode, is able to work out the regional MSR only with its own measured/simulated data. The large-scale interconnected system, in this way, is naturally decoupled from statistical parameters perspective, rather than from engineering models perspective. Furthermore, a comparative analysis of these distributed MSRs, even with imperceptible different raw data, will produce a contour line to detect the event and locate the source. It demonstrates that the architecture is compatible with the block calculation only using the regional small database; beyond that, this architecture, as a data-driven solution, is sensitive to system situation awareness, and practical for real large-scale interconnected systems. Five case studies and their visualizations validate the designed architecture in various fields of power systems. To our best knowledge, this paper is the first attempt to apply big data technology into smart grids.",
"title": ""
},
{
"docid": "e75f830b902ca7d0e8d9e9fa03a62440",
"text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.",
"title": ""
},
{
"docid": "f96098449988c433fe8af20be0c468a5",
"text": "Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.",
"title": ""
},
{
"docid": "546296aecaee9963ee7495c9fbf76fd4",
"text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.",
"title": ""
}
] |
scidocsrr
|
04836cd980c5022b30d361d29baf4097
|
A wearable system that knows who wears it
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "b7aca26bc09bbc9376fefd1befec2b28",
"text": "Wearable sensor systems have been used in the ubiquitous computing community and elsewhere for applications such as activity and gesture recognition, health and wellness monitoring, and elder care. Although the power consumption of accelerometers has already been highly optimized, this work introduces a novel sensing approach which lowers the power requirement for motion sensing by orders of magnitude. We present an ultra-low-power method for passively sensing body motion using static electric fields by measuring the voltage at any single location on the body. We present the feasibility of using this sensing approach to infer the amount and type of body motion anywhere on the body and demonstrate an ultra-low-power motion detector used to wake up more power-hungry sensors. The sensing hardware consumes only 3.3 μW, and wake-up detection is done using an additional 3.3 μW (6.6 μW total).",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] |
[
{
"docid": "19a47559acfc6ee0ebb0c8e224090e28",
"text": "Learning from streams of evolving and unbounded data is an important problem, for example in visual surveillance or internet scale data. For such large and evolving real-world data, exhaustive supervision is impractical, particularly so when the full space of classes is not known in advance therefore joint class discovery (exploration) and boundary learning (exploitation) becomes critical. Active learning has shown promise in jointly optimising exploration-exploitation with minimal human supervision. However, existing active learning methods either rely on heuristic multi-criteria weighting or are limited to batch processing. In this paper, we present a new unified framework for joint exploration-exploitation active learning in streams without any heuristic weighting. Extensive evaluation on classification of various image and surveillance video datasets demonstrates the superiority of our framework over existing methods.",
"title": ""
},
{
"docid": "8d2d3b326c246bde95b360c9dcf6540f",
"text": "A field experiment was carried out at the Shenyang Experimental Station of Ecology (CAS) in order to study the effects of slow-release urea fertilizers high polymer-coated urea (SRU1), SRU1 mixed with dicyandiamide DCD (SRU2), and SRU1 mixed with calcium carbide CaC2 (SRU3) on urease activity, microbial biomass C and N, and nematode communities in an aquic brown soil during the maize growth period. The results demonstrated that the application of slow-release urea fertilizers inhibits soil urease activity and increases the soil NH4 +-N content. Soil available N increment could promote its immobilization by microorganisms. Determination of soil microbial biomass N indicated that a combined application of coated urea and nitrification inhibitors increased the soil active N pool. The population of predators/omnivores indicated that treatment with SRU2 could provide enough soil NH4 +-N to promote maize growth and increased the food resource for the soil fauna compared with the other treatments.",
"title": ""
},
{
"docid": "d337f149d3e52079c56731f4f3d8ea3e",
"text": "Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.",
"title": ""
},
{
"docid": "26befbb36d5d64ff0c075b38cde32d6f",
"text": "This study deals with the problems related to the translation of political texts in the theoretical framework elaborated by the researchers working in the field of translation studies and reflects on the terminological peculiarities of the special language used for this text type . Consideration of the theoretical framework is followed by the analysis of a specific text spoken then written in English and translated into Hungarian and Romanian. The conclusions are intended to highlight the fact that there are no recipes for translating a political speech, because translation is not only a technical process that uses translation procedures and applies transfer operations, but also a matter of understanding cultural, historical and political situations and their significance.",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "ddb77ec8a722c50c28059d03919fb299",
"text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.",
"title": ""
},
{
"docid": "cfadfcbc3929b5552119a4f8cb211b33",
"text": "The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is—as we discuss in this paper— well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "9a2b499cf1ed10403a55f2557c00dedf",
"text": "Polar codes are a recently discovered family of capacity-achieving codes that are seen as a major breakthrough in coding theory. Motivated by the recent rapid progress in the theory of polar codes, we propose a semi-parallel architecture for the implementation of successive cancellation decoding. We take advantage of the recursive structure of polar codes to make efficient use of processing resources. The derived architecture has a very low processing complexity while the memory complexity remains similar to that of previous architectures. This drastic reduction in processing complexity allows very large polar code decoders to be implemented in hardware. An N=217 polar code successive cancellation decoder is implemented in an FPGA. We also report synthesis results for ASIC.",
"title": ""
},
{
"docid": "9def5ba1b4b262b8eb71123023c00e36",
"text": "OBJECTIVE\nThe primary objective of this study was to compare clinically and radiographically the efficacy of autologous platelet rich fibrin (PRF) and autogenous bone graft (ABG) obtained using bone scrapper in the treatment of intrabony periodontal defects.\n\n\nMATERIALS AND METHODS\nThirty-eight intrabony defects (IBDs) were treated with either open flap debridement (OFD) with PRF or OFD with ABG. Clinical parameters were recorded at baseline and 6 months postoperatively. The defect-fill and defect resolution at baseline and 6 months were calculated radiographically (intraoral periapical radiographs [IOPA] and orthopantomogram [OPG]).\n\n\nRESULTS\nSignificant probing pocket depth (PPD) reduction, clinical attachment level (CAL) gain, defect fill and defect resolution at both PRF and ABG treated sites with OFD was observed. However, inter-group comparison was non-significant (P > 0.05). The bivariate correlation results revealed that any of the two radiographic techniques (IOPA and OPG) can be used for analysis of the regenerative therapy in IBDs.\n\n\nCONCLUSION\nThe use of either PRF or ABG were effective in the treatment of three wall IBDs with an uneventful healing of the sites.",
"title": ""
},
{
"docid": "b4910e355c44077eb27c62a0c8237204",
"text": "Our proof is built on Perron-Frobenius theorem, a seminal work in matrix theory (Meyer 2000). By Perron-Frobenius theorem, the power iteration algorithm for predicting top K persuaders converges to a unique C and this convergence is independent of the initialization of C if the persuasion probability matrix P is nonnegative, irreducible, and aperiodic (Heath 2002). We first show that P is nonnegative. Each component of the right hand side of Equation (10) is positive except nD $ 0; thus, persuasion probability pij estimated with Equation (10) is positive, for all i, j = 1, 2, ..., n and i ... j. Because all diagonal elements of P are equal to zero and all non-diagonal elements of P are positive persuasion probabilities, P is nonnegative.",
"title": ""
},
{
"docid": "d972e23eb49c15488d2159a9137efb07",
"text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated to be used as a basic module of a modular three-stage SST. Besides the feature of high power density and soft-switching operation (also found in others converters), the QAB converter provides a solution with reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching for the entire operation range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimented.",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "4a0756bffc50e11a0bcc2ab88502e1a2",
"text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.",
"title": ""
},
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
},
{
"docid": "1dc07b02a70821fdbaa9911755d1e4b0",
"text": "The AROMA project is exploring the kind of awareness that people effortless are able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or -more likely -an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sutTiciently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.",
"title": ""
},
{
"docid": "ae0ef7702fca274bd4ee8a2a30479275",
"text": "This paper describes the drawbacks related to the iron in the classical electrodynamic loudspeaker structure. Then it describes loudspeaker motors without any iron, which are only made of permanent magnets. They are associated to a piston like moving part which glides on ferrofluid seals. Furthermore, the coil is short and the suspension is wholly pneumatic. Several types of magnet assemblies are described and discussed. Indeed, their properties regarding the force factor and the ferrofluid seal shape depend on their structure. Eventually, the capacity of the seals is evaluated.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "6fd9793e9f44b726028f8c879157f1f7",
"text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.",
"title": ""
},
{
"docid": "cb19facb61dae863c566f5fafd9f8b20",
"text": "This paper describes our solution for the 2 YouTube-8M video understanding challenge organized by Google AI. Unlike the video recognition benchmarks, such as Kinetics and Moments, the YouTube8M challenge provides pre-extracted visual and audio features instead of raw videos. In this challenge, the submitted model is restricted to 1GB, which encourages participants focus on constructing one powerful single model rather than incorporating of the results from a bunch of models. Our system fuses six different sub-models into one single computational graph, which are categorized into three families. More specifically, the most effective family is the model with non-local operations following the NetVLAD encoding. The other two family models are Soft-BoF and GRU, respectively. In order to further boost single models performance, the model parameters of different checkpoints are averaged. Experimental results demonstrate that our proposed system can effectively perform the video classification task, achieving 0.88763 on the public test set and 0.88704 on the private set in terms of GAP@20, respectively. We finally ranked at the fourth place in the YouTube-8M video understanding challenge.",
"title": ""
}
] |
scidocsrr
|
7e7272379f6c262e43cf408524551964
|
Steady-State Mean-Square Error Analysis for Adaptive Filtering under the Maximum Correntropy Criterion
|
[
{
"docid": "7a7e0363ca4ad5c83a571449f53834ca",
"text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.",
"title": ""
}
] |
[
{
"docid": "a14ac26274448e0a7ecafdecae4830f9",
"text": "Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.",
"title": ""
},
{
"docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77",
"text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.",
"title": ""
},
{
"docid": "315af705427ee4363fe4614dc72eb7a7",
"text": "The 2007 Nobel Prize in Physics can be understood as a global recognition to the rapid development of the Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Behind the utilization of GMR structures as read heads for massive storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors. This way, they have been successfully applied in a lot different environments. In this work, we are trying to collect the Spanish contributions to the progress of the research related to the GMR based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications.",
"title": ""
},
{
"docid": "5006770c9f7a6fb171a060ad3d444095",
"text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.",
"title": ""
},
{
"docid": "7d84e574d2a6349a9fc2669fdbe08bba",
"text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.",
"title": ""
},
{
"docid": "6838cf1310f0321cd524bb1120f35057",
"text": "One of the most compelling visions of future robots is that of the robot butler. An entity dedicated to fulfilling your every need. This obviously has its benefits, but there could be a flipside to this vision. To fulfill the needs of its users, it must first be aware of them, and so it could potentially amass a huge amount of personal data regarding its user, data which may or may not be safe from accidental or intentional disclosure to a third party. How may prospective owners of a personal robot feel about the data that might be collected about them? In order to investigate this issue experimentally, we conducted an exploratory study where 12 participants were exposed to an HRI scenario in which disclosure of personal information became an issue. Despite the small sample size interesting results emerged from this study, indicating how future owners of personal robots feel regarding what the robot will know about them, and what safeguards they believe should be in place to protect owners from unwanted disclosure of private information.",
"title": ""
},
{
"docid": "8f978ac84eea44a593e9f18a4314342c",
"text": "There is clear evidence that interpersonal social support impacts stress levels and, in turn, degree of physical illness and psychological well-being. This study examines whether mediated social networks serve the same palliative function. A survey of 401 undergraduate Facebook users revealed that, as predicted, number of Facebook friends associated with stronger perceptions of social support, which in turn associated with reduced stress, and in turn less physical illness and greater well-being. This effect was minimized when interpersonal network size was taken into consideration. However, for those who have experienced many objective life stressors, the number of Facebook friends emerged as the stronger predictor of perceived social support. The \"more-friends-the-better\" heuristic is proposed as the most likely explanation for these findings.",
"title": ""
},
{
"docid": "4dc302fc2001dda1d24d830bb43f9cfa",
"text": "Discussions of qualitative research interviews have centered on promoting an ideal interactional style and articulating the researcher behaviors by which this might be realized. Although examining what researchers do in an interview continues to be valuable, this focus obscures the reflexive engagement of all participants in the exchange and the potential for a variety of possible styles of interacting. The author presents her analyses of participants’ accounts of past research interviews and explores the implications of this for researchers’ orientation to qualitative research inter-",
"title": ""
},
{
"docid": "2031114bd1dc1a3ca94bdd8a13ad3a86",
"text": "Crude extracts of curcuminoids and essential oil of Curcuma longa varieties Kasur, Faisalabad and Bannu were studied for their antibacterial activity against 4 bacterial strains viz., Bacillus subtilis, Bacillus macerans, Bacillus licheniformis and Azotobacter using agar well diffusion method. Solvents used to determine antibacterial activity were ethanol and methanol. Ethanol was used for the extraction of curcuminoids. Essential oil was extracted by hydrodistillation and diluted in methanol by serial dilution method. Both Curcuminoids and oil showed zone of inhibition against all tested strains of bacteria. Among all the three turmeric varieties, Kasur variety had the most inhibitory effect on the growth of all bacterial strains tested as compared to Faisalabad and Bannu varieties. Among all the bacterial strains B. subtilis was the most sensitive to turmeric extracts of curcuminoids and oil. The MIC value for different strains and varieties ranged from 3.0 to 20.6 mm in diameter.",
"title": ""
},
{
"docid": "1b802879e554140e677020e379b866c1",
"text": "This study investigated vertical versus shared leadership as predictors of the effectiveness of 71 change management teams. Vertical leadership stems from an appointed or formal leader of a team, whereas shared leadership (C. L. Pearce, 1997; C. L. Pearce & J. A. Conger, in press; C. L. Pearce & H. P. Sims, 2000) is a group process in which leadership is distributed among, and stems from, team members. Team effectiveness was measured approximately 6 months after the assessment of leadership and was also measured from the viewpoints of managers, internal customers, and team members. Using multiple regression, the authors found both vertical and shared leadership to be significantly related to team effectiveness ( p .05), although shared leadership appears to be a more useful predictor of team effectiveness than vertical leadership.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
{
"docid": "4463a242a313f82527c4bdfff3d3c13c",
"text": "This paper examines the impact of capital structure on financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven year period, 2004 – 2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capita structure surrogated by Debt Ratio, Dr has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). The study of these findings, indicate consistency with prior empirical studies and provide evidence in support of Agency cost theory.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "9bb88b82789d43e48b1e8a10701d39bd",
"text": "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many artificial intelligence–related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. In this article, we review several popular deep learning models, including deep belief networks and deep Boltzmann machines. We show that (a) these deep generative models, which contain many layers of latent variables and millions of parameters, can be learned efficiently, and (b) the learned high-level feature representations can be successfully applied in many application domains, including visual object recognition, information retrieval, classification, and regression tasks.",
"title": ""
},
{
"docid": "584e84ac1a061f1bf7945ab4cf54d950",
"text": "Paul White, PhD, MD§ Acupuncture has been used in China and other Asian countries for the past 3000 yr. Recently, this technique has been gaining increased popularity among physicians and patients in the United States. Even though acupuncture-induced analgesia is being used in many pain management programs in the United States, the mechanism of action remains unclear. Studies suggest that acupuncture and related techniques trigger a sequence of events that include the release of neurotransmitters, endogenous opioid-like substances, and activation of c-fos within the central nervous system. Recent developments in central nervous system imaging techniques allow scientists to better evaluate the chain of events that occur after acupuncture-induced stimulation. In this review article we examine current biophysiological and imaging studies that explore the mechanisms of acupuncture analgesia.",
"title": ""
},
{
"docid": "fce8f5ee730e2bbb63f4d1ef003ce830",
"text": "In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. Using a linear decision rule, we also propose a tractable approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. An attractive feature of the framework is that we convert the original model into a second-order cone program, which is computationally tractable both in theory and in practice. We demonstrate the framework through an application of a project management problem with uncertain activity completion time.",
"title": ""
},
{
"docid": "3573fb077b151af3c83f7cd6a421dc9c",
"text": "Let G = (V, E) be a directed graph with a distinguished source vertex s. The single-source path expression problem is to find, for each vertex v, a regular expression P(s, v) which represents the set of all paths in G from s to v A solution to this problem can be used to solve shortest path problems, solve sparse systems of linear equations, and carry out global flow analysis. A method is described for computing path expressions by dwidmg G mto components, computing path expressions on the components by Gaussian elimination, and combining the solutions This method requires O(ma(m, n)) time on a reducible flow graph, where n Is the number of vertices m G, m is the number of edges in G, and a is a functional inverse of Ackermann's function The method makes use of an algonthm for evaluating functions defined on paths in trees. A smapllfied version of the algorithm, which runs in O(m log n) time on reducible flow graphs, is quite easy to implement and efficient m practice",
"title": ""
},
{
"docid": "b8e921733ef4ab77abcb48b0a1f04dbb",
"text": "This paper demonstrates the efficiency of kinematic redundancy used to increase the useable workspace of planar parallel mechanisms. As examples, we propose kinematically redundant schemes of the well known planar 3RRR and 3RPR mechanisms denoted as 3(P)RRR and 3(P)RPR. In both cases, a prismatic actuator is added allowing a usually fixed base joint to move linearly. Hence, reconfigurations can be performed selectively in order to avoid singularities and to affect the mechanisms' performance directly. Using an interval-based method the useable workspace, i.e. the singularity-free workspace guaranteeing a desired performance, is obtained. Due to the interval analysis any uncertainties can be implemented within the algorithm leading to practical and realistic results. It is shown that due to the additional prismatic actuator the useable workspace increases significantly. Several analysis examples clarify the efficiency of the proposed kinematically redundant mechanisms.",
"title": ""
},
{
"docid": "ba10de4e7613307d08b46cf001cbeb3b",
"text": "This paper builds on a general typology of textual communication (Aarseth 1997) and tries to establish a model for classifying the genre of “games in virtual environments”— that is, games that take place in some kind of simulated world, as opposed to purely abstract games like poker or blackjack. The aim of the model is to identify the main differences between games in a rigorous, analytical way, in order to come up with genres that are more specific and less ad hoc than those used by the industry and the popular gaming press. The model consists of a number of basic “dimensions”, such as Space, Perspective, Time, Teleology, etc, each of which has several variate values, (e.g. Teleology: finite (Half-Life) or infinite (EverQuest. Ideally, the multivariate model can be used to predict games that do not yet exist, but could be invented by combining the existing elements in new ways.",
"title": ""
},
{
"docid": "8188bcd3b95952dbf2818cad6fc2c36c",
"text": "Semi-supervised learning is by no means an unfamiliar concept to natural language processing researchers. Labeled data has been used to improve unsupervised parameter estimation procedures such as the EM algorithm and its variants since the beginning of the statistical revolution in NLP (e.g., Pereira and Schabes (1992)). Unlabeled data has also been used to improve supervised learning procedures, the most notable examples being the successful applications of self-training and co-training to word sense disambiguation (Yarowsky 1995) and named entity classification (Collins and Singer 1999). Despite its increasing importance, semi-supervised learning is not a topic that is typically discussed in introductory machine learning texts (e.g., Mitchell (1997), Alpaydin (2004)) or NLP texts (e.g., Manning and Schütze (1999), Jurafsky andMartin (2000)). Consequently, to learn about semi-supervised learning research, one has to consult the machine-learning literature. This can be a daunting task for NLP researchers who have little background in machine learning. Steven Abney’s book Semisupervised Learning for Computational Linguistics is targeted precisely at such researchers, aiming to provide them with a “broad and accessible presentation” of topics in semi-supervised learning. According to the preamble, the reader is assumed to have taken only an introductory course in NLP “that include statistical methods — concretely the material contained in Jurafsky andMartin (2000) andManning and Schütze (1999).”Nonetheless, I agreewith the author that any NLP researcher who has a solid background in machine learning is ready to “tackle the primary literature on semisupervised learning, and will probably not find this book particularly useful” (page 11). As the author promises, the book is self-contained and quite accessible to those who have little background in machine learning. In particular, of the 12 chapters in the book, three are devoted to preparatory material, including: a brief introduction to machine learning, basic unconstrained and constrained optimization techniques (e.g., gradient descent and the method of Lagrange multipliers), and relevant linear-algebra concepts (e.g., eigenvalues, eigenvectors, matrix and vector norms, diagonalization). The remaining chapters focus roughly on six types of semi-supervised learning methods:",
"title": ""
}
] |
scidocsrr
|
80e9309b3e9bb8f29e81d26f3cb8606b
|
The Incredible ELK
|
[
{
"docid": "87af3cf22afaf5903a521e653f693e6c",
"text": "Finding the justifications of an entailment (that is, all the minimal set of axioms sufficient to produce an entailment) has emerged as a key inference service for the Web Ontology Language (OWL). Justifications are essential for debugging unsatisfiable classes and contradictions. The availability of justifications as explanations of entailments improves the understandability of large and complex ontologies. In this paper, we present several algorithms for computing all the justifications of an entailment in an OWL-DL Ontology and show, by an empirical evaluation, that even a reasoner independent approach works well on real ontologies.",
"title": ""
},
{
"docid": "9814af3a2c855717806ad7496d21f40e",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
}
] |
[
{
"docid": "69a11f89a92051631e1c07f2af475843",
"text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.",
"title": ""
},
{
"docid": "115fb4dcd7d5a1240691e430cd107dce",
"text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "b7dcd24f098965ff757b7ce5f183662b",
"text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.",
"title": ""
},
{
"docid": "4cbf8dc762813225048edc555a28a0c4",
"text": "The Semantic Web and Linked Data gained traction in the last years. However, the majority of information still is contained in unstructured documents. This can also not be expected to change, since text, images and videos are the natural way how humans interact with information. Semantic structuring on the other hand enables the (semi-)automatic integration, repurposing, rearrangement of information. NLP technologies and formalisms for the integrated representation of unstructured and semantic content (such as RDFa and Microdata) aim at bridging this semantic gap. However, in order for humans to truly benefit from this integration, we need ways to author, visualize and explore unstructured and semantically enriched content in an integrated manner. In this paper, we present the WYSIWYM (What You See is What You Mean) concept, which addresses this issue and formalizes the binding between semantic representation models and UI elements for authoring, visualizing and exploration. With RDFaCE and Pharmer we present and evaluate two complementary showcases implementing the WYSIWYM concept for different application domains.",
"title": ""
},
{
"docid": "d25a3d1a921d78c4e447c8e010647351",
"text": "In the TREC 2005 Spam Evaluation Track, a number of popular spam filters – all owing their heritage to Graham’s A Plan for Spam – did quite well. Machine learning techniques reported elsewhere to perform well were hardly represented in the participating filters, and not represented at all in the better results. A non-traditional technique Prediction by Partial Matching (PPM) – performed exceptionally well, at or near the top of every test. Are the TREC results an anomaly? Is PPM really the best method for spam filtering? How are these results to be reconciled with others showing that methods like Support Vector Machines (SVM) are superior? We address these issues by testing implementations of five different classification methods on the TREC public corpus using the online evaluation methodology introduced in TREC. These results are complemented with cross validation experiments, which facilitate a comparison of the methods considered in the study under different evaluation schemes, and also give insight into the nature and utility of the evaluation regimens themselves. For comparison with previously published results, we also conducted cross validation experiments on the Ling-Spam and PU1 datasets. These tests reveal substantial differences attributable to different test assumptions, in particular batch vs. on-line training and testing, the order of classification, and the method of tokenization. Notwithstanding these differences, the methods that perform well at TREC also perform well using established test methods and corpora. Two previously untested methods – one based on Dynamic Markov Compression and one using logistic regression – compare favorably with competing approaches.",
"title": ""
},
{
"docid": "1f52a93eff0c020564acc986b2fef0e7",
"text": "The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.",
"title": ""
},
{
"docid": "946517ff7728e321804b36c43e3a0da2",
"text": "We are creating an environment for investigating the role of advanced AI in interactive, story-based computer games. This environment is based on the Unreal Tournament (UT) game engine and the Soar AI engine. Unreal provides a 3D virtual environment, while Soar provides a flexible architecture for developing complex AI characters. This paper describes our progress to date, starting with our game, Haunt 2, which is designed so that complex AI characters will be critical to the success (or failure) of the game. It addresses design issues with constructing a plot for an interactive storytelling environment, creating synthetic characters for that environment, and using a story director agent to tell the story with those characters.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "dd0f335262aab9aa5adb0ad7d25b80bf",
"text": "We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user.",
"title": ""
},
{
"docid": "5546f93f4c10681edb0fdfe3bf52809c",
"text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.",
"title": ""
},
{
"docid": "598fd1fc1d1d6cba7a838c17efe9481b",
"text": "The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work by Hindle et al. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new “lens” for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information theoretic criteria.",
"title": ""
},
{
"docid": "f151c89fecb41e10c6b19ceb659eb163",
"text": "Most organizations have some kind of process-oriented information system that keeps track of business events. Process Mining starts from event logs extracted from these systems in order to discover, analyze, diagnose and improve processes, organizational, social and data structures. Notwithstanding the large number of contributions to the process mining literature over the last decade, the number of studies actually demonstrating the applicability and value of these techniques in practice has been limited. As a consequence, there is a need for real-life case studies suggesting methodologies to conduct process mining analysis and to show the benefits of its application in real-life environments. In this paper we present a methodological framework for a multi-faceted analysis of real-life event logs based on Process Mining. As such, we demonstrate the usefulness and flexibility of process mining techniques to expose organizational inefficiencies in a real-life case study that is centered on the back office process of a large Belgian insurance company. Our analysis shows that process mining techniques constitute an ideal means to tackle organizational challenges by suggesting process improvements and creating a companywide process awareness. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f81ea919846bce6bae4298d8780f9123",
"text": "AIMS AND OBJECTIVES\nTo evaluate the effectiveness of an accessibility-enhanced multimedia informational educational programme in reducing anxiety and increasing satisfaction with the information and materials received by patients undergoing cardiac catheterisation.\n\n\nBACKGROUND\nCardiac catheterisation is one of the most anxiety-provoking invasive procedures for patients. However, informational education using multimedia to inform patients undergoing cardiac catheterisation has not been extensively explored.\n\n\nDESIGN\nA randomised experimental design with three-cohort prospective comparisons.\n\n\nMETHODS\nIn total, 123 consecutive patients were randomly assigned to one of three groups: regular education; (group 1), accessibility-enhanced multimedia informational education (group 2) and instructional digital videodisc education (group 3). Anxiety was measured with Spielberger's State Anxiety Inventory, which was administered at four time intervals: before education (T0), immediately after education (T1), before cardiac catheterisation (T2) and one day after cardiac catheterisation (T3). A satisfaction questionnaire was administrated one day after cardiac catheterisation. Data were collected from May 2009-September 2010 and analysed using descriptive statistics, chi-squared tests, one-way analysis of variance, Scheffe's post hoc test and generalised estimating equations.\n\n\nRESULTS\nAll patients experienced moderate anxiety at T0 to low anxiety at T3. Accessibility-enhanced multimedia informational education patients had significantly lower anxiety levels and felt the most satisfied with the information and materials received compared with patients in groups 1 and 3. A statistically significant difference in anxiety levels was only found at T2 among the three groups (p = 0·004).\n\n\nCONCLUSIONS\nThe findings demonstrate that the accessibility-enhanced multimedia informational education was the most effective informational educational module for informing patients about their upcoming cardiac catheterisation, to reduce anxiety and improve satisfaction with the information and materials received compared with the regular education and instructional digital videodisc education.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nAs the accessibility-enhanced multimedia informational education reduced patient anxiety and improved satisfaction with the information and materials received, it can be adapted to complement patient education in future regular cardiac care.",
"title": ""
},
{
"docid": "ac657141ed547f870ad35d8c8b2ba8f5",
"text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping cowords in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.",
"title": ""
},
{
"docid": "cb00162e49af450c3e355088fe7817ac",
"text": "The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing the work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications, however, there is still much research to be done to properly gear all the systems for working together. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.",
"title": ""
},
{
"docid": "f066cb3e2fc5ee543e0cc76919b261eb",
"text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality. We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.",
"title": ""
},
{
"docid": "55ec669a67b88ff0b6b88f1fa6408df9",
"text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.",
"title": ""
},
{
"docid": "b06f1e94f0ba22828044030c3a1fe691",
"text": "BACKGROUND\nThe use of opioids for chronic non-cancer pain has increased in the United States since state laws were relaxed in the late 1990s. These policy changes occurred despite scanty scientific evidence that chronic use of opioids was safe and effective.\n\n\nMETHODS\nWe examined opiate prescriptions and dosing patterns (from computerized databases, 1996 to 2002), and accidental poisoning deaths attributable to opioid use (from death certificates, 1995 to 2002), in the Washington State workers' compensation system.\n\n\nRESULTS\nOpioid prescriptions increased only modestly between 1996 and 2002. However, prescriptions for the most potent opioids (Schedule II), as a percentage of all scheduled opioid prescriptions (II, III, and IV), increased from 19.3% in 1996 to 37.2% in 2002. Among long-acting opioids, the average daily morphine equivalent dose increased by 50%, to 132 mg/day. Thirty-two deaths were definitely or probably related to accidental overdose of opioids. The majority of deaths involved men (84%) and smokers (69%).\n\n\nCONCLUSIONS\nThe reasons for escalating doses of the most potent opioids are unknown, but it is possible that tolerance or opioid-induced abnormal pain sensitivity may be occurring in some workers who use opioids for chronic pain. Opioid-related deaths in this population may be preventable through use of prudent guidelines regarding opioid use for chronic pain.",
"title": ""
},
{
"docid": "3668a5a14ea32471bd34a55ff87b45b5",
"text": "This paper proposes a method to separate polyphonic music signal into signals of each musical instrument by NMF: Non-negative Matrix Factorization based on preservation of spectrum envelope. Sound source separation is taken as a fundamental issue in music signal processing and NMF is becoming common to solve it because of its versatility and compatibility with music signal processing. Our method bases on a common feature of harmonic signal: spectrum envelopes of musical signal in close pitches played by the harmonic music instrument would be similar. We estimate power spectrums of each instrument by NMF with restriction to synchronize spectrum envelope of bases which are allocated to all possible center frequencies of each instrument. This manipulation means separation of components which refers to tones of each instrument and realizes both of separation without pre-training and separation of signal including harmonic and non-harmonic sound. We had an experiment to decompose mixture sound signal of MIDI instruments into each instrument and evaluated the result by SNR of single MIDI instrument sound signals and separated signals. As a result, SNR of lead guitar and drums approximately marked 3.6 and 6.0 dB and showed significance of our method.",
"title": ""
},
{
"docid": "12d565f0aaa6960e793b96f1c26cb103",
"text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.",
"title": ""
}
] |
scidocsrr
|
de58318e961209968774fcda1d76bc73
|
Forecasting of ozone concentration in smart city using deep learning
|
[
{
"docid": "961348dd7afbc1802d179256606bdbb8",
"text": "Class imbalance is among the most persistent complications which may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other class. This situation is a handicap when trying to identify the minority class, as the learning algorithms are not usually adapted to such characteristics. The approaches to deal with the problem of imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions incorporating both the data and algorithm level approaches assume higher misclassification costs with samples in the minority class and seek to minimize high cost errors. Nevertheless, there is not a full exhaustive comparison between those models which can help us to determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data level proposals against algorithm level proposals focusing in cost-sensitive models and versus a hybrid procedure that combines those two approaches. We will show, by means of a statistical comparative analysis, that we cannot highlight an unique approach among the rest. This will lead to a discussion about the data intrinsic characteristics of the imbalanced classification problem which will help to follow new paths that can lead to the improvement of current models mainly focusing on class overlap and dataset shift in imbalanced classification. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "4e9b1776436950ed25353a8731eda76a",
"text": "This paper presents the design and implementation of VibeBin, a low-cost, non-intrusive and easy-to-install waste bin level detection system. Recent popularity of Internet-of-Things (IoT) sensors has brought us unprecedented opportunities to enable a variety of new services for monitoring and controlling smart buildings. Indoor waste management is crucial to a healthy environment in smart buildings. Measuring the waste bin fill-level helps building operators schedule garbage collection more responsively and optimize the quantity and location of waste bins. Existing systems focus on directly and intrusively measuring the physical quantities of the garbage (weight, height, volume, etc.) or its appearance (image), and therefore require careful installation, laborious calibration or labeling, and can be costly. Our system indirectly measures fill-level by sensing the changes in motor-induced vibration characteristics on the outside surface of waste bins. VibeBin exploits the physical nature of vibration resonance of the waste bin and the garbage within, and learns the vibration features of different fill-levels through a few garbage collection (emptying) cycles in a completely unsupervised manner. VibeBin identifies vibration features of different fill-levels by clustering historical vibration samples based on a custom distance metric which measures the dissimilarity between two samples. We deploy our system on eight waste bins of different types and sizes, and show that under normal usage and real waste, it can deliver accurate level measurements after just 3 garbage collection cycles. The average F-score (harmonic mean of precision and recall) of measuring empty, half, and full levels achieves 0.912. A two-week deployment also shows that the false positive and false negative events are satisfactorily rare.",
"title": ""
},
{
"docid": "91a56dbdefc08d28ff74883ec10a5d6e",
"text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.",
"title": ""
},
{
"docid": "1c94dec13517bedf7a8140e207e0a6d9",
"text": "Art and anatomy were particularly closely intertwined during the Renaissance period and numerous painters and sculptors expressed themselves in both fields. Among them was Michelangelo Buonarroti (1475-1564), who is renowned for having produced some of the most famous of all works of art, the frescoes on the ceiling and on the wall behind the altar of the Sistine Chapel in Rome. Recently, a unique association was discovered between one of Michelangelo's most celebrated works (The Creation of Adam fresco) and the Divine Proportion/Golden Ratio (GR) (1.6). The GR can be found not only in natural phenomena but also in a variety of human-made objects and works of art. Here, using Image-Pro Plus 6.0 software, we present mathematical evidence that Michelangelo also used the GR when he painted Saint Bartholomew in the fresco of The Last Judgment, which is on the wall behind the altar. This discovery will add a new dimension to understanding the great works of Michelangelo Buonarroti.",
"title": ""
},
{
"docid": "a1f93bedbddefb63cd7ab7d030b4f3ee",
"text": "This paper presents a novel fitness and preventive health care system with a flexible and easy to deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. The detection is achieved with minimal impact on the system’s resources through the use of customized 3D inertial sensors embedded in fitness accessories with built-in pre-processing of the initial 100Hz data. It provides a flexible re-training of the classifiers on the phone which allows deploying the system swiftly. A set of evaluations shows a classification performance that is comparable to that of state of the art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone’s resources.",
"title": ""
},
{
"docid": "ddb66de70b76427f30fae713f176bc64",
"text": "Identifying whether an utterance is a statement, question, greeting, and so forth is integral to effective automatic understanding of natural dialog. Little is known, however, about how such dialog acts (DAs) can be automatically classified in truly natural conversation. This study asks whether current approaches, which use mainly word information, could be improved by adding prosodic information. The study is based on more than 1000 conversations from the Switchboard corpus. DAs were hand-annotated, and prosodic features (duration, pause, F0, energy, and speaking rate) were automatically extracted for each DA. In training, decision trees based on these features were inferred; trees were then applied to unseen test data to evaluate performance. Performance was evaluated for prosody models alone, and after combining the prosody models with word information--either from true words or from the output of an automatic speech recognizer. For an overall classification task, as well as three subtasks, prosody made significant contributions to classification. Feature-specific analyses further revealed that although canonical features (such as F0 for questions) were important, less obvious features could compensate if canonical features were removed. Finally, in each task, integrating the prosodic model with a DA-specific statistical language model improved performance over that of the language model alone, especially for the case of recognized words. Results suggest that DAs are redundantly marked in natural conversation, and that a variety of automatically extractable prosodic features could aid dialog processing in speech applications.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "d8253659de704969cd9c30b3ea7543c5",
"text": "Frequent itemset mining is an important step of association rules mining. Traditional frequent itemset mining algorithms have certain limitations. For example Apriori algorithm has to scan the input data repeatedly, which leads to high I/O load and low performance, and the FP-Growth algorithm is limited by the capacity of computer's inner stores because it needs to build a FP-tree and mine frequent itemset on the basis of the FP-tree in memory. With the coming of the Big Data era, these limitations are becoming more prominent when confronted with mining large-scale data. In this paper, DPBM, a distributed matrix-based pruning algorithm based on Spark, is proposed to deal with frequent itemset mining. DPBM can greatly reduce the amount of candidate itemset by introducing a novel pruning technique for matrix-based frequent itemset mining algorithm, an improved Apriori algorithm which only needs to scan the input data once. In addition, each computer node reduces greatly the memory usage by implementing DPBM under a latest distributed environment-Spark, which is a lightning-fast distributed computing. The experimental results show that DPBM have better performance than MapReduce-based algorithms for frequent itemset mining in terms of speed and scalability.",
"title": ""
},
{
"docid": "d8c64128c89f3a291b410eefbf00dab2",
"text": "We review the prospects of using yeasts and microalgae as sources of cheap oils that could be used for biodiesel. We conclude that yeast oils, the cheapest of the oils producible by heterotrophic microorganisms, are too expensive to be viable alternatives to the major commodity plant oils. Algal oils are similarly unlikely to be economic; the cheapest form of cultivation is in open ponds which then requires a robust, fast-growing alga that can withstand adventitious predatory protozoa or contaminating bacteria and, at the same time, attain an oil content of at least 40% of the biomass. No such alga has yet been identified. However, we note that if the prices of the major plant oils and crude oil continue to rise in the future, as they have done over the past 12 months, then algal lipids might just become a realistic alternative within the next 10 to 15 years. Better prospects would, however, be to focus on algae as sources of polyunsaturated fatty acids.",
"title": ""
},
{
"docid": "227d8ad4000e6e1d9fd1aa6bff8ed64c",
"text": "Recently, speed sensorless control of Induction Motor (IM) drives received great attention to avoid the different problems associated with direct speed sensors. Among different rotor speed estimation techniques, Model Reference Adaptive System (MRAS) schemes are the most common strategies employed due to their relative simplicity and low computational effort. In this paper a novel adaptation mechanism is proposed which replaces normally used conventional Proportional-Integral (PI) controller in MRAS adaptation mechanism by a Fractional Order PI (FOPI) controller. The performance of two adaptation mechanism controllers has been verified through simulation results using MATLAB/SIMULINK software. It is seen that the performance of the induction motor has improved when FOPI controller is used in place of classical PI controller.",
"title": ""
},
{
"docid": "4a4a868d64a653fac864b5a7a531f404",
"text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.",
"title": ""
},
{
"docid": "2d78a4c914c844a3f28e8f3b9f65339f",
"text": "The availability of abundant data posts a challenge to integrate static customer data and longitudinal behavioral data to improve performance in customer churn prediction. Usually, longitudinal behavioral data are transformed into static data before being included in a prediction model. In this study, a framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data. A novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated. A three phase training algorithm for the H-MK-SVM is developed, implemented and tested. The H-MK-SVM constructs a classification function by estimating the coefficients of both static and longitudinal behavioral variables in the training process without transformation of the longitudinal behavioral data. The training process of the H-MK-SVM is also a feature selection and time subsequence selection process because the sparse non-zero coefficients correspond to the variables selected. Computational experiments using three real-world databases were conducted. Computational results using multiple criteria measuring performance show that the H-MK-SVM directly using longitudinal behavioral data performs better than currently available classifiers.",
"title": ""
},
{
"docid": "ce9345c367db70de1dec07cad0343f71",
"text": "Techniques for digital image tampering are becoming widespread for the availability of low cost technology in which the image could be easily manipulated. Copy-move forgery is one of the tampering techniques that are frequently used and has recently received significant attention. But the existing methods, including block-matching and key point matching based methods, are not able to be used to solve the problem of detecting image forgery in both flat region and non-flat region. In this paper, combining the thinking of these two types of methods, we develop a SURF-based method to tackle this problem. In addition to the determination of forgeries in non-flat region through key point features, our method can be used to detect flat region in images in an effective way, and extract FMT features after blocking the region. By using matching algorithms of similar blocked images, image forgeries in flat region can be determined, which results in the completing of the entire image tamper detection. Experimental results are presented to demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "02ad9bef7d38af14c01ceb6efec8078b",
"text": "Weakness of the will may lead to ineffective goal striving in the sense that people lacking willpower fail to get started, to stay on track, to select instrumental means, and to act efficiently. However, using a simple self-regulation strategy (i.e., forming implementation intentions or making if–then plans) can get around this problem by drastically improving goal striving on the spot. After an overview of research investigating how implementation intentions work, I will discuss how people can use implementation intentions to overcome potential hindrances to successful goal attainment. Extensive empirical research shows that implementation intentions help people to meet their goals no matter whether these hindrances originate from within (e.g., lack of cognitive capabilities) or outside the person (i.e., difficult social situations). Moreover, I will report recent research demonstrating that implementation intentions can even be used to control impulsive cognitive, affective, and behavioral responses that interfere with one’s focal goal striving. In ending, I will present various new lines of implementation intention research, and raise a host of open questions that still deserve further empirical and theoretical analysis.",
"title": ""
},
{
"docid": "aa70864ca9d2285eebe5b46f7c283ebe",
"text": "The centerpiece of this thesis is a new processing paradigm for exploiting instruction level parallelism. This paradigm, called the multiscalar paradigm, splits the program into many smaller tasks, and exploits fine-grain parallelism by executing multiple, possibly (control and/or data) dependent tasks in parallel using multiple processing elements. Splitting the instruction stream at statically determined boundaries allows the compiler to pass substantial information about the tasks to the hardware. The processing paradigm can be viewed as extensions of the superscalar and multiprocessing paradigms, and shares a number of properties of the sequential processing model and the dataflow processing model. The multiscalar paradigm is easily realizable, and we describe an implementation of the multiscalar paradigm, called the multiscalar processor. The central idea here is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. The multiscalar processor supports speculative execution, allows arbitrary dynamic code motion (facilitated by an efficient hardware memory disambiguation mechanism), exploits communication localities, and does all of these with hardware that is fairly straightforward to build. Other desirable aspects of the implementation include decentralization of the critical resources, absence of wide associative searches, and absence of wide interconnection/data paths.",
"title": ""
},
{
"docid": "000652922defcc1d500a604d43c8f77b",
"text": "The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. The function is practically enforced via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.",
"title": ""
},
{
"docid": "6162ad3612b885add014bd09baa5f07a",
"text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.",
"title": ""
},
{
"docid": "29d1502c7edea13ce67aa1e283dc8488",
"text": "An explosive growth in the volume, velocity, and variety of the data available on the Internet has been witnessed recently. The data originated frommultiple types of sources including mobile devices, sensors, individual archives, social networks, Internet of Things, enterprises, cameras, software logs, health data has led to one of the most challenging research issues of the big data era. In this paper, Knowle—an online news management system upon semantic link network model is introduced. Knowle is a news event centrality data management system. The core elements of Knowle are news events on the Web, which are linked by their semantic relations. Knowle is a hierarchical data system, which has three different layers including the bottom layer (concepts), the middle layer (resources), and the top layer (events). The basic blocks of the Knowle system—news collection, resources representation, semantic relations mining, semantic linking news events are given. Knowle does not require data providers to follow semantic standards such as RDF or OWL, which is a semantics-rich self-organized network. It reflects various semantic relations of concepts, news, and events. Moreover, in the case study, Knowle is used for organizing andmining health news, which shows the potential on forming the basis of designing and developing big data analytics based innovation framework in the health domain. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b16407fc67058110b334b047bcfea9ac",
"text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. 
Vygotsky’s Theory of Creativity Gunilla Lindqvist University of Karlstad Correspondence and requests for reprints should be sent to Gunilla Lindqvist, Department of Educational Sciences, University of Karlstad, 65188 Karlstad, Sweden. E-mail: gunilla.lindqvist@",
"title": ""
},
{
"docid": "c684de3eb8a370e3444aee3a37319b46",
"text": "We present an extended version of our work on the design and implementation of a reference model of the human body, the Master Motor Map (MMM) which should serve as a unifying framework for capturing human motions, their representation in standard data structures and formats as well as their reproduction on humanoid robots. The MMM combines the definition of a comprehensive kinematics and dynamics model of the human body with 104 DoF including hands and feet with procedures and tools for unified capturing of human motions. We present online motion converters for the mapping of human and object motions to the MMM model while taking into account subject specific anthropométrie data as well as for the mapping of MMM motion to a target robot kinematics. Experimental evaluation of the approach performed on VICON motion recordings demonstrate the benefits of the MMM as an important step towards standardized human motion representation and mapping to humanoid robots.",
"title": ""
}
] |
scidocsrr
|
a28e7cdf3a39ff608c0d62daf4268019
|
Grounding Topic Models with Knowledge Bases
|
[
{
"docid": "8d8dc05c2de34440eb313503226f7e99",
"text": "Disambiguating entity references by annotating them with unique ids from a catalog is a critical step in the enrichment of unstructured content. In this paper, we show that topic models, such as Latent Dirichlet Allocation (LDA) and its hierarchical variants, form a natural class of models for learning accurate entity disambiguation models from crowd-sourced knowledge bases such as Wikipedia. Our main contribution is a semi-supervised hierarchical model called Wikipedia-based Pachinko Allocation Model} (WPAM) that exploits: (1) All words in the Wikipedia corpus to learn word-entity associations (unlike existing approaches that only use words in a small fixed window around annotated entity references in Wikipedia pages), (2) Wikipedia annotations to appropriately bias the assignment of entity labels to annotated (and co-occurring unannotated) words during model learning, and (3) Wikipedia's category hierarchy to capture co-occurrence patterns among entities. We also propose a scheme for pruning spurious nodes from Wikipedia's crowd-sourced category hierarchy. In our experiments with multiple real-life datasets, we show that WPAM outperforms state-of-the-art baselines by as much as 16% in terms of disambiguation accuracy.",
"title": ""
},
{
"docid": "f6121f69419a074b657bb4a0324bae4a",
"text": "Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling needs to scale to larger topic spaces and use richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. We evaluate SC-LDA’s ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but is significantly faster. 1 Challenge: Leveraging Prior Knowledge in Large-scale Topic Models Topic models, such as Latent Dirichlet Allocation (Blei et al., 2003, LDA), have been successfully used for discovering hidden topics in text collections. LDA is an unsupervised model—it requires no annotation—and discovers, without any supervision, the thematic trends in a text collection. However, LDA’s lack of supervision can lead to disappointing results. Often, the hidden topics learned by LDA fail to make sense to end users. Part of the problem is that the objective function of topic models does not always correlate with human judgments of topic quality (Chang et al., 2009). Therefore, it’s often necessary to incorporate prior knowledge into topic models to improve the model’s performance. Recent work has also shown that by interactive human feedback can improve the quality and stability of topics (Hu and Boyd-Graber, 2012; Yang et al., 2015). Information about documents (Ramage et al., 2009) or words (Boyd-Graber et al., 2007) can improve LDA’s topics. In addition to its occasional inscrutability, scalability can also hamper LDA’s adoption. Conventional Gibbs sampling—the most widely used inference for LDA—scales linearly with the number of topics. Moreover, accurate training usually takes many sampling passes over the dataset. Therefore, for large datasets with millions or even billions of tokens, conventional Gibbs sampling takes too long to finish. For standard LDA, recently introduced fast sampling methods (Yao et al., 2009; Li et al., 2014; Yuan et al., 2015) enable industrial applications of topic modeling to search engines and online advertising, where capturing the “long tail” of infrequently used topics requires large topic spaces. For example, while typical LDA models in academic papers have up to 103 topics, industrial applications with 105–106 topics are common (Wang et al., 2014). Moreover, scaling topic models to many topics can also reveal the hierarchical structure of topics (Downey et al., 2015). Thus, there is a need for topic models that can both benefit from rich prior information and that can scale to large datasets. However, existing methods for improving scalability focus on topic models without prior information. To rectify this, we propose a factor graph model that encodes a potential function over the hidden topic variables, encouraging topics consistent with prior knowledge. The factor model representation admits an efficient sampling algorithm that takes advantage of the model’s sparsity. We show that our method achieves comparable performance but runs significantly faster than baseline methods, enabling models to discover models with many topics enriched by prior knowledge. 
2 Efficient Algorithm for Incorporating Knowledge into LDA In this section, we introduce the factor model for incorporating prior knowledge and show how to efficiently use Gibbs sampling for inference. 2.1 Background: LDA and SparseLDA A statistical topic model represents words in documents in a collection D as mixtures of T topics, which are multinomials over a vocabulary of size V . In LDA, each document d is associated with a multinomial distribution over topics, θd. The probability of a word type w given topic z is φw|z . The multinomial distributions θd and φz are drawn from Dirichlet distributions: α and β are the hyperparameters for θ and φ. We represent the document collection D as a sequence of words w, and topic assignments as z. We use symmetric priors α and β in the model and experiment, but asymmetric priors are easily encoded in the models (Wallach et al., 2009). Discovering the latent topic assignments z from observed words w requires inferring the the posterior distribution P (z|w). Griffiths and Steyvers (2004) propose using collapsed Gibbs sampling. The probability of a topic assignment z = t in document d given an observed word type w and the other topic assignments z− is P (z = t|z−, w) ∝ (nd,t + α) nw,t + β",
"title": ""
},
{
"docid": "ef31d8b3cd83aeb109f62fde4cd8bc8a",
"text": "Many existing knowledge bases (KBs), including Freebase, Yago, and NELL, rely on a fixed ontology, given as an input to the system, which defines the data to be cataloged in the KB, i.e., a hierarchy of categories and relations between them. The system then extracts facts that match the predefined ontology. We propose an unsupervised model that jointly learns a latent ontological structure of an input corpus, and identifies facts from the corpus that match the learned structure. Our approach combines mixed membership stochastic block models and topic models to infer a structure by jointly modeling text, a latent concept hierarchy, and latent semantic relationships among the entities mentioned in the text. As a case study, we apply the model to a corpus of Web documents from the software domain, and evaluate the accuracy of the various components of the learned ontology.",
"title": ""
}
] |
[
{
"docid": "814aa0089ce9c5839d028d2e5aca450d",
"text": "Espresso is a document-oriented distributed data serving platform that has been built to address LinkedIn's requirements for a scalable, performant, source-of-truth primary store. It provides a hierarchical document model, transactional support for modifications to related documents, real-time secondary indexing, on-the-fly schema evolution and provides a timeline consistent change capture stream. This paper describes the motivation and design principles involved in building Espresso, the data model and capabilities exposed to clients, details of the replication and secondary indexing implementation and presents a set of experimental results that characterize the performance of the system along various dimensions.\n When we set out to build Espresso, we chose to apply best practices in industry, already published works in research and our own internal experience with different consistency models. Along the way, we built a novel generic distributed cluster management framework, a partition-aware change- capture pipeline and a high-performance inverted index implementation.",
"title": ""
},
{
"docid": "75b2f12152526a0fbc5648261faca1cc",
"text": "Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",
"title": ""
},
{
"docid": "44e135418dc6480366bb5679b62bc4f9",
"text": "There is growing interest regarding the role of the right inferior frontal gyrus (RIFG) during a particular form of executive control referred to as response inhibition. However, tasks used to examine neural activity at the point of response inhibition have rarely controlled for the potentially confounding effects of attentional demand. In particular, it is unclear whether the RIFG is specifically involved in inhibitory control, or is involved more generally in the detection of salient or task relevant cues. The current fMRI study sought to clarify the role of the RIFG in executive control by holding the stimulus conditions of one of the most popular response inhibition tasks-the Stop Signal Task-constant, whilst varying the response that was required on reception of the stop signal cue. Our results reveal that the RIFG is recruited when important cues are detected, regardless of whether that detection is followed by the inhibition of a motor response, the generation of a motor response, or no external response at all.",
"title": ""
},
{
"docid": "bc35d87706c66350f4cec54befc9acc2",
"text": "This paper presents a new improved term frequency/inverse document frequency (TF-IDF) approach which uses confidence, support and characteristic words to enhance the recall and precision of text classification. Synonyms defined by a lexicon are processed in the improved TF-IDF approach. We detailedly discuss and analyze the relationship among confidence, recall and precision. The experiments based on science and technology gave promising results that the new TF-IDF approach improves the precision and recall of text classification compared with the conventional TF-IDF approach.",
"title": ""
},
{
"docid": "88a4ab49e7d3263d5d6470d123b6e74b",
"text": "Graph databases have gained renewed interest in the last years, due to its applications in areas such as the Semantic Web and Social Networks Analysis. We study the problem of querying graph databases, and, in particular, the expressiveness and complexity of evaluation for several general-purpose query languages, such as the regular path queries and its extensions with conjunctions and inverses. We distinguish between two semantics for these languages. The first one, based on simple paths, easily leads to intractability, while the second one, based on arbitrary paths, allows tractable evaluation for an expressive family of languages.\n We also study two recent extensions of these languages that have been motivated by modern applications of graph databases. The first one allows to treat paths as first-class citizens, while the second one permits to express queries that combine the topology of the graph with its underlying data.",
"title": ""
},
{
"docid": "625b96d21cb9ff05785aa34c98c567ff",
"text": "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"title": ""
},
{
"docid": "10646c29afc4cc5c0a36ca508aabb41a",
"text": "As high-resolution fingerprint images are becoming more common, the pores have been found to be one of the promising candidates in improving the performance of automated fingerprint identification systems (AFIS). This paper proposes a deep learning approach towards pore extraction. It exploits the feature learning and classification capability of convolutional neural networks (CNNs) to detect pores on fingerprints. Besides, this paper also presents a unique affine Fourier moment-matching (AFMM) method of matching and fusing the scores obtained for three different fingerprint features to deal with both local and global linear distortions. Combining the two aforementioned contributions, an EER of 3.66% can be observed from the experimental results.",
"title": ""
},
{
"docid": "0a0ca1f866a4be1a3f264c6e3c888adc",
"text": "Printed circuit board (PCB) windings are convenient for many applications given their ease of manufacture, high repeatability, and low profile. In many cases, the use of multistranded litz wires is appropriate due to the rated power, frequency range, and efficiency constraints. This paper proposes a manufacturing technique and a semianalytical loss model for PCB windings using planar litz structure to obtain a similar ac loss reduction to that of conventional windings of round wires with litz structure. Different coil prototypes have been tested in several configurations to validate the proposal.",
"title": ""
},
{
"docid": "c77042cb1a8255ac99ebfbc74979c3c6",
"text": "Machine translation systems require semantic knowledge and grammatical understanding. Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.",
"title": ""
},
{
"docid": "ecfb05d557ebe524e3821fcf6ce0f985",
"text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.",
"title": ""
},
{
"docid": "3301a0cf26af8d4d8c7b2b9d56cec292",
"text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
"title": ""
},
{
"docid": "7beeea42e8f5d0f21ea418aa7f433ab9",
"text": "This application note describes principles and uses for continuous ST segment monitoring. It also provides a detailed description of the ST Analysis algorithm implemented in the multi-lead ST/AR (ST and Arrhythmia) algorithm, and an assessment of the ST analysis algorithm's performance.",
"title": ""
},
{
"docid": "d540250c51e97622a10bcb29f8fde956",
"text": "With many advantages of rectangular waveguide and microstrip lines, substrate integrated waveguide (SIW) can be used for design of planar waveguide-like slot antenna. However, the bandwidth of this kind of antenna structure is limited. In this work, a parasitic dipole is introduced and coupled with the SIW radiate slot. The results have indicated that the proposed technique can enhance the bandwidth of the SIW slot antenna significantly. The measured bandwidth of fabricated antenna prototype is about 19%, indicating about 115% bandwidth enhancement than the ridged substrate integrated waveguide (RSIW) slot antenna.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "4e2bed31e5406e30ae59981fa8395d5b",
"text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.",
"title": ""
},
{
"docid": "7d5215dc3213b13748f97aa21898e86e",
"text": "Several tasks in computer vision and machine learning can be modeled as MRF-MAP inference problems. Using higher order potentials to model complex dependencies can significantly improve the performance. The problem can often be modeled as minimizing a sum of submodular (SoS) functions. Since sum of submodular functions is also submodular, existing submodular function minimization (SFM) techniques can be employed for optimal inference in polynomial time [1], [2]. These techniques, though oblivious to the clique sizes, have limited scalability in the number of pixels. On the other hand, state of the art algorithms in computer vision [3], [47] can handle problems with a large number of pixels but fail to scale to large clique sizes. In this paper, we adapt two SFM algorithms [1], [5], to exploit the sum of submodular structure, thereby helping them scale to large number of pixels while maintaining scalability with large clique sizes. Our ideas are general enough and can be extended to adapt other existing SFM algorithms as well. Our experiments on computer vision problems demonstrate that our approach can easily scale up to clique sizes of 300, thereby unlocking the usage of really large sized cliques for MRF-MAP inference problems.",
"title": ""
},
{
"docid": "07e03419430b7ea8ca3c7b02f9340d46",
"text": "Recently, [2] presented a security attack on the privacy-preserving outsourcing scheme for biometric identification proposed in [1]. In [2], the author claims that the scheme CloudBI-II proposed in [1] can be broken under the collusion case. That is, when the cloud server acts as a user to submit a number of identification requests, CloudBI-II is no longer secure. In this technical report, we will explicitly show that the attack method proposed in [2] doesn’t work in fact.",
"title": ""
},
{
"docid": "b97c9e8238f74539e8a17dcffecdd35f",
"text": "This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from end part of a music piece are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At the classification, the outputs provided by each individual classifier are combined through simple combination rules such as majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.",
"title": ""
},
{
"docid": "ef0625150b0eb6ae68a214256e3db50d",
"text": "Undergraduate engineering students require a practical application of theoretical concepts learned in classrooms in order to appropriate a complete management of them. Our aim is to assist students to learn control systems theory in an engineering context, through the design and implementation of a simple and low cost ball and plate plant. Students are able to apply mathematical and computational modelling tools, control systems design, and real-time software-hardware implementation while solving a position regulation problem. The whole project development is presented and may be assumed as a guide for replicate results or as a basis for a new design approach. In both cases, we end up in a tool available to implement and assess control strategies experimentally.",
"title": ""
},
{
"docid": "72fec6dc287b0aa9aea97a22268c1125",
"text": "Given a symmetric matrix what is the nearest correlation matrix, that is, the nearest symmetric positive semidefinite matrix with unit diagonal? This problem arises in the finance industry, where the correlations are between stocks. For distance measured in two weighted Frobenius norms we characterize the solution using convex analysis. We show how the modified alternating projections method can be used to compute the solution for the more commonly used of the weighted Frobenius norms. In the finance application the original matrix has many zero or negative eigenvalues; we show that for a certain class of weights the nearest correlation matrix has correspondingly many zero eigenvalues and that this fact can be exploited in the computation.",
"title": ""
}
] |
scidocsrr
|
1cbb8aac17cdcd4ff4ffb8a537dfbe54
|
Multilevel Inverter For Grid-Connected PV System Employing Digital PI Controller
|
[
{
"docid": "913709f4fe05ba2783c3176ed00015fe",
"text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "3e4a2d4564e9904b3d3b0457860da5cf",
"text": "Model-based, torque-level control can offer precision and speed advantages over velocity-level or position-level robot control. However, the dynamic parameters of the robot must be identified accurately. Several steps are involved in dynamic parameter identification, including modeling the system dynamics, joint position/torque data acquisition and filtering, experimental design, dynamic parameters estimation and validation. In this paper, we propose a novel, computationally efficient and intuitive optimality criterion to design the excitation trajectory for the robot to follow. Experiments are carried out for a 6 degree of freedom (DOF) Staubli TX-90 robot. We validate the dynamics parameters using torque prediction accuracy and compare to existing methods. The RMS errors of the prediction were small, and the computation time for the new, optimal objective function is an order of magnitude less than for existing approaches. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0f122797e9102c6bab57e64176ee5e84",
"text": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"title": ""
},
{
"docid": "1afe9ff72d69e09c24a11187ea7dca2d",
"text": "In the Intelligent Robotics Laboratory (IRL) at Vanderbilt University we seek to develop service robots with a high level of social intelligence and interactivity. In order to achieve this goal, we have identified two main issues for research. The first issue is how to achieve a high level of interaction between the human and the robot. This has lead to the formulation of our philosophy of Human Directed Local Autonomy (HuDL), a guiding principle for research, design, and implementation of service robots. The motivation for integrating humans into a service robot system is to take advantage of human intelligence and skill. Human intelligence can be used to interpret robot sensor data, eliminating computationally expensive and possibly error-prone automated analyses. Human skill is a valuable resource for trajectory and path planning as well as for simplifying the search process. In this paper we present our plans for integrating humans into a service robot system. We present our paradigm for human/robot interaction, HuDL. The second issue is the general problem of system integration, with a specific focus on integrating humans into the service robotic system. This work has lead to the development of the Intelligent Machine Architecture (IMA), a novel software architecture that has been specifically designed to simplify the integration of the many diverse algorithms, sensors, and actuators necessary for socially intelligent service robots. Our testbed system is described, and some example applications of HuDL for aids to the physically disabled are given. An evaluation of the effectiveness of the IMA is also presented.",
"title": ""
},
{
"docid": "d0cf952865b72f25d9b8b049f717d976",
"text": "In this paper, we consider the problem of estimating the relative expertise score of users in community question and answering services (CQA). Previous approaches typically only utilize the explicit question answering relationship between askers and an-swerers and apply link analysis to address this problem. The im-plicit pairwise comparison between two users that is implied in the best answer selection is ignored. Given a question and answering thread, it's likely that the expertise score of the best answerer is higher than the asker's and all other non-best answerers'. The goal of this paper is to explore such pairwise comparisons inferred from best answer selections to estimate the relative expertise scores of users. Formally, we treat each pairwise comparison between two users as a two-player competition with one winner and one loser. Two competition models are proposed to estimate user expertise from pairwise comparisons. Using the NTCIR-8 CQA task data with 3 million questions and introducing answer quality prediction based evaluation metrics, the experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users. Furthermore, it's shown that pairwise comparison based competi-tion models have better discriminative power than other methods. It's also found that answer quality (best answer) is an important factor to estimate user expertise.",
"title": ""
},
{
"docid": "b6f4bd15f7407b56477eb2cfc4c72801",
"text": "In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction.",
"title": ""
},
{
"docid": "a1b24627f8ba518fa9285596cc931e32",
"text": "[3] Rakesh Agrawal and Arun Swami. A one-pass space-efficient algorithm for finding quantiles. A one-pass algorithm for accurately estimating quantiles for disk-resident data. [8] Jürgen Beringer and Eyke Hüllermeier. An efficient algorithm for instance-based learning on data streams.",
"title": ""
},
{
"docid": "0ce7465e40b3b13e5c316fb420a766d9",
"text": "We have been developing ldquoSmart Suitrdquo as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.",
"title": ""
},
{
"docid": "15f099c342b7f9beae9c0b193f49f7f4",
"text": "We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed.",
"title": ""
},
{
"docid": "2e1cb87045b5356a965aa52e9e745392",
"text": "Community detection is a common problem in graph data analytics that consists of finding groups of densely connected nodes with few connections to nodes outside of the group. In particular, identifying communities in large-scale networks is an important task in many scientific domains. In this review, we evaluated eight state-of-the-art and five traditional algorithms for overlapping and disjoint community detection on large-scale real-world networks with known ground-truth communities. These 13 algorithms were empirically compared using goodness metrics that measure the structural properties of the identified communities, as well as performance metrics that evaluate these communities against the ground-truth. Our results show that these two types of metrics are not equivalent. That is, an algorithm may perform well in terms of goodness metrics, but poorly in terms of performance metrics, or vice versa. © 2014 The Authors. WIREs Computational Statistics published by Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "b1b2a83d67456c0f0bf54092cbb06e65",
"text": "The transmission of voice communications as datagram packets over IP networks, commonly known as voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other. The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the \"human authentication\" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.",
"title": ""
},
{
"docid": "70366939a4386fd4a712efc704c8e248",
"text": "k-Means is a versatile clustering algorithm widely used in practice. To cluster large data sets, state-of-the-art implementations use GPUs to shorten the data to knowledge time. These implementations commonly assign points on a GPU and update centroids on a CPU. We identify two main shortcomings of this approach. First, it requires expensive data exchange between processors when switching between the two processing steps point assignment and centroid update. Second, even when processing both steps of k-means on the same processor, points still need to be read two times within an iteration, leading to inefficient use of memory bandwidth. In this paper, we present a novel approach for centroid update that allows us to efficiently process both phases of k-means on GPUs. We fuse point assignment and centroid update to execute one iteration with a single pass over the points. Our evaluation shows that our k-means approach scales to very large data sets. Overall, we achieve up to 20 × higher throughput compared to the state-of-the-art approach.",
"title": ""
},
{
"docid": "f9cba94dee194cb38923a3ba47b0a2b6",
"text": "We investigate the value of feature engineering and neural network models for predicting successful writing. Similar to previous work, we treat this as a binary classification task and explore new strategies to automatically learn representations from book contents. We evaluate our feature set on two different corpora created from Project Gutenberg books. The first presents a novel approach for generating the gold standard labels for the task and the other is based on prior research. Using a combination of hand-crafted and recurrent neural network learned representations in a dual learning setting, we obtain the best performance of 73.50% weighted F1-score.",
"title": ""
},
{
"docid": "82857fedec78e8317498e3c66268d965",
"text": "In this paper, we provide an improved evolutionary algorithm for bilevel optimization. It is an extension of a recently proposed Bilevel Evolutionary Algorithm based on Quadratic Approximations (BLEAQ). Bilevel optimization problems are known to be difficult and computationally demanding. The recently proposed BLEAQ approach has been able to bring down the computational expense significantly as compared to the contemporary approaches. The strategy proposed in this paper further improves the algorithm by incorporating archiving and local search. Archiving is used to store the feasible members produced during the course of the algorithm that provide a larger pool of members for better quadratic approximations of optimal lower level solutions. Frequent local searches at upper level supported by the quadratic approximations help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems, and comparison results against the contemporary approaches are also provided.",
"title": ""
},
{
"docid": "5123d52a50b75e37e90ed7224d531a18",
"text": "Tarlov or perineural cysts are nerve root cysts found most commonly at the sacral spine level arising between covering layers of the perineurium and the endoneurium near the dorsal root ganglion. The cysts are relatively rare and most of them are asymptomatic. Some Tarlov cysts can exert pressure on nerve elements resulting in pain, radiculopathy and even multiple radiculopathy of cauda equina. There is no consensus on the appropriate therapeutic options of Tarlov cysts. The authors present a case of two sacral cysts diagnosed with magnetic resonance imaging. The initial symptoms were low back pain and sciatica and progressed to cauda equina syndrome. Surgical treatment was performed by sacral laminectomy and wide cyst fenestration. The neurological deficits were recovered and had not recurred after a follow-up period of nine months. The literature was reviewed and discussed. This is the first reported case in Thailand.",
"title": ""
},
{
"docid": "39710768ed8ec899e412cccae7e7d262",
"text": "Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than any prior methods. We show that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and show how to extend this result to account for approximations of evasion attacks. Extensive experimental evaluation demonstrates that our retraining methods are nearly indistinguishable from state-of-the-art algorithms for optimizing adversarial risk, but are more general and far more scalable. The experiments also confirm that without retraining, our adversarial framework dramatically reduces the effectiveness of learning. In contrast, retraining significantly boosts robustness to evasion attacks without significantly compromising overall accuracy.",
"title": ""
},
{
"docid": "8474b5b3ed5838e1d038e73579168f40",
"text": "For the first time to the best of our knowledge, this paper provides an overview of millimeter-wave (mmWave) 5G antennas for cellular handsets. Practical design considerations and solutions related to the integration of mmWave phased-array antennas with beam switching capabilities are investigated in detail. To experimentally examine the proposed methodologies, two types of mesh-grid phased-array antennas featuring reconfigurable horizontal and vertical polarizations are designed, fabricated, and measured at the 60 GHz spectrum. Afterward the antennas are integrated with the rest of the 60 GHz RF and digital architecture to create integrated mmWave antenna modules and implemented within fully operating cellular handsets under plausible user scenarios. The effectiveness, current limitations, and required future research areas regarding the presented mmWave 5G antenna design technologies are studied using mmWave 5G system benchmarks.",
"title": ""
},
{
"docid": "a0c3d1bae7b670884afd3e7119fcd095",
"text": "Twitter is a widely-used social networking service which enables its users to post text-based messages, so-called tweets. POI tags on tweets can show more human-readable high-level information about a place rather than just a pair of coordinates. In this paper, we attempt to predict the POI tag of a tweet based on its textual content and time of posting. Potential applications include accurate positioning when GPS devices fail and disambiguating places located near each other. We consider this task as a ranking problem, i.e., we try to rank a set of candidate POIs according to a tweet by using language and time models. To tackle the sparsity of tweets tagged with POIs, we use web pages retrieved by search engines as an additional source of evidence. From our experiments, we find that users indeed leak some information about their accurate locations in their tweets.",
"title": ""
},
{
"docid": "3cdc2052eb37bdbb1f7d38ec90a095c4",
"text": "We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring highintensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring on various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably methods that are well-engineered for specific scenarios.",
"title": ""
},
{
"docid": "35dc1eed6439bae9c74605e75bf8b3a2",
"text": "We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.",
"title": ""
},
{
"docid": "7d74b896764837904019a0abff967065",
"text": "Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as \\bifurcation points\". At bifurcation points, the output of a network can change discontinuously with the change of parameters and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations.",
"title": ""
}
] |
scidocsrr
|
9901f05894b9deb977fd2f8ab00096ad
|
Analysis of the antecedents of knowledge sharing and its implication for SMEs internationalization
|
[
{
"docid": "d5464818af641aae509549f586c5526d",
"text": "The learning and knowledge that we have, is, at the most, but little compared with that of which we are ignorant. Plato Knowledge management (KM) is a vital and complex topic of current interest to so many in business, government and the community in general, that there is an urgent need to expand the role of empirical research to inform knowledge management practice. However, one of the most striking aspects of knowledge management is the diversity of the field and the lack of universally accepted definitions of the term itself and its derivatives, knowledge and management. As a consequence of the multidisciplinary nature of KM, the terms inevitably hold a difference in meaning and emphasis for different people. The initial chapter of this book addresses the challenges brought about by these differences. This chapter begins with a critical assessment of some diverse frameworks for knowledge management that have been appearing in the international academic literature of many disciplines for some time. Then follows a description of ways that these have led to some holistic and integrated frameworks currently being developed by KM researchers in Australia.",
"title": ""
},
{
"docid": "5e04372f08336da5b8ab4d41d69d3533",
"text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.",
"title": ""
}
] |
[
{
"docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c",
"text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.",
"title": ""
},
{
"docid": "affc663476dc4d5299de5f89f67e5f5a",
"text": "Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance Metric learning is to learn a distance metric for the input space of data from a given collection of pair of similar/dissimilar points that preserves the distance relation among the training data. In recent years, many studies have demonstrated, both empirically and theoretically, that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. This paper surveys the field of distance metric learning from a principle perspective, and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised learning versus unsupervised learning, learning in a global sense versus in a local sense; and the distance matrix based on linear kernel versus nonlinear kernel. In addition, this paper discusses a number of techniques that is central to distance metric learning, including convex programming, positive semi-definite programming, kernel learning, dimension reduction, K Nearest Neighbor, large margin classification, and graph-based approaches.",
"title": ""
},
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "6e848928859248e0597124cee0560e43",
"text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.",
"title": ""
},
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "3ba65ec924fff2d246197bb2302fb86e",
"text": "Guidelines for evaluating the levels of evidence based on quantitative research are well established. However, the same cannot be said for the evaluation of qualitative research. This article discusses a process members of an evidence-based clinical practice guideline development team with the Association of Women's Health, Obstetric and Neonatal Nurses used to create a scoring system to determine the strength of qualitative research evidence. A brief history of evidence-based clinical practice guideline development is provided, followed by discussion of the development of the Nursing Management of the Second Stage of Labor evidence-based clinical practice guideline. The development of the qualitative scoring system is explicated, and implications for nursing are proposed.",
"title": ""
},
{
"docid": "46ff38a51f766cd5849a537cc0632660",
"text": "BACKGROUND\nLinear IgA bullous dermatosis (LABD) is an acquired autoimmune sub-epidermal vesiculobullous disease characterized by continuous linear IgA deposit on the basement membrane zone, as visualized on direct immunofluorescence microscopy. LABD can affect both adults and children. The disease is very uncommon, with a still unknown incidence in the South American population.\n\n\nMATERIALS AND METHODS\nAll confirmed cases of LABD by histological and immunofluorescence in our hospital were studied.\n\n\nRESULTS\nThe confirmed cases were three females and two males, aged from 8 to 87 years. Precipitant events associated with LABD were drug consumption (non-steroid inflammatory agents in two cases) and ulcerative colitis (one case). Most of our patients were treated with dapsone, resulting in remission.\n\n\nDISCUSSION\nOur series confirms the heterogeneous clinical features of this uncommon disease in concordance with a larger series of patients reported in the literature.",
"title": ""
},
{
"docid": "7970ec4bd6e17d70913d88e07a39f82d",
"text": "This thesis deals with Chinese characters (Hanzi): their key characteristics and how they could be used as a kind of knowledge resource in the (Chinese) NLP. Part 1 deals with basic issues. In Chapter 1, the motivation and the reasons for reconsidering the writing system will be presented, and a short introduction to Chinese and its writing system will be given in Chapter 2. Part 2 provides a critical review of the current, ongoing debate about Chinese characters. Chapter 3 outlines some important linguistic insights from the vantage point of indigenous scriptological and Western linguistic traditions, as well as a new theoretical framework in contemporary studies of Chinese characters. The focus of Chapter 4 concerns the search for appropriate mathematical descriptions with regard to the systematic knowledge information hidden in characters. The subject matter of mathematical formalization of the shape structure of Chinese characters is depicted as well. Part 3 illustrates the representation issues. Chapter 5 addresses the design and construction of the HanziNet, an enriched conceptual network of Chinese characters. Topics that are covered in this chapter include the ideas, architecture, methods and ontology design. In Part 4, a case study based on the above mentioned ideas will be launched. Chapter 6 presents an experiment exploring the character-triggered semantic class of Chinese unknown words. Finally, Chapter 7 summarizes the major findings of this thesis. Next, it depicts some potential avenues in the future, and assesses the theoretical implications of these findings for computational linguistic theory.",
"title": ""
},
{
"docid": "09085fc15308a96cd9441bb0e23e6c1a",
"text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.",
"title": ""
},
{
"docid": "a017ab9f310f9f36f88bf488ac833f05",
"text": "Wireless data communication technology has eliminated wired connections for data transfer to portable devices. Wireless power technology offers the possibility of eliminating the remaining wired connection: the power cord. For ventricular assist devices (VADs), wireless power technology will eliminate the complications and infections caused by the percutaneous wired power connection. Integrating wireless power technology into VADs will enable VAD implants to become a more viable option for heart failure patients (of which there are 80 000 in the United States each year) than heart transplants. Previous transcutaneous energy transfer systems (TETS) have attempted to wirelessly power VADs ; however, TETS-based technologies are limited in range to a few millimeters, do not tolerate angular misalignment, and suffer from poor efficiency. The free-range resonant electrical delivery (FREE-D) wireless power system aims to use magnetically coupled resonators to efficiently transfer power across a distance to a VAD implanted in the human body, and to provide robustness to geometric changes. Multiple resonator configurations are implemented to improve the range and efficiency of wireless power transmission to both a commercially available axial pump and a VentrAssist centrifugal pump [3]. An adaptive frequency tuning method allows for maximum power transfer efficiency for nearly any angular orientation over a range of separation distances. Additionally, laboratory results show the continuous operation of both pumps using the FREE-D system with a wireless power transfer efficiency upwards of 90%.",
"title": ""
},
{
"docid": "819f5df03cebf534a51eb133cd44cb0d",
"text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "be41d072e3897506fad111549e7bf862",
"text": "Handing unbalanced data and noise are two important issues in the field of machine learning. This paper proposed a complete framework of fuzzy relevance vector machine by weighting the punishment terms of error in Bayesian inference process of relevance vector machine (RVM). Above problems can be learned within this framework with different kinds of fuzzy membership functions. Experiments on both synthetic data and real world data demonstrate that fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and reducing the effects of noises or outliers. 2008 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "b851cf64be0684f63e63e7317aaada5c",
"text": "With the increasing popularity of cloud-based data services, data owners are highly motivated to store their huge amount of potentially sensitive personal data files on remote servers in encrypted form. Clients later can query over the encrypted database to retrieve files while protecting privacy of both the queries and the database, by allowing some reasonable leakage information. To this end, the notion of searchable symmetric encryption (SSE) was proposed. Meanwhile, recent literature has shown that most dynamic SSE solutions leaking information on updated keywords are vulnerable to devastating file-injection attacks. The only way to thwart these attacks is to design forward-private schemes. In this paper, we investigate new privacy-preserving indexing and query processing protocols which meet a number of desirable properties, including the multi-keyword query processing with conjunction and disjunction logic queries, practically high privacy guarantees with adaptive chosen keyword attack (CKA2) security and forward privacy, the support of dynamic data operations, and so on. Compared with previous schemes, our solutions are highly compact, practical, and flexible. Their performance and security are carefully characterized by rigorous analysis. Experimental evaluations conducted over a large representative data set demonstrate that our solutions can achieve modest search time efficiency, and they are practical for use in large-scale encrypted database systems.",
"title": ""
},
{
"docid": "124729483d5db255b60690e2facbfe45",
"text": "Human social intelligence depends on a diverse array of perceptual, cognitive, and motivational capacities. Some of these capacities depend on neural systems that may have evolved through modification of ancestral systems with non-social or more limited social functions (evolutionary repurposing). Social intelligence, in turn, enables new forms of repurposing within the lifetime of an individual (cultural and instrumental repurposing), which entail innovating over and exploiting pre-existing circuitry to meet problems our brains did not evolve to solve. Considering these repurposing processes can provide insight into the computations that brain regions contribute to social information processing, generate testable predictions that usefully constrain social neuroscience theory, and reveal biologically imposed constraints on cultural inventions and our ability to respond beneficially to contemporary challenges.",
"title": ""
},
{
"docid": "c5e078cb9835db450be894aee477d00c",
"text": "I would like to jump on the blockchain bandwagon. I would like to be able to say that blockchain is the solution to the longstanding problem of secure identity on the Internet. I would like to say that everyone in the world will soon have a digital identity. Put yourself on the blockchain and never again ask yourself, Who am I? - you are your blockchain address.",
"title": ""
},
{
"docid": "762d6e9a8f0061e3a2f1b1c0eeba2802",
"text": "A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangling abstract factors from each other. It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. This provides a powerful constraint on the representation in that such low-dimensional thought vectors can correspond to statements about reality which are true, highly probable, or very useful for taking decisions. The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from the maximum likelihood approaches to modelling data and how states unfold in the future based on an agent's actions. Instead of making predictions in the sensory (e.g. pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions. The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.",
"title": ""
},
{
"docid": "57e2adea74edb5eaf5b2af00ab3c625e",
"text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] |
scidocsrr
|
2424f6a833428f89922607a490aa2bef
|
City-scale landmark identification on mobile devices
|
[
{
"docid": "a7c330c9be1d7673bfff43b0544db4ea",
"text": "The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index.",
"title": ""
}
] |
[
{
"docid": "304f4cb3872780dd54ebe53d43c37bc6",
"text": "Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, where poor performance is attributed to exposure bias; at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial based approaches for NLG, on the account that GANs do not suffer from exposure bias. In this work, we make several surprising observations which contradict common beliefs. First, we revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model’s conditional distributions. Second, we leverage the control over the quality / diversity tradeoff given by this parameter to evaluate models over the whole quality-diversity spectrum, and find MLE models constantly outperform the proposed GAN variants over the whole quality-diversity space. Our results have several implications: 1) The impact of exposure bias on sample quality is less severe than previously thought, 2) temperature tuning provides a better quality / diversity trade off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive. 1 Recent Developments in NLG GANs are an instance of generative models based on a competition between a generator network Gθ and a discriminator network Dφ. The generator network Gθ represents a probability distribution pmodel(x). The discriminator Dφ(x) attempts to distinguish whether an input value x is real (came from the training data) or fake (came from the generator). Mathematically, the GAN objective can be formulated as a minimax game min θ max φ Ex∼pdata [logDφ(x)] + Ex∼Gθ [1− logDφ(x)]. Preprint. Work in progress. ar X iv :1 81 1. 02 54 9v 1 [ cs .C L ] 6 N ov 2 01 8 GANs originally were applied on continuous data like images. This is because the training procedure relied on backpropagation through the discriminator into the generator. Discrete (sequential) data require an alternative approach. [Yu et al., 2017] estimate the gradient to the generator via REINFORCE policy gradients [Williams, 1992]. In their formulation, the discriminator evaluates full sequences. Therefore, to provide error attribution earlier for incomplete sequences and to reduce the variance of gradients they perform k Monte-Carlo rollouts until the sentence is completed. [Yu et al., 2017] advertise their model using two tasks which we argue (with hindsight) are flawed. First, they introduce a synthetic evaluation procedure where the underlying data distribution P is known and can be queried. By representing P with an LSTM (referred to as an oracle in the literature) they directly compute the likelihood of samples drawn from a generative model Gθ. The problem is they benchmark models against each other on this likelihood alone, i.e., the diagnostic is completely blind to diversity. For example, a model that always outputs the same highly likely sequence would easily outperform other potentially superior models. For real data, there was no agreed upon metric to evaluate the quality of unconditional NLG at the time. 
This led the authors to propose a new metric, Corpus-level BLEU, which computes the fraction of n-grams in a sample that appear in a reference corpus. Again, this metric is agnostic to diversity. Generating a single good sentence over and over will gives a perfect BLEU score. 0.3 0.2 0.1 Negative BLEU-5 0.2 0.3 0.4 0.5",
"title": ""
},
{
"docid": "74c48ec7adb966fc3024ed87f6102a1a",
"text": "Quantitative accessibility metrics are widely used in accessibility evaluation, which synthesize a summative value to represent the accessibility level of a website. Many of these metrics are the results of a two-step process. The first step is the inspection with regard to potential barriers while different properties are reported, and the second step aggregates these fine-grained reports with varying weights for checkpoints. Existing studies indicate that finding appropriate weights for different checkpoint types is a challenging issue. Although some metrics derive the checkpoint weights from the WCAG priority levels, previous investigations reveal that the correlation between the WCAG priority levels and the user experience is not significant. Moreover, our website accessibility evaluation results also confirm the mismatches between the ranking of websites using existing metrics and the ranking based on user experience. To overcome this limitation, we propose a novel metric called the Web Accessibility Experience Metric (WAEM) that can better match the accessibility evaluation results with the user experience of people with disabilities by aligning the evaluation metric with the partial user experience order (PUEXO), i.e. pairwise comparisons between different websites. A machine learning model is developed to derive the optimal checkpoint weights from the PUEXO. Experiments on real-world web accessibility evaluation data sets validate the effectiveness of WAEM.",
"title": ""
},
{
"docid": "f6783c1f37bb125fd35f4fbfedfde648",
"text": "This paper presents an attributed graph-based approach to an intricate data mining problem of revealing affiliated, interdependent entities that might be at risk of being tempted into fraudulent transfer pricing. We formalize the notions of controlled transactions and interdependent parties in terms of graph theory. We investigate the use of clustering and rule induction techniques to identify candidate groups (hot spots) of suspect entities. Further, we find entities that require special attention with respect to transfer pricing audits using network analysis and visualization techniques in IBM i2 Analyst's Notebook.",
"title": ""
},
{
"docid": "3d862e488798629d633f78260a569468",
"text": "Training workshops and professional meetings are important tools for capacity building and professional development. These social events provide professionals and educators a platform where they can discuss and exchange constructive ideas, and receive feedback. In particular, competition-based training workshops where participants compete on solving similar and common challenging problems are effective tools for stimulating students’ learning and aspirations. This paper reports the results of a two-day training workshop where memory and disk forensics were taught using a competition-based security educational tool. The workshop included training sessions for professionals, educators, and students to learn features of Tracer FIRE, a competition-based digital forensics and assessment tool, developed by Sandia National Laboratories. The results indicate that competitionbased training can be very effective in stimulating students’ motivation to learn. However, extra caution should be taken into account when delivering these types of training workshops. Keywords-component; cyber security, digital forenciscs, partcipatory training workshop, competition-based learning,",
"title": ""
},
{
"docid": "8e2da8870546277443a6da9e4284c0f3",
"text": "Executive functions include abilities of goal formation, planning, carrying out goal-directed plans, and effective performance. This article aims at reviewing some of the current knowledge surrounding executive functioning and presenting the contrasting views regarding this concept. The neural substrates of the executive system are examined as well as the evolution of executive functioning, from development to decline. There is clear evidence of the vulnerability of executive functions to the effects of age over lifespan. The first executive function to emerge in children is the ability to inhibit overlearned behavior and the last to appear is verbal fluency. Inhibition of irrelevant information seems to decline earlier than set shifting and verbal fluency during senescence. The sequential progression and decline of these functions has been paralleled with the anatomical changes of the frontal lobe and its connections with other brain areas. Generalization of the results presented here are limited due to methodological differences across studies. Analysis of these differences is presented and suggestions for future research are offered.",
"title": ""
},
{
"docid": "c36bfde4e2f1cd3a5d6d8c0bcb8806d8",
"text": "A 20/20 vision in ophthalmology implies a perfect view of things that are in front of you. The term is also used to mean a perfect sight of the things to come. Here we focus on a speculative vision of the VLDB in the year 2020. This panel is the follow-up of the one I organised (with S. Navathe) at the Kyoto VLDB in 1986, with the title: \"Anyone for a VLDB in the Year 2000?\". In that panel, the members discussed the major advances made in the database area and conjectured on its future, following a concern of many researchers that the database area was running out of interesting research topics and therefore it might disappear into other research topics, such as software engineering, operating systems and distributed systems. That did not happen.",
"title": ""
},
{
"docid": "c551575e68a8061461dc6c78b76a0386",
"text": "Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-ofthe-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text. Keywords—Scene text detection, fully convolutional network, holistic prediction, natural images.",
"title": ""
},
{
"docid": "a05eb1631da751562fd25913b578032a",
"text": "In this paper, we examine the intergenerational gaming practices of four generations of console gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our data highlight the extent to which existing gaming technologies support interactions within collocated intergenerational groups, and our analysis reveals a more generationally flexible suite of roles in these computer-mediated interactions than have been documented by previous studies of more traditional collocated, intergenerational interactions. Finally, we offer implications for game designers who wish to make console games more accessible to intergenerational groups.",
"title": ""
},
{
"docid": "ab0f8feac4000464d406369bea87955a",
"text": "Modern operating system kernels employ address space layout randomization (ASLR) to prevent control-flow hijacking attacks and code-injection attacks. While kernel security relies fundamentally on preventing access to address information, recent attacks have shown that the hardware directly leaks this information. Strictly splitting kernel space and user space has recently been proposed as a theoretical concept to close these side channels. However, this is not trivially possible due to architectural restrictions of the x86 platform. In this paper we present KAISER, a system that overcomes limitations of x86 and provides practical kernel address isolation. We implemented our proof-of-concept on top of the Linux kernel, closing all hardware side channels on kernel address information. KAISER enforces a strict kernel and user space isolation such that the hardware does not hold any information about kernel addresses while running in user mode. We show that KAISER protects against double page fault attacks, prefetch side-channel attacks, and TSX-based side-channel attacks. Finally, we demonstrate that KAISER has a runtime overhead of only 0.28%.",
"title": ""
},
{
"docid": "f043acf163d787c4a53924515b509aba",
"text": "A two-wheeled self-balancing robot is a special type of wheeled mobile robot, its balance problem is a hot research topic due to its unstable state for controlling. In this paper, human transporter model has been established. Kinematic and dynamic models are constructed and two control methods: Proportional-integral-derivative (PID) and Linear-quadratic regulator (LQR) are implemented to test the system model in which controls of two subsystems: self-balance (preventing system from falling down when it moves forward or backward) and yaw rotation (steering angle regulation when it turns left or right) are considered. PID is used to control both two subsystems, LQR is used to control self-balancing subsystem only. By using simulation in Matlab, two methods are compared and discussed. The theoretical investigations for controlling the dynamic behavior are meaningful for design and fabrication. Finally, the result shows that LQR has a better performance than PID for self-balancing subsystem control.",
"title": ""
},
{
"docid": "56d3545ec63503b743a7a80db012d7e5",
"text": "Concrete objects used to illustrate mathematical ideas are commonly known as manipulatives. Manipulatives are ubiquitous in North American elementary classrooms in the early years, and although they can be beneficial, they do not guarantee learning. In the present study, the authors examined two factors hypothesized to impact second-graders’ learning of place value and regrouping with manipulatives: (a) the sequencing of concrete (base-ten blocks) and abstract (written symbols) representations of the standard addition algorithm; and (b) the level of instructional guidance on the structural relations between the representations. Results from a classroom experiment with second-grade students (N = 87) indicated that place value knowledge increased from pre-test to post-test when the base-ten blocks were presented before the symbols, but only when no instructional guidance was offered. When guidance was given, only students in the symbols-first condition improved their place value knowledge. Students who received instruction increased their understanding of regrouping, irrespective of representational sequence. No effects were found for iterative sequencing of concrete and abstract representations. Practical implications for teaching mathematics with manipulatives are considered.",
"title": ""
},
{
"docid": "3e5e7e38068da120639c3fcc80227bf8",
"text": "The ferric reducing antioxidant power (FRAP) assay was recently adapted to a microplate format. However, microplate-based FRAP (mFRAP) assays are affected by sample volume and composition. This work describes a calibration process for mFRAP assays which yields data free of volume effects. From the results, the molar absorptivity (ε) for the mFRAP assay was 141,698 M(-1) cm(-1) for gallic acid, 49,328 M(-1) cm(-1) for ascorbic acid, and 21,606 M(-1) cm(-1) for ammonium ferrous sulphate. The significance of ε (M(-1) cm(-1)) is discussed in relation to mFRAP assay sensitivity, minimum detectable concentration, and the dimensionless FRAP-value. Gallic acid showed 6.6 mol of Fe(2+) equivalents compared to 2.3 mol of Fe(+2) equivalents for ascorbic acid. Application of the mFRAP assay to Manuka honey samples (rated 5+, 10+, 15+, and 18+ Unique Manuka Factor; UMF) showed that FRAP values (0.54-0.76 mmol Fe(2+) per 100g honey) were strongly correlated with UMF ratings (R(2)=0.977) and total phenols content (R(2) = 0.982)whilst the UMF rating was correlated with the total phenols (R(2) = 0.999). In conclusion, mFRAP assay results were successfully standardised to yield data corresponding to 1-cm spectrophotometer which is useful for quality assurance purposes. The antioxidant capacity of Manuka honey was found to be directly related to the UMF rating.",
"title": ""
},
{
"docid": "3d01cd221fc0cfadf93d1b7295a22dad",
"text": "The multiplication of a sparse matrix by a dense vector (SpMV) is a centerpiece of scientific computing applications: it is the essential kernel for the solution of sparse linear systems and sparse eigenvalue problems by iterative methods. The efficient implementation of the sparse matrix-vector multiplication is therefore crucial and has been the subject of an immense amount of research, with interest renewed with every major new trend in high-performance computing architectures. The introduction of General-Purpose Graphics Processing Units (GPGPUs) is no exception, and many articles have been devoted to this problem.\n With this article, we provide a review of the techniques for implementing the SpMV kernel on GPGPUs that have appeared in the literature of the last few years. We discuss the issues and tradeoffs that have been encountered by the various researchers, and a list of solutions, organized in categories according to common features. We also provide a performance comparison across different GPGPU models and on a set of test matrices coming from various application domains.",
"title": ""
},
{
"docid": "eb4cac4ac288bc65df70f906b674ceb5",
"text": "LPWAN (Low Power Wide Area Networks) technologies have been attracting attention continuously in IoT (Internet of Things). LoRaWAN is present on the market as a LPWAN technology and it has features such as low power consumption, low transceiver chip cost and wide coverage area. In the LoRaWAN, end devices must perform a join procedure for participating in the network. Attackers could exploit the join procedure because it has vulnerability in terms of security. Replay attack is a method of exploiting the vulnerability in the join procedure. In this paper, we propose a attack scenario and a countermeasure against replay attack that may occur in the join request transfer process.",
"title": ""
},
{
"docid": "5d2c1095a34ee582f490f4b0392a3da0",
"text": "We study the problem of online learning to re-rank, where users provide feedback to improve the quality of displayed lists. Learning to rank has been traditionally studied in two settings. In the offline setting, rankers are typically learned from relevance labels of judges. These approaches have become the industry standard. However, they lack exploration, and thus are limited by the information content of offline data. In the online setting, an algorithm can propose a list and learn from the feedback on it in a sequential fashion. Bandit algorithms developed for this setting actively experiment, and in this way overcome the biases of offline data. But they also tend to ignore offline data, which results in a high initial cost of exploration. We propose BubbleRank, a bandit algorithm for re-ranking that combines the strengths of both settings. The algorithm starts with an initial base list and improves it gradually by swapping higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive numerical experiments on a large real-world click dataset.",
"title": ""
},
{
"docid": "2aabe5c6f1ccb8dfd241f0c208609738",
"text": "Exposing the weaknesses of neural models is crucial for improving their performance and robustness in real-world applications. One common approach is to examine how input perturbations affect the output. Our analysis takes this to an extreme on natural language processing tasks by removing as many words as possible from the input without changing the model prediction. For question answering and natural language inference, this often reduces the inputs to just one or two words, while model confidence remains largely unchanged. This is an undesireable behavior: the model gets the Right Answer for the Wrong Reason (RAWR). We introduce a simple training technique that mitigates this problem while maintaining performance on regular examples.",
"title": ""
},
{
"docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b",
"text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.",
"title": ""
},
{
"docid": "c70f8bd719642ed818efc5387ffb6b55",
"text": "In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all systems constraints using asynchronous client-server communication and provides attractive model learning properties. We call it “Draw and Discard” because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of client and server components of “Draw and Discard” and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework’s viability in practical deployments. We believe our framework is the first deployed distributed machine learning approach that operates in the local privacy model.",
"title": ""
},
{
"docid": "1fa6e8947e8bac6d0c185b2462eebb51",
"text": "In this study, a compact design of a highly efficient, and a high luminosity light-emitting diode (LED)-based visible light communications system is presented, which is capable of providing standard room illumination levels and also widecoverage Ethernet 10BASE-T optical wireless downlink communications over a distance of 2.3 m using commercial white light phosphor LEDs. The measured signal-to-noise ratio of the designed Ethernet system is >45 dB, thus allowing error-free communications with both on–off keying non-return zero and differential Manchester-coded modulation schemes at 10 Mbps. The uplink has been provided via a wireless infra-red link. A comparative study of a point-to-point wired local area network (LAN) and the optical wireless link confirms no discernible differences between them. The design of the transmitter is also shown to be scalable, with the frequency response for driving 25 LEDs being almost the same as driving a single LED. LED driving units are designed to match with the Ethernet sockets (RJ45) that conform to the existing LAN infrastructures (building and portable devices).",
"title": ""
},
{
"docid": "98e78d8fb047140a73f2a43cbe4a1c74",
"text": "Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground-up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7× speedup over the standard BWA-MEM sequence aligner running on a 56-thread dualsocket 14-core Xeon E5 server processor, while reducing power consumption by 12× and area by 5.6×.",
"title": ""
}
] |
scidocsrr
|
f7da70def48ed87aa37b7e169aa4f458
|
A Practitioners' Guide to Transfer Learning for Text Classification using Convolutional Neural Networks
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "091279f6b95594f9418591264d0d7e3c",
"text": "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several offthe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). Appearing in Proceedings of the 14 International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors.",
"title": ""
},
{
"docid": "a9d0b367d4507bbcee55f4f25071f12e",
"text": "The goal of sentence and document modeling is to accurately represent the meaning of sentences and documents for various Natural Language Processing tasks. In this work, we present Dependency Sensitive Convolutional Neural Networks (DSCNN) as a generalpurpose classification system for both sentences and documents. DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long ShortTerm Memory networks and subsequently extracting features with convolution operators. Compared with existing recursive neural models with tree structures, DSCNN does not rely on parsers and expensive phrase labeling, and thus is not restricted to sentencelevel tasks. Moreover, unlike other CNNbased models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document. Experiment results demonstrate that our approach is achieving state-ofthe-art performance on several tasks, including sentiment analysis, question type classification, and subjectivity classification.",
"title": ""
}
] |
[
{
"docid": "b5c7b9f1f57d3d79d3fc8a97eef16331",
"text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.",
"title": ""
},
{
"docid": "22992fe4908ebcf8ae9f22f3ea2d5a27",
"text": "This paper contains a comparison of common, simple thresholding methods. Basic thresholding, two-band thresholding, optimal thresholding (Calvard Riddler), adaptive thresholding, and p-tile thresholding is compared. The different thresholding methods has been implemented in the programming language c, using the image analysis library Xite. The program sources should accompany this paper. 1 Methods of thresholding Basic thresholding. Basic thresholding is done by visiting each pixel site in the image, and set the pixel to maximum value if its value is above or equal to a given threshold value and to the minimum value if the threshold value is below the pixels value. Basic thresholding is often used as a step in other thresholding algorithms. Implemented by the function threshold in thresholding.h Band thresholding. Band thresholding is similar to basic thresholding, but has two threshold values, and set the pixel site to maximum value if the pixels intensity value is between or at the threshold values, else it it set to minimum. Implemented by the function bandthresholding2 in thresholding.h P-tile thresholding. P-tile is a method for choosing the threshold value to input to the “basic thresholding” algorithm. P-tile means “Percentile”, and the threshold is chosen to be the intensity value where the cumulative sum of pixel intensities is closest to the percentile. Implemented by the function ptileThreshold in thresholding.h Optimal thresholding. Optimal thresholding selects a threshold value that is statistically optimal, based on the contents of the image. Algorithm, due to Calvard and Riddler: http://www.ifi.uio.no/forskning/grupper/dsb/Programvare/Xite/",
"title": ""
},
{
"docid": "1b7d2588cfa229aa3b2501a576be8cf2",
"text": "Hedonia (seeking pleasure and comfort) and eudaimonia (seeking to use and develop the best in oneself) are often seen as opposing pursuits, yet each may contribute to well-being in different ways. We conducted four studies (two correlational, one experience-sampling, and one intervention study) to determine outcomes associated with activities motivated by hedonic and eudaimonic aims. Overall, results indicated that: between persons (at the trait level) and within persons (at the momentary state level), hedonic pursuits related more to positive affect and carefreeness, while eudaimonic pursuits related more to meaning; between persons, eudaimonia related more to elevating experience (awe, inspiration, and sense of connection with a greater whole); within persons, hedonia related more negatively to negative affect; between and within persons, both pursuits related equally to vitality; and both pursuits showed some links with life satisfaction, though hedonia’s links were more frequent. People whose lives were high in both eudaimonia and hedonia had: higher degrees of most well-being variables than people whose lives were low in both pursuits (but did not differ in negative affect or carefreeness); higher positive affect and carefreeness than predominantly eudaimonic individuals; and higher meaning, elevating experience, and vitality than predominantly hedonic individuals. In the intervention study, hedonia produced more well-being benefits at short-term follow-up, while eudaimonia produced more at 3-month follow-up. The findings show that hedonia and eudaimonia occupy both overlapping and distinct niches within a complete picture of wellbeing, and their combination may be associated with the greatest well-being.",
"title": ""
},
{
"docid": "e1239202ebf9b2576344116e72e63a1a",
"text": "urgent need to promote Chinese in this paper we will raise the significance of keyword extraction using a new PAT-treebased approach, which is efficient in automatic keyword extraction from a set of relevant Chinese documents. This approach has been successfully applied in several IR researches, such as document classification, book indexing and relevance feedback. Many Chinese language processing applications therefore step ahead from character level to word/phrase level,",
"title": ""
},
{
"docid": "2366ab0736d4d88cd61a578b9287f9f5",
"text": "Scientific curiosity and fascination have played a key role in human research with psychedelics along with the hope that perceptual alterations and heightened insight could benefit well-being and play a role in the treatment of various neuropsychiatric disorders. These motivations need to be tempered by a realistic assessment of the hurdles to be cleared for therapeutic use. Development of a psychedelic drug for treatment of a serious psychiatric disorder presents substantial although not insurmountable challenges. While the varied psychedelic agents described in this chapter share some properties, they have a range of pharmacologic effects that are reflected in the gradation in intensity of hallucinogenic effects from the classical agents to DMT, MDMA, ketamine, dextromethorphan and new drugs with activity in the serotonergic system. The common link seems to be serotonergic effects modulated by NMDA and other neurotransmitter effects. The range of hallucinogens suggest that they are distinct pharmacologic agents and will not be equally safe or effective in therapeutic targets. Newly synthesized specific and selective agents modeled on the legacy agents may be worth considering. Defining therapeutic targets that represent unmet medical need, addressing market and commercial issues, and finding treatment settings to safely test and use such drugs make the human testing of psychedelics not only interesting but also very challenging. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.",
"title": ""
},
{
"docid": "7a945183a38a751052f5bfc80d3d3ff6",
"text": "It is time to reconsider unifying logic and memory. Since most of the transistors on this merged chip will be devoted to memory, it is called 'intelligent RAM'. IRAM is attractive because the gigabit DRAM chip has enough transistors for both a powerful processor and a memory big enough to contain whole programs and data sets. It contains 1024 memory blocks each 1kb wide. It needs more metal layers to accelerate the long lines of 600mm/sup 2/ chips. It may require faster transistors for the high-speed interface of synchronous DRAM. Potential advantages of IRAM include lower memory latency, higher memory bandwidth, lower system power, adjustable memory width and size, and less board space. Challenges for IRAM include high chip yield given processors have not been repairable via redundancy, high memory retention rates given processors usually need higher power than DRAMs, and a fast processor given logic is slower in a DRAM process.",
"title": ""
},
{
"docid": "e2a1ff393ad57ebaa9f3631e7910bab6",
"text": "We apply principles and techniques of recommendation systems to develop a predictive model of customers’ restaurant ratings. Using Yelp’s dataset, we extract collaborative and content based features to identify customer and restaurant profiles. In particular, we implement singular value decomposition, hybrid cascade of K-nearest neighbor clustering, weighted bi-partite graph projection, and several other learning algorithms. Using Root metrics Mean Squared Error and Mean Absolute Error, we then evaluate and compare the algorithms’ performances.",
"title": ""
},
{
"docid": "905027f065ca2efac792e4ec37e8e07b",
"text": "This case, written on the basis of published sources, concerns the decision facing management of Starbucks Canada about how to implement mobile payments. While Starbucks has currently been using a mobile app to accept payments through their proprietary Starbucks card, rival Tim Hortons has recently introduced a more advanced mobile payments solution and the company now has to consider its next moves. The case reviews various aspects of mobile payments technology and platforms that must be understood to make a decision about the best direction for Starbucks Canada.",
"title": ""
},
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
},
{
"docid": "30fb0e394f6c4bf079642cd492229b67",
"text": "Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act(CALEA), use a standard interface provided in network switches.\n This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap mitigation strategies that partially mitigate some of our identified attacks.",
"title": ""
},
{
"docid": "8b4fbc7fd8f41200731562a92a0c80ce",
"text": "The problem of recognizing mathematical expressions differs significantly from the recognition of standard prose. While in prose significant constraints can be put on the interpretation of a character by the characters immediately preceding and following it, few such simple constraints are present in a mathematical expression. In order to make the problem tractable, effective methods of recognizing mathematical expressions will need to put intelligent constraints on the possible interpretations. The authors present preliminary results on a system for the recognition of both handwritten and typeset mathematical expressions. While previous systems perform character recognition out of context, the current system maintains ambiguity of the characters until context can be used to disambiguate the interpretatiom In addition, the system limits the number of potentially valid interpretations by decomposing the expressions into a sequence of compatible convex regions. The system uses A-star to search for the best possible interpretation of an expression. We provide a new lower bound estimate on the cost to goal that improves performance significantly.",
"title": ""
},
{
"docid": "eb96cd38e634ddb298063dbc26163f52",
"text": "A good representation for arbitrarily complicated data should have the capability of semantic generation, clustering and reconstruction. Previous research has already achieved impressive performance on either one. This paper aims at learning a disentangled representation effective for all of them in an unsupervised way. To achieve all the three tasks together, we learn the forward and inverse mapping between data and representation on the basis of a symmetric adversarial process. In theory, we minimize the upper bound of the two conditional entropy loss between the latent variables and the observations together to achieve the cycle consistency. The newly proposed RepGAN is tested on MNIST, fashionMNIST, CelebA, and SVHN datasets to perform unsupervised or semi-supervised classification, generation and reconstruction tasks. The result demonstrates that RepGAN is able to learn a useful and competitive representation. To the author’s knowledge, our work is the first one to achieve both a high unsupervised classification accuracy and low reconstruction error on MNIST.",
"title": ""
},
{
"docid": "c3df0da617368c2472c76a6c95366338",
"text": "The infinitary propositional logic of here-and-there is important for the theory of answer set programming in view of its relation to strongly equivalent transformations of logic programs. We know a formal system axiomatizing this logic exists, but a proof in that system may include infinitely many formulas. In this note we describe a relationship between the validity of infinitary formulas in the logic of here-and-there and the provability of formulas in some finite deductive systems. This relationship allows us to use finite proofs to justify the validity of infinitary formulas.",
"title": ""
},
{
"docid": "f77107a84778699e088b94c1a75bfd78",
"text": "Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the \"state instability\" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.",
"title": ""
},
{
"docid": "ba0481ae973970f96f7bf7b1a5461f16",
"text": "WEP is a protocol for securing wireless networks. In the past years, many attacks on WEP have been published, totally breaking WEP’s security. This thesis summarizes all major attacks on WEP. Additionally a new attack, the PTW attack, is introduced, which was partially developed by the author of this document. Some advanced versions of the PTW attack which are more suiteable in certain environments are described as well. Currently, the PTW attack is fastest publicly known key recovery attack against WEP protected networks.",
"title": ""
},
{
"docid": "e79db51ac85ceafba66dddd5c038fbdf",
"text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.",
"title": ""
},
{
"docid": "ff8fd8bebb7e86b8d636ae528901b57f",
"text": "The ICH quality vision introduced the concept of quality by design (QbD), which requires a greater understanding of the raw material attributes, of process parameters, of their variability and their interactions. Microcrystalline cellulose (MCC) is one of the most important tableting excipients thanks to its outstanding dry binding properties, enabling the manufacture of tablets by direct compression (DC). DC remains the most economical technique to produce large batches of tablets, however its efficacy is directly impacted by the raw material attributes. Therefore excipients' variability and their impact on drug product performance need to be thoroughly understood. To help with this process, this review article gathers prior knowledge on MCC, focuses on its use in DC and lists some of its potential critical material attributes (CMAs).",
"title": ""
},
{
"docid": "22bb6af742b845dea702453b6b14ef3a",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
{
"docid": "a87e49bd4a49f35099171b89d278c4d9",
"text": "Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.",
"title": ""
},
{
"docid": "878d0072a8881fe010f403a30f758725",
"text": "This paper reviews the current status of Learning Analytics with special focus on their application in Serious Games. After presenting the advantages of incorporating Learning Analytics into game-based learning applications, different aspects regarding the integration process including modeling, tracing, aggregation, visualisation, analysis and employment of gameplay data are discussed. Associated challenges in this field as well as examples of best practices are also examined.",
"title": ""
}
] |
scidocsrr
|
eef6fdb81d07ee3c02cb0d082b02b290
|
A multiple-camera system calibration toolbox using a feature descriptor-based calibration pattern
|
[
{
"docid": "641f8ac3567d543dd5df40a21629fbd7",
"text": "Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.",
"title": ""
}
] |
[
{
"docid": "668e72cfb7f1dca5b097ba7df01008b0",
"text": "Detecting PE malware files is now commonly approached using statistical and machine learning models. While these models commonly use features extracted from the structure of PE files, we propose that icons from these files can also help better predict malware. We propose a new machine learning approach to extract information from icons. Our proposed approach consists of two steps: 1) extracting icon features using summary statics, a histogram of gradients (HOG), and a convolutional autoencoder, 2) clustering icons based on the extracted icon features. Using publicly available data and by using machine learning experiments, we show our proposed icon clusters significantly boost the efficacy of malware prediction models. In particular, our experiments show an average accuracy increase of 10 percent when icon clusters are used in the prediction model.",
"title": ""
},
{
"docid": "c4f706ff9ceb514e101641a816ba7662",
"text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classication systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. In this representation instances from the same class are close to each other while instances from dierent classes are further apart, resulting in statistically signicant improvement when compared to other approaches on three datasets from two dierent domains.",
"title": ""
},
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "7190c91917d1e1280010c66139837568",
"text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.",
"title": ""
},
{
"docid": "64cefd949f61afe81fbbb9ca1159dd4a",
"text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR",
"title": ""
},
{
"docid": "419f6e534c04e169a998865f71ee9488",
"text": "Stroma in the tumor microenvironment plays a critical role in cancer progression, but how it promotes metastasis is poorly understood. Exosomes are small vesicles secreted by many cell types and enable a potent mode of intercellular communication. Here, we report that fibroblast-secreted exosomes promote breast cancer cell (BCC) protrusive activity and motility via Wnt-planar cell polarity (PCP) signaling. We show that exosome-stimulated BCC protrusions display mutually exclusive localization of the core PCP complexes, Fzd-Dvl and Vangl-Pk. In orthotopic mouse models of breast cancer, coinjection of BCCs with fibroblasts dramatically enhances metastasis that is dependent on PCP signaling in BCCs and the exosome component, Cd81 in fibroblasts. Moreover, we demonstrate that trafficking in BCCs promotes tethering of autocrine Wnt11 to fibroblast-derived exosomes. This work reveals an intercellular communication pathway whereby fibroblast exosomes mobilize autocrine Wnt-PCP signaling to drive BCC invasive behavior.",
"title": ""
},
{
"docid": "b6303ae2b77ac5c187694d5320ef65ff",
"text": "Mechanisms for continuously changing or shifting a system's attack surface are emerging as game-changers in cyber security. In this paper, we propose a novel defense mechanism for protecting the identity of nodes in Mobile Ad Hoc Networks and defeat the attacker's reconnaissance efforts. The proposed mechanism turns a classical attack mechanism - Sybil - into an effective defense mechanism, with legitimate nodes periodically changing their virtual identity in order to increase the uncertainty for the attacker. To preserve communication among legitimate nodes, we modify the network layer by introducing (i) a translation service for mapping virtual identities to real identities; (ii) a protocol for propagating updates of a node's virtual identity to all legitimate nodes; and (iii) a mechanism for legitimate nodes to securely join the network. We show that the proposed approach is robust to different types of attacks, and also show that the overhead introduced by the update protocol can be controlled by tuning the update frequency.",
"title": ""
},
{
"docid": "7a8979f96411ef37c079d85c77c03bac",
"text": "Ankle-foot orthoses (AFOs) are orthotic devices that support the movement of the ankles of disabled people, for example, those suffering from hemiplegia or peroneal nerve palsy. We have developed an intelligently controllable AFO (i-AFO) in which the ankle torque is controlled by a compact magnetorheological fluid brake. Gait-control tests with the i-AFO were performed for a patient with flaccid paralysis of the ankles, who has difficulty in voluntary movement of the peripheral part of the inferior limb, and physical limitations on his ankles. By using the i-AFO, his gait control was improved by prevention of drop foot in the swing phase and by forward promotion in the stance phase.",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "2dc261ab24914dd3f865b8ede5b71be9",
"text": "Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [16]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014.",
"title": ""
},
{
"docid": "4804b3e0b8c2633ab0949bd98f900bb5",
"text": "Secure Simple Pairing (SSP), a characteristic of the Bluetooth Core Version 2.1 specification was build to address two foremost concerns amongst the Bluetooth user community: security and simplicity of the pairing process. It utilizes Elliptic Curve Diffie-Hellmen (ECDH) protocol for generating keys for the first time in Bluetooth pairing. It provides the security properties known session key security, forward security, resistance to key-compromise impersonation attack and to unknown key-share attack, key control. This paper presents the simulation and security analysis of Bluetooth pairing protocol for numeric comparison using ECDH in NS2. The protocol also employs SAGEMATH for cryptographic functions.",
"title": ""
},
{
"docid": "1499fd10ee703afd1d5b3ec35defa26b",
"text": "It is challenging to analyze the aerial locomotion of bats because of the complicated and intricate relationship between their morphology and flight capabilities. Developing a biologically inspired bat robot would yield insight into how bats control their body attitude and position through the complex interaction of nonlinear forces (e.g., aerodynamic) and their intricate musculoskeletal mechanism. The current work introduces a biologically inspired soft robot called Bat Bot (B2). The overall system is a flapping machine with 5 Degrees of Actuation (DoA). This work reports on some of the preliminary untethered flights of B2. B2 has a nontrivial morphology and it has been designed after examining several biological bats. Key DoAs, which contribute significantly to bat flight, are picked and incorporated in B2's flight mechanism design. These DoAs are: 1) forelimb flapping motion, 2) forelimb mediolateral motion (folding and unfolding) and 3) hindlimb dorsoventral motion (upward and downward movement).",
"title": ""
},
{
"docid": "f9ee82dcf1cce6d41a7f106436ee3a7d",
"text": "The Automatic Identification System (AIS) is based on VHF radio transmissions of ships' identity, position, speed and heading, in addition to other key parameters. In 2004, the Norwegian Defence Research Establishment (FFI) undertook studies to evaluate if the AIS signals could be detected in low Earth orbit. Since then, the interest in Space-Based AIS reception has grown significantly, and both public and private sector organizations have established programs to study the issue, and demonstrate such a capability in orbit. FFI is conducting two such programs. The objective of the first program was to launch a nano-satellite equipped with an AIS receiver into a near polar orbit, to demonstrate Space-Based AIS reception at high latitudes. The satellite was launched from India 12th July 2010. Even though the satellite has not finished commissioning, the receiver is operated with real-time transmission of received AIS data to the Norwegian Coastal Administration. The second program is an ESA-funded project to operate an AIS receiver on the European Columbus module of the International Space Station. Mounting of the equipment, the NORAIS receiver, was completed in April 2010. Currently, the AIS receiver has operated for more than three months, picking up several million AIS messages from more than 60 000 ship identities. In this paper, we will present experience gained with the space-based AIS systems, highlight aspects of tracking ships throughout their voyage, and comment on possible contributions to port security.",
"title": ""
},
{
"docid": "b954fa908229bdc0e514b2e21246b064",
"text": "The study of small-size animal models, such as the roundworm C. elegans, has provided great insight into several in vivo biological processes, extending from cell apoptosis to neural network computing. The physical manipulation of this micron-sized worm has always been a challenging task. Here, we discuss the applications, capabilities and future directions of a new family of worm manipulation tools, the 'worm chips'. Worm chips are microfabricated devices capable of precisely manipulating single worms or a population of worms and their environment. Worm chips pose a paradigm shift in current methodologies as they are capable of handling live worms in an automated fashion, opening up a new direction in in vivo small-size organism studies.",
"title": ""
},
{
"docid": "94c47638f35abc67c366ceb871898b86",
"text": "The past few years have seen a growing interest in the application\" of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation[7,12], automatic surveillance[9], aerial cartography\\l0,l3], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]. This paper describes an algorithm for such stereo sensing It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removmg those edge correspondences determined to be in error those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation The result of the processing is a full image array disparity map of the scene viewed. Mechanism and Constraints Edge-based stereo uses operators to reduce an image to a depiction of its intensity boundaries, which are then correlated. Area-based stereo uses area windowing mechanisms to measure local statistical properties of the intensities, which can then be correlated. The system described here deals, initially, with the former, edges, because of the: a) reduced combinatorics (there are fewer edges than pixels), b) greater accuracy (edges can be positioned to sub-pixel precision, while area positioning precision is inversely proportional to window size, and considerably poorer), and c) more realistic in variance assumptions (area-based analysis presupposes that the photometric properties of a scene arc invariant to viewing position, while edge-based analysis works with the assumption that it is the geometric properties that are invariant to viewing position). Edges are found by a convolution operator They are located at positions in the image where a change in sign of second difference in intensity occurs. A particular operator, the one described here being 1 by 7 pixels in size, measures the directional first difference in intensity at each pixel' Second differences are computed from these, and changes in sign of these second differences are used to interpolate sero crossings (i.e. peaks in first difference). Certain local properties other than position are measured and associated with each edge contrast, image slope, and intensity to either side and links are kept to nearest neighbours above, below, and to the sides. It is these properties that define an edge and provide the basis for the correlation (see the discussions in [1,2]). The correlation is & search for edge correspondence between images Fig. 2 shows the edges found in the two images of fig. 
1 with the second difference operator (note, all stereo pairs in this paper are drawn for cross-eyed viewing) Although the operator works in both horizontal and vertical directions, it only allows correlation on edges whose horizontal gradient lies above the noise one standard deviation of the first difference in intensity With no prior knowledge of the viewing situation, one could have any edge in one image matching any edge in the other. By constraining the geometry of the cameras during picture taking one can vastly limit the computation that is required in determining corresponding edges in the two images. Consider fig. 3. If two balanced, equal focal length cameras are arranged with axes parallel, then they can be conceived of as sharing a single common image plane. Any point in the scene will project to two points on that joint image plane (one through each of the two lens centers), the connection of which will produce a line parallel to the baseline between the cameras. Thus corresponding edges in the two images must lie along the tame line in the joint image plane This line is termed an epipolar line. If the baseline between the two cameras happens to be parallel to an axis of the cameras, then the correlation only need consider edges lying along corresponding lines parallel to that axis in the two images. Fig. 3 indicates this camera geometry a geometry which produces rectified The edge operator is simple, basically one dimensional, and is noteworthy only in that it it fast and fairly effective.",
"title": ""
},
{
"docid": "26fef7add5f873aa7ec08bff979ef77c",
"text": "Citation: Nermin Kamal., et al. “Restorability of Teeth: A Numerical Simplified Restorative Decision-Making Chart”. EC Dental Science 17.6 (2018): 961-967. Abstract A decision to extract or to keep a tooth was always a debatable matter in dentistry. Each dental specialty has its own perspective in that regards. Although, real life in the dental clinic showed that the decision is always multi-disciplinary, and that full awareness of all aspects should be there in order to reach to a reliable outcome. This article presents a simple evidence-based clinical chart for the judgment of restorability of teeth for better treatment planning.",
"title": ""
},
{
"docid": "8cff1a60fd0eeb60924333be5641ca83",
"text": "Since Wireless Sensor Networks (WSNs) are composed of a set of sensor nodes that limit resource constraints such as energy constraints, energy consumption in WSNs is one of the challenges of these networks. One of the solutions to reduce energy consumption in WSNs is to use clustering. In clustering, cluster members send their data to their Cluster Head (CH), and the CH after collecting the data, sends them to the Base Station (BS). In clustering, choosing CHs is very important; so many methods have proposed to choose the CH. In this study, a hesitant fuzzy method with three input parameters namely, remaining energy, distance to the BS, distance to the center of cluster is proposed for efficient cluster head selection in WSNs. We define different scenarios and simulate them, then investigate the results of simulation.",
"title": ""
},
{
"docid": "9c74b77e79217602bb21a36a5787ed59",
"text": "Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. In this paper, we propose a ship detection approach to solving the aforementioned two issues using wavelet coefficients extracted from JPEG2000 compressed domain combined with deep neural network (DNN) and extreme learning machine (ELM). Compressed domain is adopted for fast ship candidate extraction, DNN is exploited for high-level feature representation and classification, and ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy.",
"title": ""
},
{
"docid": "1e25480ef6bd5974fcd806aac7169298",
"text": "Alphabetical ciphers are being used since centuries for inducing confusion in messages, but there are some drawbacks that are associated with Classical alphabetic techniques like concealment of key and plaintext. Here in this paper we will suggest an encryption technique that is a blend of both classical encryption as well as modern technique, this hybrid technique will be superior in terms of security than average Classical ciphers.",
"title": ""
},
{
"docid": "e0eded1237c635af3c762f6bbe5d1b26",
"text": "Locating boundaries between coherent and/or repetitive segments of a time series is a challenging problem pervading many scientific domains. In this paper we propose an unsupervised method for boundary detection, combining three basic principles: novelty, homogeneity, and repetition. In particular, the method uses what we call structure features, a representation encapsulating both local and global properties of a time series. We demonstrate the usefulness of our approach in detecting music structure boundaries, a task that has received much attention in recent years and for which exist several benchmark datasets and publicly available annotations. We find our method to significantly outperform the best accuracies published so far. Importantly, our boundary approach is generic, thus being applicable to a wide range of time series beyond the music and audio domains.",
"title": ""
}
] |
scidocsrr
|
6e3a1a74ece7e0c49866c42f870f1d8d
|
Data Integration: The Current Status and the Way Forward
|
[
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "c6abeae6e9287f04b472595a47e974ad",
"text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
}
] |
[
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
{
"docid": "86ededf9b452bbc51117f5a117247b51",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "831b153045d9afc8f92336b3ba8019c6",
"text": "The progress in the field of electronics and technology as well as the processing of signals coupled with advance in the use of computer technology has given the opportunity to record and analyze the bio-electric signals from the human body in real time that requires dealing with many challenges according to the nature of the signal and its frequency. This could be up to 1 kHz, in addition to the need to transfer data from more than one channel at the same time. Moreover, another challenge is a high sensitivity and low noise measurements of the acquired bio-electric signals which may be tens of micro volts in amplitude. For these reasons, a low power wireless Electromyography (EMG) data transfer system is designed in order to meet these challenging demands. In this work, we are able to develop an EMG analogue signal processing hardware, along with computer based supporting software. In the development of the EMG analogue signal processing hardware, many important issues have been addressed. Some of these issues include noise and artifact problems, as well as the bias DC current. The computer based software enables the user to analyze the collected EMG data and plot them on graphs for visual decision making. The work accomplished in this study enables users to use the surface EMG device for recording EMG signals for various purposes in movement analysis in medical diagnosis, rehabilitation sports medicine and ergonomics. Results revealed that the proposed system transmit and receive the signal without any losing in the information of signals.",
"title": ""
},
{
"docid": "835b7a2b3d9c457a962e6b432665c7ce",
"text": "In this paper we investigate the feasibility of using synthetic data to augment face datasets. In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. Our proposed GAN allows us to augment face datasets by generating both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with augmented datasets can indeed increase the accuracy of face recognition models as compared with models trained with real images alone.",
"title": ""
},
{
"docid": "495be81dda82d3e4d90a34b6716acf39",
"text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.",
"title": ""
},
{
"docid": "6fdeeea1714d484c596468aea053848f",
"text": "Standard slow start does not work well under large bandwidthdelay product (BDP) networks. We find two causes of this problem in existing three popular operating systems, Linux, FreeBSD and Windows XP. The first cause is that because of the exponential increase of cwnd during standard slow start, heavy packet losses occur. Recovering from heavy packet losses puts extremely high load on end systems which renders the end systems completely unresponsive for a long time, resulting in a long blackout period of no transmission. This problem commonly occurs with the three operating systems. The second cause is that some of proprietary protocol optimizations applied for slow start by these operating systems to relieve the system load happen to slow down the loss recovery followed by slow start. To remedy this problem, we propose a new slow start algorithm, called Hybrid Start (HyStart) that finds a “safe” exit point of slow start at which slow start can finish and safely move to congestion avoidance without causing any heavy packet losses. HyStart uses ACK trains and RTT delay samples to detect whether (1) the forward path is congested or (2) the current size of congestion window has reached the available capacity of the forward path. HyStart is a plug-in to the TCP sender and does not require any change in TCP receivers. We implemented HyStart for TCP-NewReno and TCP-SACK in Linux and compare its performance with five different slow start schemes with the TCP receivers of the three different operating systems in the Internet and also in the lab testbeds. Our results indicate that HyStart works consistently well under diverse network environments including asymmetric links and high and low BDP networks. Especially with different operating system receivers (Windows XP and FreeBSD), HyStart improves the start-up throughput of TCP more than 2 to 3 times.",
"title": ""
},
{
"docid": "4e85039497c60f8241d598628790f543",
"text": "Knowledge management (KM) is a dominant theme in the behavior of contemporary organizations. While KM has been extensively studied in developed economies, it is much less well understood in developing economies, notably those that are characterized by different social and cultural traditions to the mainstream of Western societies. This is notably the case in China. This chapter develops and tests a theoretical model that explains the impact of leadership style and interpersonal trust on the intention of information and knowledge workers in China to share their knowledge with their peers. All the hypotheses are supported, showing that both initiating structure and consideration have a significant effect on employees’ intention to share knowledge through trust building: 28.2% of the variance in employees’ intention to share knowledge is explained. The authors discuss the theoretical contributions of the chapter, identify future research opportunities, and highlight the implications for practicing managers. DOI: 10.4018/978-1-60566-920-5.ch009",
"title": ""
},
{
"docid": "da45568bf2ec4bfe32f927eb54e78816",
"text": "We explore controller input mappings for games using a deformable prototype that combines deformation gestures with standard button input. In study one, we tested discrete gestures using three simple games. We categorized the control schemes as binary (button only), action, and navigation, the latter two named based on the game mechanics mapped to the gestures. We found that the binary scheme performed the best, but gesture-based control schemes are stimulating and appealing. Results also suggest that the deformation gestures are best mapped to simple and natural tasks. In study two, we tested continuous gestures in a 3D racing game using the same control scheme categorization. Results were mostly consistent with study one but showed an improvement in performance and preference for the action control scheme.",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "375766c4ae473312c73e0487ab57acc8",
"text": "There are three reasons why the asymmetric crooked nose is one of the greatest challenges in rhinoplasty surgery. First, the complexity of the problem is not appreciated by the patient nor understood by the surgeon. Patients often see the obvious deviation of the nose, but not the distinct differences between the right and left sides. Surgeons fail to understand and to emphasize to the patient that each component of the nose is asymmetric. Second, these deformities can be improved, but rarely made flawless. For this reason, patients are told that the result will be all \"-er words,\" better, straighter, cuter, but no \"t-words,\" there is no perfect nor straight. Most surgeons fail to realize that these cases represent asymmetric noses on asymmetric faces with the variable of ipsilateral and contralateral deviations. Third, these cases demand a wide range of sophisticated surgical techniques, some of which have a minimal margin of error. This article offers an in-depth look at analysis, preoperative planning, and surgical techniques available for dealing with the asymmetric crooked nose.",
"title": ""
},
{
"docid": "5e6175d56150485d559d0c1a963e12b8",
"text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.",
"title": ""
},
{
"docid": "571a4de4ac93b26d55252dab86e2a0d3",
"text": "Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD.",
"title": ""
},
{
"docid": "97b212bb8fde4859e368941a4e84ba90",
"text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce bettermemory thanmassed-study opportunities—turns out to be quite complicated.Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline themajor theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacingmust explain. We then propose a tentative verbal theory based on the SAM/REMmodel that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.",
"title": ""
},
{
"docid": "af0df66f001ffd9601ac3c89edf6af0f",
"text": "State-of-the-art speech recognition systems rely on fixed, handcrafted features such as mel-filterbanks to preprocess the waveform before the training pipeline. In this paper, we study end-toend systems trained directly from the raw waveform, building on two alternatives for trainable replacements of mel-filterbanks that use a convolutional architecture. The first one is inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al, 2015), and the second one by the scattering transform (Zeghidour et al., 2017). We propose two modifications to these architectures and systematically compare them to mel-filterbanks, on the Wall Street Journal dataset. The first modification is the addition of an instance normalization layer, which greatly improves on the gammatone-based trainable filterbanks and speeds up the training of the scattering-based filterbanks. The second one relates to the low-pass filter used in these approaches. These modifications consistently improve performances for both approaches, and remove the need for a careful initialization in scattering-based trainable filterbanks. In particular, we show a consistent improvement in word error rate of the trainable filterbanks relatively to comparable mel-filterbanks. It is the first time end-to-end models trained from the raw signal significantly outperform mel-filterbanks on a large vocabulary task under clean recording conditions.",
"title": ""
},
{
"docid": "a2f4005c681554cc422b11a6f5087793",
"text": "Emerged as salient in the recent home appliance consumer market is a new generation of home cleaning robot featuring the capability of Simultaneous Localization and Mapping (SLAM). SLAM allows a cleaning robot not only to selfoptimize its work paths for efficiency but also to self-recover from kidnappings for user convenience. By kidnapping, we mean that a robot is displaced, in the middle of cleaning, without its SLAM aware of where it moves to. This paper presents a vision-based kidnap recovery with SLAM for home cleaning robots, the first of its kind, using a wheel drop switch and an upwardlooking camera for low-cost applications. In particular, a camera with a wide-angle lens is adopted for a kidnapped robot to be able to recover its pose on a global map with only a single image. First, the kidnapping situation is effectively detected based on a wheel drop switch. Then, for S. Lee · S. Lee (B) School of Information and Communication Engineering and Department of Interaction Science, Sungkyunkwan University, Suwon, South Korea e-mail: lsh@ece.skku.ac.kr S. Lee e-mail: seongsu.lee@lge.com S. Lee · S. Baek Future IT Laboratory, LG Electronics Inc., Seoul, South Korea e-mail: seungmin2.baek@lge.com an efficient kidnap recovery, a coarse-to-fine approach to matching the image features detected with those associated with a large number of robot poses or nodes, built as a map in graph representation, is adopted. The pose ambiguity, e.g., due to symmetry is taken care of, if any. The final robot pose is obtained with high accuracy from the fine level of the coarse-to-fine hierarchy by fusing poses estimated from a chosen set of matching nodes. The proposed method was implemented as an embedded system with an ARM11 processor on a real commercial home cleaning robot and tested extensively. Experimental results show that the proposed method works well even in the situation in which the cleaning robot is suddenly kidnapped during the map building process.",
"title": ""
},
{
"docid": "b5b7bef8ec2d38bb2821dc380a3a49bf",
"text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.",
"title": ""
},
{
"docid": "cf8cdd70dde3f55ed097972be1d2fde7",
"text": "BACKGROUND\nText-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.\n\n\nMETHODS\nWe describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.\n\n\nRESULTS\nPerformance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.\n\n\nCONCLUSION\nWe have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm.",
"title": ""
},
{
"docid": "1b647a09085a41e66f8c1e3031793fed",
"text": "In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly over a standard sentencelevel SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). Finally, we also present some manual analysis of the translations of some concrete documents.",
"title": ""
},
{
"docid": "7f2403a849690fb12a184ec67b0a2872",
"text": "Deep reinforcement learning achieves superhuman performance in a range of video game environments, but requires that a designer manually specify a reward function. It is often easier to provide demonstrations of a target behavior than to design a reward function describing that behavior. Inverse reinforcement learning (IRL) algorithms can infer a reward from demonstrations in low-dimensional continuous control environments, but there has been little work on applying IRL to high-dimensional video games. In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator. To stabilize training, we normalize the reward and increase the size of the discriminator training dataset. We additionally learn a low-dimensional state representation using a novel autoencoder architecture tuned for video game environments. This embedding is used as input to the reward network, improving the sample efficiency of expert demonstrations. Our method achieves high-level performance on the simple Catcher video game, substantially outperforming the CNN-AIRL baseline. We also score points on the Enduro Atari racing game, but do not match expert performance, highlighting the need for further work.",
"title": ""
}
] |
scidocsrr
|
b41ab3023e56f4e02ba43c74f2495827
|
Crystallize: An Immersive, Collaborative Game for Second Language Learning
|
[
{
"docid": "ae9d14cfbc20eff358ff71322f4cc8eb",
"text": "One of the key challenges of video game design is teaching new players how to play. Although game developers frequently use tutorials to teach game mechanics, little is known about how tutorials affect game learnability and player engagement. Seeking to estimate this value, we implemented eight tutorial designs in three video games of varying complexity and evaluated their effects on player engagement and retention. The results of our multivariate study of over 45,000 players show that the usefulness of tutorials depends greatly on game complexity. Although tutorials increased play time by as much as 29% in the most complex game, they did not significantly improve player engagement in the two simpler games. Our results suggest that investment in tutorials may not be justified for games with mechanics that can be discovered through experimentation.",
"title": ""
}
] |
[
{
"docid": "f66d26379c676880ed23e6eb580c3609",
"text": "Molecular mechanics force fields are widely used in computer-aided drug design for the study of drug candidates interacting with biological systems. In these simulations, the biological part is typically represented by a specialized biomolecular force field, while the drug is represented by a matching general (organic) force field. In order to apply these general force fields to an arbitrary drug-like molecule, functionality for assignment of atom types, parameters, and partial atomic charges is required. In the present article, algorithms for the assignment of parameters and charges for the CHARMM General Force Field (CGenFF) are presented. These algorithms rely on the existing parameters and charges that were determined as part of the parametrization of the force field. Bonded parameters are assigned based on the similarity between the atom types that define said parameters, while charges are determined using an extended bond-charge increment scheme. Charge increments were optimized to reproduce the charges on model compounds that were part of the parametrization of the force field. A \"penalty score\" is returned for every bonded parameter and charge, allowing the user to quickly and conveniently assess the quality of the force field representation of different parts of the compound of interest. Case studies are presented to clarify the functioning of the algorithms and the significance of their output data.",
"title": ""
},
{
"docid": "0c42c99a4d80edf11386909a2582459a",
"text": "Robustness or stability of feature selection techniques is a topic of recent interest, and is an important issue when selected feature subsets are subsequently analysed by domain experts to gain more insight into the problem modelled. In this work, we investigate the use of ensemble feature selection techniques, where multiple feature selection methods are combined to yield more robust results. We show that these techniques show great promise for high-dimensional domains with small sample sizes, and provide more robust feature subsets than a single feature selection technique. In addition, we also investigate the effect of ensemble feature selection techniques on classification performance, giving rise to a new model selection strategy.",
"title": ""
},
{
"docid": "52844cb9280029d5ddec869945b28be2",
"text": "In this work, a new fast dynamic community detection algorithm for large scale networks is presented. Most of the previous community detection algorithms are designed for static networks. However, large scale social networks are dynamic and evolve frequently over time. To quickly detect communities in dynamic large scale networks, we proposed dynamic modularity optimizer framework (DMO) that is constructed by modifying well-known static modularity based community detection algorithm. The proposed framework is tested using several different datasets. According to our results, community detection algorithms in the proposed framework perform better than static algorithms when large scale dynamic networks are considered.",
"title": ""
},
{
"docid": "7605ae0f6c5148195caa33c54e8e7a1b",
"text": "Recently Dutch government, as well as many other governments around the world, has digitized a major portion of its public services. With this development electronic services finally arrive at the transaction level. The risks of electronic services on the transactional level are more profound than at the informational level. The public needs to trust the integrity and ‘information management capacities’ of the government or other involved organizations, as well as trust the infrastructure and those managing the infrastructure. In this process, the individual citizen will have to decide to adopt the new electronic government services by weighing its benefits and risks. In this paper, we present a study which aims to identify the role of risk perception and trust in the intention to adopt government e-services. In January 2003, a sample of 238 persons completed a questionnaire. The questionnaire tapped people’s intention to adopt e-government electronic services. Based on previous research and theories on technology acceptance, the questionnaire measured perceived usefulness of e-services, risk perception, worry, perceived behavioural control, subjective norm, trust and experience with e-services. Structural equation modelling was used to further analyze the data (Amos) and to design a theoretical model predicting the individual’s intention to adopt e-services. This analysis showed that the perceived usefulness of electronic services in general is the main determinant of the intention to use e-government services. Risk perception, personal experience, perceived behavioural control and subjective norm were found to significantly predict the perceived usefulness of electronic services in general, while trust in e-government was the main determinant of the perceived usefulness of e-government services. 2006 Elsevier Ltd. All rights reserved. 0747-5632/$ see front matter 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2005.11.003 * Corresponding author. E-mail addresses: Margot.Kuttschreuter@utwente.nl (M. Kuttschreuter), J.M.Gutteling@utwente.nl (J.M. Gutteling). M. Horst et al. / Computers in Human Behavior 23 (2007) 1838–1852 1839",
"title": ""
},
{
"docid": "102ed07783d46a8ebadcad4b30ccb3c8",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "e9b7eba9f15440ec7112a1938fad1473",
"text": "Recovery is not a new concept within mental health, although in recent times, it has come to the forefront of the policy agenda. However, there is no universal definition of recovery, and it is a contested concept. The aim of this study was to examine the British literature relating to recovery in mental health. Three contributing groups are identified: service users, health care providers and policy makers. A review of the literature was conducted by accessing all relevant published texts. A search was conducted using these terms: 'recovery', 'schizophrenia', 'psychosis', 'mental illness' and 'mental health'. Over 170 papers were reviewed. A thematic analysis was conducted. Six main themes emerged, which were examined from the perspective of the stakeholder groups. The dominant themes were identity, the service provision agenda, the social domain, power and control, hope and optimism, risk and responsibility. Consensus was found around the belief that good quality care should be made available to service users to promote recovery both as inpatient or in the community. However, the manner in which recovery was defined and delivered differed between the groups.",
"title": ""
},
{
"docid": "31bd49d9287ceaead298c4543c5b3c53",
"text": "In this paper, an experimental self-teaching system capable of superimposing audio-visual information to support the process of learning to play the guitar is proposed. Different learning scenarios have been carefully designed according to diverse levels of experience and understanding and are presented in a simple way. Learners can select between representative numbers of scenarios and physically interact with the audio-visual information in a natural way. Audio-visual information can be placed anywhere on a physical space and multiple sound sources can be mixed to experiment with compositions and compilations. To assess the effectiveness of the system some initial evaluation is conducted. Finally conclusions and future work of the system are summarized. Categories: augmented reality, information visualisation, human-computer interaction, learning.",
"title": ""
},
{
"docid": "37d2671c9d89ce5a1c1957bd1490f944",
"text": "In some of object recognition problems, labeled data may not be available for all categories. Zero-shot learning utilizes auxiliary information (also called signatures) d escribing each category in order to find a classifier that can recognize samples from categories with no labeled instance . In this paper, we propose a novel semi-supervised zero-shot learning method that works on an embedding space corresponding to abstract deep visual features. We seek a linear transformation on signatures to map them onto the visual features, such that the mapped signatures of the seen classe s are close to labeled samples of the corresponding classes and unlabeled data are also close to the mapped signatures of one of the unseen classes. We use the idea that the rich deep visual features provide a representation space in whic h samples of each class are usually condensed in a cluster. The effectiveness of the proposed method is demonstrated through extensive experiments on four public benchmarks improving the state-of-the-art prediction accuracy on thr ee of them.",
"title": ""
},
{
"docid": "35f6a4ee2364aea9861b7606c8cb7d40",
"text": "The research on robust principal component analysis (RPCA) has been attracting much attention recently. The original RPCA model assumes sparse noise, and use the L1-norm to characterize the error term. In practice, however, the noise is much more complex and it is not appropriate to simply use a certainLp-norm for noise modeling. We propose a generative RPCA model under the Bayesian framework by modeling data noise as a mixture of Gaussians (MoG). The MoG is a universal approximator to continuous distributions and thus our model is able to fit a wide range of noises such as Laplacian, Gaussian, sparse noises and any combinations of them. A variational Bayes algorithm is presented to infer the posterior of the proposed model. All involved parameters can be recursively updated in closed form. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and background subtraction.",
"title": ""
},
{
"docid": "c7f6a99df60e96c98862e366c4bc3646",
"text": "Doppio is a reconfigurable smartwatch with two touch sensitive display faces. The orientation of the top relative to the base and how the top is attached to the base, creates a very large interaction space. We define and enumerate possible configurations, transitions, and manipulations in this space. Using a passive prototype, we conduct an exploratory study to probe how people might use this style of smartwatch interaction. With an instrumented prototype, we conduct a controlled experiment to evaluate the transition times between configurations and subjective preferences. We use the combined results of these two studies to generate a set of characteristics and design considerations for applying this interaction space to smartwatch applications. These considerations are illustrated with a proof-of-concept hardware prototype demonstrating how Doppio interactions can be used for notifications, private viewing, task switching, temporary information access, application launching, application modes, input, and sharing the top.",
"title": ""
},
{
"docid": "6bbc32ecaf54b9a51442f92edbc2604a",
"text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.",
"title": ""
},
{
"docid": "2dc23ce5b1773f12905ebace6ef221a5",
"text": "With the increasing demand for higher data rates and more reliable service capabilities for wireless devices, wireless service providers are facing an unprecedented challenge to overcome a global bandwidth shortage. Early global activities on beyond fourth-generation (B4G) and fifth-generation (5G) wireless communication systems suggest that millimeter-wave (mmWave) frequencies are very promising for future wireless communication networks due to the massive amount of raw bandwidth and potential multigigabit-per-second (Gb/s) data rates [1]?[3]. Both industry and academia have begun the exploration of the untapped mmWave frequency spectrum for future broadband mobile communication networks. In April 2014, the Brooklyn 5G Summit [4], sponsored by Nokia and the New York University (NYU) WIRELESS research center, drew global attention to mmWave communications and channel modeling. In July 2014, the IEEE 802.11 next-generation 60-GHz study group was formed to increase the data rates to over 20 Gb/s in the unlicensed 60-GHz frequency band while maintaining backward compatibility with the emerging IEEE 802.11ad wireless local area network (WLAN) standard [5].",
"title": ""
},
{
"docid": "34989468dace8410e9b7b68f0fd78a96",
"text": "A novel coplanar waveguide (CPW)-fed triband planar monopole antenna is presented for WLAN/WiMAX applications. The monopole antenna is printed on a substrate with two rectangular corners cut off. The radiator of the antenna is very compact with an area of only 3.5 × 17 mm2, on which two inverted-L slots are etched to achieve three radiating elements so as to produce three resonant modes for triband operation. With simple structure and small size, the measured and simulated results show that the proposed antenna has 10-dB impedance bandwidths of 120 MHz (2.39-2.51 GHz), 340 MHz (3.38-3.72 GHz), and 1450 MHz (4.79-6.24 GHz) to cover all the 2.4/5.2/5.8-GHz WLAN and the 3.5/5.5-GHz WiMAX bands, and good dipole-like radiation characteristics are obtained over the operating bands.",
"title": ""
},
{
"docid": "5a58ab9fe86a4d0693faacfc238fb35c",
"text": "Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "02d8ad18b07d08084764d124dc74a94c",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "46563eaa34dd45c861c774bd9f13d1b6",
"text": "The energy constraint is one of the inherent defects of the Wireless Sensor Networks (WSNs). How to prolong the lifespan of the network has attracted more and more attention. Numerous achievements have emerged successively recently. Among these mechanisms designing routing protocols is one of the most promising ones owing to the large amount of energy consumed for data transmission. The background and related works are described firstly in detail in this paper. Then a game model for selecting the Cluster Head is presented. Subsequently, a novel routing protocol named Game theory based Energy Efficient Clustering routing protocol (GEEC) is proposed. GEEC, which belongs to a kind of clustering routing protocols, adopts evolutionary game theory mechanism to achieve energy exhaust equilibrium as well as lifetime extension at the same time. Finally, extensive simulation experiments are conducted. The experimental results indicate that a significant improvement in energy balance as well as in energy conservation compared with other two kinds of well-known clustering routing protocols is achieved.",
"title": ""
},
{
"docid": "027a5da45d41ce5df40f6b342a9e4485",
"text": "GPipe is a scalable pipeline parallelism library that enables learning of giant deep neural networks. It partitions network layers across accelerators and pipelines execution to achieve high hardware utilization. It leverages recomputation to minimize activation memory usage. For example, using partitions over 8 accelerators, it is able to train networks that are 25× larger, demonstrating its scalability. It also guarantees that the computed gradients remain consistent regardless of the number of partitions. It achieves an almost linear speedup without any changes in the model parameters: when using 4× more accelerators, training the same model is up to 3.5× faster. We train a 557 million parameters AmoebaNet model and achieve a new state-ofthe-art 84.3% top-1 / 97.0% top-5 accuracy on ImageNet 2012 dataset. Finally, we use this learned model to finetune multiple popular image classification datasets and obtain competitive results, including pushing the CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.",
"title": ""
},
{
"docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3",
"text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.",
"title": ""
}
] |
scidocsrr
|
f3ffbaafd9085526f906a7fb90ac3558
|
Fast camera calibration for the analysis of sport sequences
|
[
{
"docid": "cfadde3d2e6e1d6004e6440df8f12b5a",
"text": "We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses the line markings of the court for calibration and it can be applied to a variety of different sports since the geometric model of the court can be specified by the user. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture restrictions. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the following input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.",
"title": ""
}
] |
[
{
"docid": "36759b5da620f3b1c870c65e16aa2b44",
"text": "Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, and testing, for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and datastructures, and their compliance to a common specification language. This foundational article presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements.",
"title": ""
},
{
"docid": "76cedf5536bd886b5838c2a5e027de79",
"text": "This article reports a meta-analysis of personality-academic performance relationships, based on the 5-factor model, in which cumulative sample sizes ranged to over 70,000. Most analyzed studies came from the tertiary level of education, but there were similar aggregate samples from secondary and tertiary education. There was a comparatively smaller sample derived from studies at the primary level. Academic performance was found to correlate significantly with Agreeableness, Conscientiousness, and Openness. Where tested, correlations between Conscientiousness and academic performance were largely independent of intelligence. When secondary academic performance was controlled for, Conscientiousness added as much to the prediction of tertiary academic performance as did intelligence. Strong evidence was found for moderators of correlations. Academic level (primary, secondary, or tertiary), average age of participant, and the interaction between academic level and age significantly moderated correlations with academic performance. Possible explanations for these moderator effects are discussed, and recommendations for future research are provided.",
"title": ""
},
{
"docid": "d5f43b7405e08627b7f0930cc1ddd99e",
"text": "Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent from the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical information. We present our definition of CRDs, and describe a clone tracking system capable of producing CRDs from the output of different clone detection tools, notifying developers of modifications to clone regions, and supporting updates to the documented clone relationships. We evaluated the performance and usefulness of our approach across three clone detection tools and five subject systems, and the results indicate that CRDs are a practical and robust representation for tracking code clones in evolving software.",
"title": ""
},
{
"docid": "c2334008c6a07cbd3b3d89dc01ddc02d",
"text": "Four Cucumber mosaic virus (CMV) (CMV-HM 1–4) and nine Tomato mosaic virus (ToMV) (ToMV AH 1–9) isolates detected in tomato samples collected from different governorates in Egypt during 2014, were here characterized. According to the coat protein gene sequence and to the complete nucleotide sequence of total genomic RNA1, RNA2 and RNA3 of CMV-HM3 the new Egyptian isolates are related to members of the CMV subgroup IB. The nine ToMV Egyptian isolates were characterized by sequence analysis of the coat protein and the movement protein genes. All isolates were grouped within the same branch and showed high relatedness to all considered isolates (98–99%). Complete nucleotide sequence of total genomic RNA of ToMV AH4 isolate was obtained and its comparison showed a closer degree of relatedness to isolate 99–1 from the USA (99%). To our knowledge, this is the first report of CMV isolates from subgroup IB in Egypt and the first full length sequencing of an ToMV Egyptian isolate.",
"title": ""
},
{
"docid": "0ce9e025b0728adc245759580330e7f5",
"text": "We present a unified framework for dense correspondence estimation, called Homography flow, to handle large photometric and geometric deformations in an efficient manner. Our algorithm is inspired by recent successes of the sparse to dense framework. The main intuition is that dense flows located in same plane can be represented as a single geometric transform. Tailored to dense correspondence task, the Homography flow differs from previous methods in the flow domain clustering and the trilateral interpolation. By estimating and propagating sparsely estimated transforms, dense flow field is estimated with very low computation time. The Homography flow highly improves the performance of dense correspondences, especially in flow discontinuous area. Experimental results on challenging image pairs show that our approach suppresses the state-of-the-art algorithms in both accuracy and computation time.",
"title": ""
},
{
"docid": "d94a4f07939c0f420787b099336f426b",
"text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.",
"title": ""
},
{
"docid": "13cfc33bd8611b3baaa9be37ea9d627e",
"text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.",
"title": ""
},
{
"docid": "03625364ccde0155f2c061b47e3a00b8",
"text": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.’s system (Pantel et al., 2007).",
"title": ""
},
{
"docid": "779fba8ff7f59d3571cfe4c1803671e3",
"text": "This paper describes the design of an indirect current feedback Instrumentation Amplifier (IA). Transistor sizing plays a major role in achieving the desired gain, the Common Mode Rejection Ratio (CMRR) and the bandwidth of the Instrumentation Amplifier. A gm/ID based design methodology is employed to design the functional blocks of the IA. It links the design variables of each functional block to its target specifications and is used to develop design charts that are used to accurately size the transistors. The IA thus designed achieves a voltage gain of 31dB with a bandwidth 1.2MHz and a CMRR of 87dB at 1MHz. The circuit design is carried out using 0.18μm CMOS process.",
"title": ""
},
{
"docid": "b1a508ecaa6fef0583b430fc0074af74",
"text": "Recent past has seen a lot of developments in the field of image-based dietary assessment. Food image classification and recognition are crucial steps for dietary assessment. In the last couple of years, advancements in the deep learning and convolutional neural networks proved to be a boon for the image classification and recognition tasks, specifically for food recognition because of the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition using a GoogLeNet model based on deep convolutional neural network. The experiments were conducted on two image datasets created by our own, where the images were collected from existing image datasets, social media, and imaging devices such as smart phone and wearable cameras. Experimental results show a high accuracy of 99.2% on the food/non-food classification and 83.6% on the food category recognition.",
"title": ""
},
{
"docid": "755820a345dea56c4631ee14467e2e41",
"text": "This paper presents a novel six-axis force/torque (F/T) sensor for robotic applications that is self-contained, rugged, and inexpensive. Six capacitive sensor cells are adopted to detect three normal and three shear forces. Six sensor cell readings are converted to F/T information via calibrations and transformation. To simplify the manufacturing processes, a sensor design with parallel and orthogonal arrangements of sensing cells is proposed, which achieves the large improvement of the sensitivity. Also, the signal processing is realized with a single printed circuit board and a ground plate, and thus, we make it possible to build a lightweight six-axis F/T sensor with simple manufacturing processes at extremely low cost. The sensor is manufactured and its performances are validated by comparing them with a commercial six-axis F/T sensor.",
"title": ""
},
{
"docid": "a07338beeb3246954815e0389c59ae29",
"text": "We have proposed gate-all-around Silicon nanowire MOSFET (SNWFET) on bulk Si as an ultimate transistor. Well controlled processes are used to achieve gate length (LG) of sub-10nm and narrow nanowire widths. Excellent performance with reasonable VTH and short channel immunity are achieved owing to thin nanowire channel, self-aligned gate, and GAA structure. Transistor performance with gate length of 10nm has been demonstrated and nanowire size (DNW) dependency of various electrical characteristics has been investigated. Random telegraph noise (RTN) in SNWFET is studied as well.",
"title": ""
},
{
"docid": "17f0fbd3ab3b773b5ef9d636700b5af6",
"text": "Motor sequence learning is a process whereby a series of elementary movements is re-coded into an efficient representation for the entire sequence. Here we show that human subjects learn a visuomotor sequence by spontaneously chunking the elementary movements, while each chunk acts as a single memory unit. The subjects learned to press a sequence of 10 sets of two buttons through trial and error. By examining the temporal patterns with which subjects performed a visuomotor sequence, we found that the subjects performed the 10 sets as several clusters of sets, which were separated by long time gaps. While the overall performance time decreased by repeating the same sequence, the clusters became clearer and more consistent. The cluster pattern was uncorrelated with the distance of hand movements and was different across subjects who learned the same sequence. We then split a learned sequence into three segments, while preserving or destroying the clusters in the learned sequence, and shuffled the segments. The performance on the shuffled sequence was more accurate and quicker when the clusters in the original sequence were preserved than when they were destroyed. The results suggest that each cluster is processed as a single memory unit, a chunk, and is necessary for efficient sequence processing. A learned visuomotor sequence is hierarchically represented as chunks that contain several elementary movements. We also found that the temporal patterns of sequence performance transferred from the nondominant to dominant hand, but not vice versa. This may suggest a role of the dominant hemisphere in storage of learned chunks. Together with our previous unit-recording and imaging studies that used the same learning paradigm, we predict specific roles of the dominant parietal area, basal ganglia, and presupplementary motor area in the chunking.",
"title": ""
},
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
},
{
"docid": "84e71d32b1f40eb59d63a0ec6324d79b",
"text": "Typically a classifier trained on a given dataset (source domain) does not performs well if it is tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well explored topic in computer vision, it is largely ignored in robotic vision where usually visual classification methods are trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario also in this context. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Network (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D over the following datasets, designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progresses have been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that, training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. Also the best way to combine depth with RGB informations to improve the performance is a point that needs to be investigated more.",
"title": ""
},
{
"docid": "37d353f5b8f0034209f75a3848580642",
"text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. Other key factors that differentiate NR from the current data repositories is the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.",
"title": ""
},
{
"docid": "4c9313e27c290ccc41f3874108593bf6",
"text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.",
"title": ""
},
{
"docid": "e2bdc37afbe20e8281aaae302ed4cd7e",
"text": "Some obtained results related to an ongoing project which aims at providing a comprehensive approach for implementation of Internet of Things concept into the military domain are presented. A comprehensive approach to fault diagnosis within the Internet of Military Things was outlined. Particularly a method of fault detection which is based on a network partitioning into clusters was proposed. Also, some solutions proposed for the experimentally constructed network called EFTSN was conducted.",
"title": ""
},
{
"docid": "112931102c7c68e6e1e056f18593dbbc",
"text": "Graphical passwords were proposed as an alternative to overcome the inherent limitations of text-based passwords, inspired by research that shows that the graphical memory of humans is particularly well developed. A graphical password scheme that has been widely adopted is the Android Unlock Pattern, a special case of the Pass-Go scheme with grid size restricted to 3x3 points and restricted stroke count.\n In this paper, we study the security of Android unlock patterns. By performing a large-scale user study, we measure actual user choices of patterns instead of theoretical considerations on password spaces. From this data we construct a model based on Markov chains that enables us to quantify the strength of Android unlock patterns. We found empirically that there is a high bias in the pattern selection process, e.g., the upper left corner and three-point long straight lines are very typical selection strategies. Consequently, the entropy of patterns is rather low, and our results indicate that the security offered by the scheme is less than the security of only three digit randomly-assigned PINs for guessing 20% of all passwords (i.e., we estimate a partial guessing entropy G_0.2 of 9.10 bit).\n Based on these insights, we systematically improve the scheme by finding a small, but still effective change in the pattern layout that makes graphical user logins substantially more secure. By means of another user study, we show that some changes improve the security by more than doubling the space of actually used passwords (i.e., increasing the partial guessing entropy G_0.2 to 10.81 bit).",
"title": ""
},
{
"docid": "ef3598b448179b7a788444193bc77d62",
"text": "The human visual system has the remarkably ability to be able to effortlessly learn novel concepts from only a few examples. Mimicking the same behavior on machine learning vision systems is an interesting and very challenging research problem with many practical advantages on real world vision applications. In this context, the goal of our work is to devise a few-shot visual learning system that during test time it will be able to efficiently learn novel categories from only a few training data while at the same time it will not forget the initial categories on which it was trained (here called base categories). To achieve that goal we propose (a) to extend an object recognition system with an attention based few-shot classification weight generator, and (b) to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors. The latter, apart from unifying the recognition of both novel and base categories, it also leads to feature representations that generalize better on \"unseen\" categories. We extensively evaluate our approach on Mini-ImageNet where we manage to improve the prior state-of-the-art on few-shot recognition (i.e., we achieve 56.20% and 73.00% on the 1-shot and 5-shot settings respectively) while at the same time we do not sacrifice any accuracy on the base categories, which is a characteristic that most prior approaches lack. Finally, we apply our approach on the recently introduced few-shot benchmark of Bharath and Girshick [4] where we also achieve state-of-the-art results.",
"title": ""
}
] |
scidocsrr
|
f8ff4af53146346ade9faab31db52040
|
A comparative study of control techniques for three phase PWM rectifier
|
[
{
"docid": "714641a148e9a5f02bb13d5485203d70",
"text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltagesource pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logicbased controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.",
"title": ""
}
] |
[
{
"docid": "08affba6a0b34574e9532bb75b79c74f",
"text": "In general, the position control of electro-hydraulic actuator (EHA) systems is difficult because of system uncertainties such as Coulomb friction, viscous friction, and pump leakage coefficient. Even if the exact values of the friction and pump leakage coefficient may be obtained through experiment, the identification procedure is very complicated and requires much effort. In addition, the identified values may not guarantee the reliability of systems because of the variation of the operating condition. Therefore, in this paper, an adaptive backstepping control (ABSC) scheme is proposed to overcome the problem of system uncertainties effectively and to improve the tracking performance of EHA systems. In order to implement the proposed control scheme, the system uncertainties in EHA systems are considered as only one term. In addition, in order to obtain the virtual controls for stabilizing the closed-loop system, the update rule for the system uncertainty term is induced by the Lyapunov control function (LCF). To verify the performance and robustness of the proposed control system, computer simulation of the proposed control system is executed first and the proposed control scheme is implemented for an EHA system by experiment. From the computer simulation and experimental results, it was found that the ABSC system produces the desired tracking performance and has robustness to the system uncertainties of EHA systems.",
"title": ""
},
{
"docid": "e9e11d96e26708c380362847094113db",
"text": "Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.",
"title": ""
},
{
"docid": "97dfc67c63e7e162dd06d5cb2959912a",
"text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.",
"title": ""
},
{
"docid": "d12d475dc72f695d3aecfb016229da19",
"text": "Following the increasing popularity of the mobile ecosystem, cybercriminals have increasingly targeted mobile ecosystems, designing and distributing malicious apps that steal information or cause harm to the device's owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach.To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls' sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and usergenerated inputs. We find that combining both static and dynamic analysis yields the best performance, with $F -$measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and investigate the reasons for inconsistent misclassifications across methods.",
"title": ""
},
{
"docid": "c906d026937ebea3525f5dee5d923335",
"text": "VGGNets have turned out to be effective for object recognition in still images. However, it is unable to yield good performance by directly adapting the VGGNet models trained on the ImageNet dataset for scene recognition. This report describes our implementation of training the VGGNets on the large-scale Places205 dataset. Specifically, we train three VGGNet models, namely VGGNet-11, VGGNet-13, and VGGNet-16, by using a Multi-GPU extension of Caffe toolbox with high computational efficiency. We verify the performance of trained Places205-VGGNet models on three datasets: MIT67, SUN397, and Places205. Our trained models achieve the state-of-the-art performance o n these datasets and are made public available 1.",
"title": ""
},
{
"docid": "71c94681f64ad6b697a9370691db9e9e",
"text": "The construction of a depression rating scale designed to be particularly sensitive to treatment effects is described. Ratings of 54 English and 52 Swedish patients on a 65 item comprehensive psychopathology scale were used to identify the 17 most commonly occurring symptoms in primary depressive illness in the combined sample. Ratings on these 17 items for 64 patients participating in studies of four different antidepressant drugs were used to create a depression scale consisting of the 10 items which showed the largest changes with treatment and the highest correlation to overall change. The inner-rater reliability of the new depression scale was high. Scores on the scale correlated significantly with scores on a standard rating scale for depression, the Hamilton Rating Scale (HRS), indicating its validity as a general severity estimate. Its capacity to differentiate between responders and non-responders to antidepressant treatment was better than the HRS, indicating greater sensitivity to change. The practical and ethical implications in terms of smaller sample sizes in clinical trials are discussed.",
"title": ""
},
{
"docid": "d6039a3f998b33c08b07696dfb1c2ca9",
"text": "In this paper, we propose a platform surveillance monitoring system using image processing technology for passenger safety in railway station. The proposed system monitors almost entire length of the track line in the platform by using multiple cameras, and determines in real-time whether a human or dangerous obstacle is in the preset monitoring area by using image processing technology. According to the experimental results, we verity system performance in real condition. Detection of train state and object is conducted robustly by using proposed image processing algorithm. Moreover, to deal with the accident immediately, the system provides local station, central control room and train with the video information and alarm message.",
"title": ""
},
{
"docid": "dc6fe019c28ed63f435f295534f944a1",
"text": "Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. Already in the pioneering days of computational models of neural cognition, the question was raised how symbolic knowledge can be represented and dealt with within neural networks. The landmark paper [McCulloch and Pitts, 1943] provides fundamental insights how propositional logic can be processed using simple artificial neural networks. Within the following decades, however, the topic did not receive much attention as research in artificial intelligence initially focused on purely symbolic approaches. The power of machine learning using artificial neural networking was not recognized until the 80s, when in particular the backpropagation algorithm [Rumelhart et al., 1986] made connectionist learning feasible and applicable in practice. These advances indicated a breakthrough in machine learning which quickly led to industrial-strength applications in areas such as image analysis, speech and pattern recognition, investment analysis, engine monitoring, fault diagnosis, etc. During a training process from raw data, artificial neural networks acquire expert knowledge about the problem domain, and the ability to generalize this knowledge to similar but previously unencountered situations in a way which often surpasses the abilities of human experts. The knowledge obtained during the training process, however, is hidden within",
"title": ""
},
{
"docid": "6286480f676c75e1cac4af9329227258",
"text": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a modelbased route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way— bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.",
"title": ""
},
{
"docid": "0e2fdb9fc054e47a3f0b817f68de68b1",
"text": "Recent regulatory guidance suggests that drug metabolites identified in human plasma should be present at equal or greater levels in at least one of the animal species used in safety assessments (MIST). Often synthetic standards for the metabolites do not exist, thus this has introduced multiple challenges regarding the quantitative comparison of metabolites between human and animals. Various bioanalytical approaches are described to evaluate the exposure of metabolites in animal vs. human. A simple LC/MS/MS peak area ratio comparison approach is the most facile and applicable approach to make a first assessment of whether metabolite exposures in animals exceed that in humans. In most cases, this measurement is sufficient to demonstrate that an animal toxicology study of the parent drug has covered the safety of the human metabolites. Methods whereby quantitation of metabolites can be done in the absence of chemically synthesized authentic standards are also described. Only in rare cases, where an actual exposure measurement of a metabolite is needed, will a validated or qualified method requiring a synthetic standard be needed. The rigor of the bioanalysis is increased accordingly based on the results of animal:human ratio measurements. This data driven bioanalysis strategy to address MIST issues within standard drug development processes is described.",
"title": ""
},
{
"docid": "035696f6f2e79cb226c6bc45991cbb5a",
"text": "The vast amount of research over the past decades has significantly added to our knowledge of phantom limb pain. Multiple factors including site of amputation or presence of preamputation pain have been found to have a positive correlation with the development of phantom limb pain. The paradigms of proposed mechanisms have shifted over the past years from the psychogenic theory to peripheral and central neural changes involving cortical reorganization. More recently, the role of mirror neurons in the brain has been proposed in the generation of phantom pain. A wide variety of treatment approaches have been employed, but mechanism-based specific treatment guidelines are yet to evolve. Phantom limb pain is considered a neuropathic pain, and most treatment recommendations are based on recommendations for neuropathic pain syndromes. Mirror therapy, a relatively recently proposed therapy for phantom limb pain, has mixed results in randomized controlled trials. Most successful treatment outcomes include multidisciplinary measures. This paper attempts to review and summarize recent research relative to the proposed mechanisms of and treatments for phantom limb pain.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "2e8e9401e76bfdb2b121fbc7da29b2c1",
"text": "BACKGROUND\nMagnetic resonance (MR) imaging has established its usefulness in diagnosing hamstring muscle strain and identifying features correlating with the duration of rehabilitation in athletes; however, data are currently lacking that may predict which imaging parameters may be predictive of a repeat strain.\n\n\nPURPOSE\nThis study was conducted to identify whether any MR imaging-identifiable parameters are predictive of athletes at risk of sustaining a recurrent hamstring strain in the same playing season.\n\n\nSTUDY DESIGN\nCohort study; Level of evidence, 3.\n\n\nMETHODS\nForty-one players of the Australian Football League who sustained a hamstring injury underwent MR examination within 3 days of injury between February and August 2002. The imaging parameters measured were the length of injury, cross-sectional area, the specific muscle involved, and the location of the injury within the muscle-tendon unit. Players who suffered a repeat injury during the same season were reimaged, and baseline and repeat injury measurements were compared. Comparison was also made between this group and those who sustained a single strain.\n\n\nRESULTS\nForty-one players sustained hamstring strains that were positive on MR imaging, with 31 injured once and 10 suffering a second injury. The mean length of hamstring muscle injury for the isolated group was 83.4 mm, compared with 98.7 mm for the reinjury group (P = .35). In the reinjury group, the second strain was also of greater length than the original (mean, 107.5 mm; P = .07). Ninety percent of players sustaining a repeat injury demonstrated an injury length greater than 60 mm, compared with only 58% in the single strain group (P = .01). Only 7% of players (1 of 14) with a strain <60 mm suffered a repeat injury. Of the 27 players sustaining a hamstring strain >60 mm, 33% (9 of 27) suffered a repeat injury. Of all the parameters assessed, only a history of anterior cruciate ligament sprain was a statistically significant predictor for suffering a second strain during the same season of competition.\n\n\nCONCLUSION\nA history of anterior cruciate ligament injury was the only statistically significant risk factor for a recurrent hamstring strain in our study. Of the imaging parameters, the MR length of a strain had the strongest correlation association with a repeat hamstring strain and therefore may assist in identifying which athletes are more likely to suffer further reinjury.",
"title": ""
},
{
"docid": "f9e273248ed6e73766f1fc5ba1ecdfda",
"text": "Rapid, vertically climbing cockroaches produced climbing dynamics similar to geckos, despite differences in attachment mechanism, ;foot or toe' morphology and leg number. Given the common pattern in such diverse species, we propose the first template for the dynamics of rapid, legged climbing analogous to the spring-loaded, inverted pendulum used to characterize level running in a diversity of pedestrians. We measured single leg wall reaction forces and center of mass dynamics in death-head cockroaches Blaberus discoidalis, as they ascended a three-axis force plate oriented vertically and coated with glass beads to aid attachment. Cockroaches used an alternating tripod gait during climbs at 19.5+/-4.2 cm s(-1), approximately 5 body lengths s(-1). Single-leg force patterns differed significantly from level running. During vertical climbing, all legs generated forces to pull the animal up the plate. Front and middle legs pulled laterally toward the midline. Front legs pulled the head toward the wall, while hind legs pushed the abdomen away. These single-leg force patterns summed to generate dynamics of the whole animal in the frontal plane such that the center of mass cyclically accelerated up the wall in synchrony with cyclical side-to-side motion that resulted from alternating net lateral pulling forces. The general force patterns used by cockroaches and geckos have provided biological inspiration for the design of a climbing robot named RiSE (Robots in Scansorial Environments).",
"title": ""
},
{
"docid": "2f9e5a34137fe7871c9388078c57dc8e",
"text": "This paper presents a new model of measuring semantic similarity in the taxonomy of WordNet. The model takes the path length between two concepts and IC value of each concept as its metric, furthermore, the weight of two metrics can be adapted artificially. In order to evaluate our model, traditional and widely used datasets are used. Firstly, coefficients of correlation between human ratings of similarity and six computational models are calculated, the result shows our new model outperforms their homologues. Then, the distribution graphs of similarity value of 65 word pairs are discussed our model having no faulted zone more centralized than other five methods. So our model can make up the insufficient of other methods which only using one metric(path length or IC value) in their model.",
"title": ""
},
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "7afe4444a805f1994a40f98e01908509",
"text": "It is well known that CMOS scaling trends are now accompanied by less desirable byproducts such as increased energy dissipation. To combat the aforementioned challenges, solutions are sought at both the device and architectural levels. With this context, this work focuses on embedding a low voltage device, a Tunneling Field Effect Transistor (TFET) within a Cellular Neural Network (CNN) -- a low power analog computing architecture. Our study shows that TFET-based CNN systems, aside from being fully functional, also provide significant power savings when compared to the conventional resistor-based CNN. Our initial studies suggest that power savings are possible by carefully engineering lower voltage, lower current TFET devices without sacrificing performance. Moreover, TFET-based CNN reduces implementation footprints by eliminating the hardware required to realize output transfer functions. Application dynamics are verified through simulations. We conclude the paper with a discussion of desired device characteristics for CNN architectures with enhanced functionality.",
"title": ""
},
{
"docid": "f90e6d3084733994935fcbee64286aec",
"text": "To find the position of an acoustic source in a room, typically, a set of relative delays among different microphone pairs needs to be determined. The generalized cross-correlation (GCC) method is the most popular to do so and is well explained in a landmark paper by Knapp and Carter. In this paper, the idea of cross-correlation coefficient between two random signals is generalized to the multichannel case by using the notion of spatial prediction. The multichannel spatial correlation matrix is then deduced and its properties are discussed. We then propose a new method based on the multichannel spatial correlation matrix for time delay estimation. It is shown that this new approach can take advantage of the redundancy when more than two microphones are available and this redundancy can help the estimator to better cope with noise and reverberation.",
"title": ""
},
{
"docid": "437457e673df18fc69d57c2c16a992fc",
"text": "Human-associated microbial communities vary across individuals: possible contributing factors include (genetic) relatedness, diet, and age. However, our surroundings, including individuals with whom we interact, also likely shape our microbial communities. To quantify this microbial exchange, we surveyed fecal, oral, and skin microbiota from 60 families (spousal units with children, dogs, both, or neither). Household members, particularly couples, shared more of their microbiota than individuals from different households, with stronger effects of co-habitation on skin than oral or fecal microbiota. Dog ownership significantly increased the shared skin microbiota in cohabiting adults, and dog-owning adults shared more 'skin' microbiota with their own dogs than with other dogs. Although the degree to which these shared microbes have a true niche on the human body, vs transient detection after direct contact, is unknown, these results suggest that direct and frequent contact with our cohabitants may significantly shape the composition of our microbial communities. DOI:http://dx.doi.org/10.7554/eLife.00458.001.",
"title": ""
},
{
"docid": "d7582552589626891258f52b0d750915",
"text": "Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks.",
"title": ""
}
] |
scidocsrr
|
94e2c515da44e97d8b7db8821ebcb2e4
|
Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions.
|
[
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "6a4437fa8a5a764d99ed5471401f5ce4",
"text": "There is disagreement in the literature about the exact nature of the phenomenon of empathy. There are emotional, cognitive, and conditioning views, applying in varying degrees across species. An adequate description of the ultimate and proximate mechanism can integrate these views. Proximately, the perception of an object's state activates the subject's corresponding representations, which in turn activate somatic and autonomic responses. This mechanism supports basic behaviors (e.g., alarm, social facilitation, vicariousness of emotions, mother-infant responsiveness, and the modeling of competitors and predators) that are crucial for the reproductive success of animals living in groups. The Perception-Action Model (PAM), together with an understanding of how representations change with experience, can explain the major empirical effects in the literature (similarity, familiarity, past experience, explicit teaching, and salience). It can also predict a variety of empathy disorders. The interaction between the PAM and prefrontal functioning can also explain different levels of empathy across species and age groups. This view can advance our evolutionary understanding of empathy beyond inclusive fitness and reciprocal altruism and can explain different levels of empathy across individuals, species, stages of development, and situations.",
"title": ""
}
] |
[
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "6a6238bb56eacc7d8ecc8f15f753b745",
"text": "Privacy-preservation has emerged to be a major concern in devising a data mining system. But, protecting the privacy of data mining input does not guarantee a privacy-preserved output. This paper focuses on preserving the privacy of data mining output and particularly the output of classification task. Further, instead of static datasets, we consider the classification of continuously arriving data streams: a rapidly growing research area. Due to the challenges of data stream classification such as vast volume, a mixture of labeled and unlabeled instances throughout the stream and timely classifier publication, enforcing privacy-preservation techniques becomes even more challenging. In order to achieve this goal, we propose a systematic method for preserving output-privacy in data stream classification that addresses several applications like loan approval, credit card fraud detection, disease outbreak or biological attack detection. Specifically, we propose an algorithm named Diverse and k-Anonymized HOeffding Tree (DAHOT) that is an amalgamation of popular data stream classification algorithm Hoeffding tree and a variant of k-anonymity and l-diversity principles. The empirical results on real and synthetic data streams verify the effectiveness of DAHOT as compared to its bedrock Hoeffding tree and two other techniques, one that learns sanitized decision trees from sampled data stream and other technique that uses ensemble-based classification. DAHOT guarantees to preserve the private patterns while classifying the data streams accurately.",
"title": ""
},
{
"docid": "d7aac1208aa2ef63ed9a4ef5b67d8017",
"text": "We contrast two theoretical approaches to social influence, one stressing interpersonal dependence, conceptualized as normative and informational influence (Deutsch & Gerard, 1955), and the other stressing group membership, conceptualized as self-categorization and referent informational influence (Turner, Hogg, Oakes, Reicher & Wetherell, 1987). We argue that both social comparisons to reduce uncertainty and the existence of normative pressure to comply depend on perceiving the source of influence as belonging to one's own category. This study tested these two approaches using three influence paradigms. First we demonstrate that, in Sherif's (1936) autokinetic effect paradigm, the impact of confederates on the formation of a norm decreases as their membership of a different category is made more salient to subjects. Second, in the Asch (1956) conformity paradigm, surveillance effectively exerts normative pressure if done by an in-group but not by an out-group. In-group influence decreases and out-group influence increases when subjects respond privately. Self-report data indicate that in-group confederates create more subjective uncertainty than out-group confederates and public responding seems to increase cohesiveness with in-group - but decrease it with out-group - sources of influence. In our third experiment we use the group polarization paradigm (e.g. Burnstein & Vinokur, 1973) to demonstrate that, when categorical differences between two subgroups within a discussion group are made salient, convergence of opinion between the subgroups is inhibited. Taken together the experiments show that self-categorization can be a crucial determining factor in social influence.",
"title": ""
},
{
"docid": "efae02feebc4a2efe2cf98ab4d19cd34",
"text": "User behavior on the Web changes over time. For example, the queries that people issue to search engines, and the underlying informational goals behind the queries vary over time. In this paper, we examine how to model and predict this temporal user behavior. We develop a temporal modeling framework adapted from physics and signal processing that can be used to predict time-varying user behavior using smoothing and trends. We also explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. We develop a learning procedure that can be used to construct models of users' activities based on features of current and historical behaviors. The results of experiments indicate that by using our framework to predict user behavior, we can achieve significant improvements in prediction compared to baseline models that weight historical evidence the same for all queries. We also develop a novel learning algorithm that explicitly learns when to apply a given prediction model among a set of such models. Our improved temporal modeling of user behavior can be used to enhance query suggestions, crawling policies, and result ranking.",
"title": ""
},
{
"docid": "9cdc7b6b382ce24362274b75da727183",
"text": "Collaborative spectrum sensing is subject to the attack of malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's report for spectrum sensing. Many existing attacker-detection schemes are based on the knowledge of the attacker's strategy and thus apply the Bayesian attacker detection. However, in practical cognitive radio systems the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on the abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called independent attack), it is shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.",
"title": ""
},
{
"docid": "6e8d1b5c2183ce09aadb09e4ff215241",
"text": "The widely used ChestX-ray14 dataset addresses an important medical image classification problem and has the following caveats: 1) many lung pathologies are visually similar, 2) a variant of diseases including lung cancer, tuberculosis, and pneumonia are present in a single scan, i.e. multiple labels and 3) The incidence of healthy images is much larger than diseased samples, creating imbalanced data. These properties are common in medical domain. Existing literature uses stateof-the-art DensetNet/Resnet models being transfer learned where output neurons of the networks are trained for individual diseases to cater for multiple diseases labels in each image. However, most of them don’t consider relationship between multiple classes. In this work we have proposed a novel error function, Multi-label Softmax Loss (MSML), to specifically address the properties of multiple labels and imbalanced data. Moreover, we have designed deep network architecture based on fine-grained classification concept that incorporates MSML. We have evaluated our proposed method on various network backbones and showed consistent performance improvements of AUC-ROC scores on the ChestX-ray14 dataset. The proposed error function provides a new method to gain improved performance across wider medical datasets.",
"title": ""
},
{
"docid": "0fd48f6f0f5ef1e68c2a157c16713e86",
"text": "Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.",
"title": ""
},
{
"docid": "37dbfc84d3b04b990d8b3b31d2013f77",
"text": "Large projects such as kernels, drivers and libraries follow a code style, and have recurring patterns. In this project, we explore learning based code recommendation, to use the project context and give meaningful suggestions. Using word vectors to model code tokens, and neural network based learning techniques, we are able to capture interesting patterns, and predict code that that cannot be predicted by a simple grammar and syntax based approach as in conventional IDEs. We achieve a total prediction accuracy of 56.0% on Linux kernel, a C project, and 40.6% on Twisted, a Python networking library.",
"title": ""
},
{
"docid": "eb7ccd69c0bbb4e421b8db3b265f5ba6",
"text": "The discovery of Novoselov et al. (2004) of a simple method to transfer a single atomic layer of carbon from the c-face of graphite to a substrate suitable for the measurement of its electrical and optical properties has led to a renewed interest in what was considered to be before that time a prototypical, yet theoretical, two-dimensional system. Indeed, recent theoretical studies of graphene reveal that the linear electronic band dispersion near the Brillouin zone corners gives rise to electrons and holes that propagate as if they were massless fermions and anomalous quantum transport was experimentally observed. Recent calculations and experimental determination of the optical phonons of graphene reveal Kohn anomalies at high-symmetry points in the Brillouin zone. They also show that the Born– Oppenheimer principle breaks down for doped graphene. Since a carbon nanotube can be viewed as a rolled-up sheet of graphene, these recent theoretical and experimental results on graphene should be important to researchers working on carbon nanotubes. The goal of this contribution is to review the exciting news about the electronic and phonon states of graphene and to suggest how these discoveries help understand the properties of carbon nanotubes.",
"title": ""
},
{
"docid": "f7e14c5e8a54e01c3b8f64e08f30a500",
"text": "As a subsystem of an Intelligent Transportation System (ITS), an Advanced Traveller Information System (ATIS) disseminates real-time traffic information to travellers. This paper analyses traffic flows data, describes methodology of traffic flows data processing and visualization in digital ArcGIS online maps. Calculation based on real time traffic data from equipped traffic sensors in Vilnius city streets. The paper also discusses about traffic conditions and impacts for Vilnius streets network from the point of traffic flows view. Furthermore, a comprehensive traffic flow GIS modelling procedure is presented, which relates traffic flows data from sensors to street network segments and updates traffic flow data to GIS database. GIS maps examples and traffic flows analysis possibilities in this paper presented as well.",
"title": ""
},
{
"docid": "a1bb09726327d73cf73c1aa9b0a2c39d",
"text": "Advances in neural network language models have demonstrated that these models can effectively learn representations of words meaning. In this paper, we explore a variation of neural language models that can learn on concepts taken from structured ontologies and extracted from free-text, rather than directly from terms in free-text.\n This model is employed for the task of measuring semantic similarity between medical concepts, a task that is central to a number of techniques in medical informatics and information retrieval. The model is built with two medical corpora (journal abstracts and patient records) and empirically validated on two ground-truth datasets of human-judged concept pairs assessed by medical professionals. Empirically, our approach correlates closely with expert human assessors (≈0.9) and outperforms a number of state-of-the-art benchmarks for medical semantic similarity.\n The demonstrated superiority of this model for providing an effective semantic similarity measure is promising in that this may translate into effectiveness gains for techniques in medical information retrieval and medical informatics (e.g., query expansion and literature-based discovery).",
"title": ""
},
{
"docid": "c1978e4936ed5bda4e51863dea7e93ee",
"text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.",
"title": ""
},
{
"docid": "0245101fac73b247fb2750413aad3915",
"text": "State evaluation and opponent modelling are important areas to consider when designing game-playing Artificial Intelligence. This paper presents a model for predicting which player will win in the real-time strategy game StarCraft. Model weights are learned from replays using logistic regression. We also present some metrics for estimating player skill which can be used a features in the predictive model, including using a battle simulation as a baseline to compare player performance against.",
"title": ""
},
{
"docid": "ba39b85859548caa2d3f1d51a7763482",
"text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.",
"title": ""
},
{
"docid": "dca65464cc8a3bb59f2544ef9a09e388",
"text": "Some authors clearly showed that faking reduces the construct validity of personality questionnaires, whilst many others found no such effect. A possible explanation for mixed results could be searched for in a variety of methodological strategies in forming comparison groups supposed to differ in the level of faking: candidates vs. non-candidates; groups of individuals with \"high\" vs. \"low\" social desirability score; and groups given instructions to respond honestly vs. instructions to \"fake good\". All three strategies may be criticized for addressing the faking problem indirectly – assuming that comparison groups really differ in the level of response distortion, which might not be true. Therefore, in a within-subject design study we examined how faking affects the construct validity of personality inventories using a direct measure of faking. The results suggest that faking reduces the construct validity of personality questionnaires gradually – the effect was stronger in the subsample of participants who distorted their responses to a greater extent.",
"title": ""
},
{
"docid": "4cf669d93a62c480f4f6795f47744bc8",
"text": "We present an estimate of an upper bound of 1.75 bits for the entropy of characters in printed English, obtained by constructing a word trigram model and then computing the cross-entropy between this model and a balanced sample of English text. We suggest the well-known and widely available Brown Corpus of printed English as a standard against which to measure progress in language modeling and offer our bound as the first of what we hope will be a series of steadily decreasing bounds.",
"title": ""
},
{
"docid": "b70d795f7f1bdbc18be034e1d3f20f8e",
"text": "Technical universities, especially in Europe, are facing an important challenge in attracting more diverse groups of students, and in keeping the students they attract motivated and engaged in the curriculum. We describe our experience with gamification, which we loosely define as a teaching technique that uses social gaming elements to deliver higher education. Over the past three years, we have applied gamification to undergraduate and graduate courses in a leading technical university in the Netherlands and in Europe. Ours is one of the first long-running attempts to show that gamification can be used to teach technically challenging courses. The two gamification-based courses, the first-year B.Sc. course Computer Organization and an M.Sc.-level course on the emerging technology of Cloud Computing, have been cumulatively followed by over 450 students and passed by over 75% of them, at the first attempt. We find that gamification is correlated with an increase in the percentage of passing students, and in the participation in voluntary activities and challenging assignments. Gamification seems to also foster interaction in the classroom and trigger students to pay more attention to the design of the course. We also observe very positive student assessments and volunteered testimonials, and a Teacher of the Year award.",
"title": ""
},
{
"docid": "4040c04a9a3cfebe850229cc78233f8c",
"text": "Utility computing delivers compute and storage resources to applications as an 'on-demand utility', much like electricity, from a distributed collection of computing resources. There is great interest in running database applications on utility resources (e.g., Oracle's Grid initiative) due to reduced infrastructure and management costs, higher resource utilization, and the ability to handle sudden load surges. Virtual Machine (VM) technology offers powerful mechanisms to manage a utility resource infrastructure. However, provisioning VMs for applications to meet system performance goals, e.g., to meet service level agreements (SLAs), is an open problem. We are building two systems at Duke - Shirako and NIMO - that collectively address this problem.\n Shirako is a toolkit for leasing VMs to an application from a utility resource infrastructure. NIMO learns application performance models using novel techniques based on active learning, and uses these models to guide VM provisioning in Shirako. We will demonstrate: (a) how NIMO learns performance models in an online and automatic fashion using active learning; and (b) how NIMO uses these models to do automated and on-demand provisioning of VMs in Shirako for two classes of database applications - multi-tier web services and computational science workflows.",
"title": ""
},
{
"docid": "7809fdedaf075955523b51b429638501",
"text": "PM10 prediction has attracted special legislative and scientific attention due to its harmful effects on human health. Statistical techniques have the potential for high-accuracy PM10 prediction and accordingly, previous studies on statistical methods for temporal, spatial and spatio-temporal prediction of PM10 are reviewed and discussed in this paper. A review of previous studies demonstrates that Support Vector Machines, Artificial Neural Networks and hybrid techniques show promise for suitable temporal PM10 prediction. A review of the spatial predictions of PM10 shows that the LUR (Land Use Regression) approach has been successfully utilized for spatial prediction of PM10 in urban areas. Of the six introduced approaches for spatio-temporal prediction of PM10, only one approach is suitable for high-resolved prediction (Spatial resolution < 100 m; Temporal resolution ď 24 h). In this approach, based upon the LUR modeling method, short-term dynamic input variables are employed as explanatory variables alongside typical non-dynamic input variables in a non-linear modeling procedure.",
"title": ""
}
] |
scidocsrr
|
bfc22d978100eb5b81880d8850ca33a6
|
An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology.
|
[
{
"docid": "4287db8deb3c4de5d7f2f5695c3e2e70",
"text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.",
"title": ""
}
] |
[
{
"docid": "bfcb1fd882a328daab503a7dd6b6d0a6",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several non-trivial examples.",
"title": ""
},
{
"docid": "c8dae180aae646bf00e202bd24f15f59",
"text": "Massively Multiplayer Online Games (MMOGs) continue to be a popular and lucrative sector of the gaming market. Project Massive was created to assess MMOG players' social experiences both inside and outside of their gaming environments and the impact of these activities on their everyday lives. The focus of Project Massive has been on the persistent player groups or \"guilds\" that form in MMOGs. The survey has been completed online by 1836 players, who reported on their play patterns, commitment to their player organizations, and personality traits like sociability, extraversion and depression. Here we report our cross-sectional findings and describe our future longitudinal work as we track players and their guilds across the evolving landscape of the MMOG product space.",
"title": ""
},
{
"docid": "f613a2ed6f64c469cf1180d1e8fe9e4a",
"text": "We describe an estimation technique which, given a measurement of the depth of a target from a wide-fieldof-view (WFOV) stereo camera pair, produces a minimax risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidenceinterval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical cap ture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman Filter. The minimax approach is found to dominate all the other methods in performance. In particular, for the minimax approach, a very close agreement is achieved between theoreticalcapture probability andempiricalcapture frequency. This allows performance to be accurately predicted, greatly facilitating the system design, and delineating the tasks that may be performed with a given system.",
"title": ""
},
{
"docid": "e69acc779b3bd736c0e5bd6962c8d459",
"text": "The genome-wide transcriptome profiling of cancerous and normal tissue samples can provide insights into the molecular mechanisms of cancer initiation and progression. RNA Sequencing (RNA-Seq) is a revolutionary tool that has been used extensively in cancer research. However, no existing RNA-Seq database provides all of the following features: (i) large-scale and comprehensive data archives and analyses, including coding-transcript profiling, long non-coding RNA (lncRNA) profiling and coexpression networks; (ii) phenotype-oriented data organization and searching and (iii) the visualization of expression profiles, differential expression and regulatory networks. We have constructed the first public database that meets these criteria, the Cancer RNA-Seq Nexus (CRN, http://syslab4.nchu.edu.tw/CRN). CRN has a user-friendly web interface designed to facilitate cancer research and personalized medicine. It is an open resource for intuitive data exploration, providing coding-transcript/lncRNA expression profiles to support researchers generating new hypotheses in cancer research and personalized medicine.",
"title": ""
},
{
"docid": "da1990ef0bb7ca5e184c32f33a0a8799",
"text": "Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. One of the key limitations of deconvolutional operations is that they result in the so-called checkerboard problem. This is caused by the fact that no direct relationship exists among adjacent pixels on the output feature map. To address this problem, we propose the pixel deconvolutional layer (PixelDCL) to establish direct relationships among adjacent pixels on the up-sampled feature map. Our method is based on a fresh interpretation of the regular deconvolution operation. The resulting PixelDCL can be used to replace any deconvolutional layer in a plug-and-play manner without compromising the fully trainable capabilities of original models. The proposed PixelDCL may result in slight decrease in efficiency, but this can be overcome by an implementation trick. Experimental results on semantic segmentation demonstrate that PixelDCL can consider spatial features such as edges and shapes and yields more accurate segmentation outputs than deconvolutional layers. When used in image generation tasks, our PixelDCL can largely overcome the checkerboard problem suffered by regular deconvolution operations.",
"title": ""
},
{
"docid": "cd12564b6875ddc972334f45bbf41ab9",
"text": "Purpose – The purpose of this paper is to review the literature on Total Productive Maintenance (TPM) and to present an overview of TPM implementation practices adopted by the manufacturing organizations. It also seeks to highlight appropriate enablers and success factors for eliminating barriers in successful TPM implementation. Design/methodology/approach – The paper systematically categorizes the published literature and then analyzes and reviews it methodically. Findings – The paper reveals the important issues in Total Productive Maintenance ranging from maintenance techniques, framework of TPM, overall equipment effectiveness (OEE), TPM implementation practices, barriers and success factors in TPM implementation, etc. The contributions of strategic TPM programmes towards improving manufacturing competencies of the organizations have also been highlighted here. Practical implications – The literature on classification of Total Productive Maintenance has so far been very limited. The paper reviews a large number of papers in this field and presents the overview of various TPM implementation practices demonstrated by manufacturing organizations globally. It also highlights the approaches suggested by various researchers and practitioners and critically evaluates the reasons behind failure of TPM programmes in the organizations. Further, the enablers and success factors for TPM implementation have also been highlighted for ensuring smooth and effective TPM implementation in the organizations. Originality/value – The paper contains a comprehensive listing of publications on the field in question and their classification according to various attributes. It will be useful to researchers, maintenance professionals and others concerned with maintenance to understand the significance of TPM.",
"title": ""
},
{
"docid": "3d0b50111f6c9168b8a269a7d99d8fbc",
"text": "Detecting lies is crucial in many areas, such as airport security, police investigations, counter-terrorism, etc. One technique to detect lies is through the identification of facial micro-expressions, which are brief, involuntary expressions shown on the face of humans when they are trying to conceal or repress emotions. Manual measurement of micro-expressions is hard labor, time consuming, and inaccurate. This paper presents the Design and Development of a Lie Detection System using Facial Micro-Expressions. It is an automated vision system designed and implemented using LabVIEW. An Embedded Vision System (EVS) is used to capture the subject's interview. Then, a LabVIEW program converts the video into series of frames and processes the frames, each at a time, in four consecutive stages. The first two stages deal with color conversion and filtering. The third stage applies geometric-based dynamic templates on each frame to specify key features of the facial structure. The fourth stage extracts the needed measurements in order to detect facial micro-expressions to determine whether the subject is lying or not. Testing results show that this system can be used for interpreting eight facial expressions: happiness, sadness, joy, anger, fear, surprise, disgust, and contempt, and detecting facial micro-expressions. It extracts accurate output that can be employed in other fields of studies such as psychological assessment. The results indicate high precision that allows future development of applications that respond to spontaneous facial expressions in real time.",
"title": ""
},
{
"docid": "d94a4f07939c0f420787b099336f426b",
"text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.",
"title": ""
},
{
"docid": "92d047856fdf20b41c4f673aae2ced66",
"text": "This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.",
"title": ""
},
{
"docid": "cd863a82161f4b28cc43eeda21e01a65",
"text": "Face aging, which renders aging faces for an input face, has attracted extensive attention in the multimedia research. Recently, several conditional Generative Adversarial Nets (GANs) based methods have achieved great success. They can generate images fitting the real face distributions conditioned on each individual age group. However, these methods fail to capture the transition patterns, e.g., the gradual shape and texture changes between adjacent age groups. In this paper, we propose a novel Contextual Generative Adversarial Nets (C-GANs) to specifically take it into consideration. The C-GANs consists of a conditional transformation network and two discriminative networks. The conditional transformation network imitates the aging procedure with several specially designed residual blocks. The age discriminative network guides the synthesized face to fit the real conditional distribution. The transition pattern discriminative network is novel, aiming to distinguish the real transition patterns with the fake ones. It serves as an extra regularization term for the conditional transformation network, ensuring the generated image pairs to fit the corresponding real transition pattern distribution. Experimental results demonstrate the proposed framework produces appealing results by comparing with the state-of-the-art and ground truth. We also observe performance gain for cross-age face verification.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
{
"docid": "2903e8be6b9a3f8dc818a57197ec1bee",
"text": "A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.",
"title": ""
},
{
"docid": "e32c8589a92a92ab8fd876bb760fb98e",
"text": "The importance of the social sciences for medical informatics is increasingly recognized. As ICT requires inter-action with people and thereby inevitably affects them, understanding ICT requires a focus on the interrelation between technology and its social environment. Sociotechnical approaches increase our understanding of how ICT applications are developed, introduced and become a part of social practices. Socio-technical approaches share several starting points: 1) they see health care work as a social, 'real life' phenomenon, which may seem 'messy' at first, but which is guided by a practical rationality that can only be overlooked at a high price (i.e. failed systems). 2) They see technological innovation as a social process, in which organizations are deeply affected. 3) Through in-depth, formative evaluation, they can help improve system design and implementation.",
"title": ""
},
{
"docid": "0ff27e119ec045674b9111bb5a9e5d29",
"text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.",
"title": ""
},
{
"docid": "cff0b5c06b322c887aed9620afeac668",
"text": "In addition to providing substantial performance enhancements, future 5G networks will also change the mobile network ecosystem. Building on the network slicing concept, 5G allows to “slice” the network infrastructure into separate logical networks that may be operated independently and targeted at specific services. This opens the market to new players: the infrastructure provider, which is the owner of the infrastructure, and the tenants, which may acquire a network slice from the infrastructure provider to deliver a specific service to their customers. In this new context, we need new algorithms for the allocation of network resources that consider these new players. In this paper, we address this issue by designing an algorithm for the admission and allocation of network slices requests that (i) maximises the infrastructure provider's revenue and (ii) ensures that the service guarantees provided to tenants are satisfied. Our key contributions include: (i) an analytical model for the admissibility region of a network slicing-capable 5G Network, (ii) the analysis of the system (modelled as a Semi-Markov Decision Process) and the optimisation of the infrastructure provider's revenue, and (iii) the design of an adaptive algorithm (based on Q-learning) that achieves close to optimal performance.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "adad5599122e63cde59322b7ba46461b",
"text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.",
"title": ""
},
{
"docid": "1be35b9562a428a7581541559dc16bd8",
"text": "OBJECTIVE\nTo assess the effect of virtual reality training on an actual laparoscopic operation.\n\n\nDESIGN\nProspective randomised controlled and blinded trial.\n\n\nSETTING\nSeven gynaecological departments in the Zeeland region of Denmark.\n\n\nPARTICIPANTS\n24 first and second year registrars specialising in gynaecology and obstetrics.\n\n\nINTERVENTIONS\nProficiency based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls).\n\n\nMAIN OUTCOME MEASURE\nThe main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task specific rating scale. The secondary outcome measure was operation time in minutes.\n\n\nRESULTS\nThe simulator trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers' inter-rater agreement was 0.79.\n\n\nCONCLUSION\nSkills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT00311792.",
"title": ""
},
{
"docid": "a7accee00559a544a3715acacffdd37d",
"text": "Engagement is complex and multifaceted, but crucial to learning. Computerized learning environments can provide a superior learning experience for students by automatically detecting student engagement (and, thus also disengagement) and adapting to it. This paper describes results from several previous studies that utilized facial features to automatically detect student engagement, and proposes new methods to expand and improve results. Videos of students will be annotated by third-party observers as mind wandering (disengaged) or not mind wandering (engaged). Automatic detectors will also be trained to classify the same videos based on students' facial features, and compared to the machine predictions. These detectors will then be improved by engineering features to capture facial expressions noted by observers and more heavily weighting training instances that were exceptionally-well classified by observers. Finally, implications of previous results and proposed work are discussed.",
"title": ""
},
{
"docid": "c1338abb3ddd4acb1ba7ed7ac0c4452c",
"text": "Defect prediction models that are trained on class imbalanced datasets (i.e., the proportion of defective and clean modules is not equally represented) are highly susceptible to produce inaccurate prediction models. Prior research compares the impact of class rebalancing techniques on the performance of defect prediction models. Prior research efforts arrive at contradictory conclusions due to the use of different choice of datasets, classification techniques, and performance measures. Such contradictory conclusions make it hard to derive practical guidelines for whether class rebalancing techniques should be applied in the context of defect prediction models. In this paper, we investigate the impact of 4 popularly-used class rebalancing techniques on 10 commonly-used performance measures and the interpretation of defect prediction models. We also construct statistical models to better understand in which experimental design settings that class rebalancing techniques are beneficial for defect prediction models. Through a case study of 101 datasets that span across proprietary and open-source systems, we recommend that class rebalancing techniques are necessary when quality assurance teams wish to increase the completeness of identifying software defects (i.e., Recall). However, class rebalancing techniques should be avoided when interpreting defect prediction models. We also find that class rebalancing techniques do not impact the AUC measure. Hence, AUC should be used as a standard measure when comparing defect prediction models.",
"title": ""
}
] |
scidocsrr
|
9e44f01957f05b39a959becfb42b17e9
|
Rainmakers: why bad weather means good productivity.
|
[
{
"docid": "13c6e4fc3a20528383ef7625c9dd2b79",
"text": "Seasonal affective disorder (SAD) is a syndrome characterized by recurrent depressions that occur annually at the same time each year. We describe 29 patients with SAD; most of them had a bipolar affective disorder, especially bipolar II, and their depressions were generally characterized by hypersomnia, overeating, and carbohydrate craving and seemed to respond to changes in climate and latitude. Sleep recordings in nine depressed patients confirmed the presence of hypersomnia and showed increased sleep latency and reduced slow-wave (delta) sleep. Preliminary studies in 11 patients suggest that extending the photoperiod with bright artificial light has an antidepressant effect.",
"title": ""
}
] |
[
{
"docid": "1bdbfe7d11ca567adcce97a853761939",
"text": "Dynamic contrast enhanced MRI (DCE-MRI) is an emerging imaging protocol in locating, identifying and characterizing breast cancer. However, due to image artifacts in MR, pixel intensity alone cannot accurately characterize the tissue properties. We propose a robust method based on the temporal sequence of textural change and wavelet transform for pixel-by-pixel classification. We first segment the breast region using an active contour model. We then compute textural change on pixel blocks. We apply a three-scale discrete wavelet transform on the texture temporal sequence to further extract frequency features. We employ a progressive feature selection scheme and a committee of support vector machines for the classification. We trained the system on ten cases and tested it on eight independent test cases. Receiver-operating characteristics (ROC) analysis shows that the texture temporal sequence (Az: 0.966 and 0.949 in training and test) is much more effective than the intensity sequence (Az: 0.871 and 0.868 in training and test). The wavelet transform further improves the classification performance (Az: 0.989 and 0.984 in training and test).",
"title": ""
},
{
"docid": "345a59aac1e89df5402197cca90ca464",
"text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia",
"title": ""
},
{
"docid": "ffca07962ddcdfa0d016df8020488b5d",
"text": "Differential-drive mobile robots are usually equipped with video-cameras for navigation purposes. In order to ensure proper operational capabilities of such systems, several calibration steps are required to estimate the following quantities: the video-camera intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame and, finally, the odometric parameters of the vehicle. In this paper the simultaneous estimation of the above mentioned quantities is achieved by a systematic and effective calibration procedure that does not require any iterative step. The calibration procedure needs only on-board measurements given by the wheels encoders, the camera and a number of properly taken camera snapshots of a set of known landmarks. Numerical simulations and experimental results with a mobile robot Khepera III equipped with a low-cost camera confirm the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "019c27341b9811a7347467490cea6a72",
"text": "For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.",
"title": ""
},
{
"docid": "68b15f0708c256d674f018b667f97bb5",
"text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.",
"title": ""
},
{
"docid": "94160496e0a470dc278f71c67508ae21",
"text": "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.",
"title": ""
},
{
"docid": "f8724f8166eeb48461f9f4ac8fdd87d3",
"text": "The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of crossspectral approaches is to take advantage of the strengths of each spectral band providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state-of-art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different crossspectral domains.",
"title": ""
},
{
"docid": "9b52a659fb6383e92c5968a082b01b71",
"text": "The internet of things (IoT) has a variety of application domains, including smart homes. This paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attacks from the smart home perspective. Further, this paper proposes an intelligent collaborative security management model to minimize security risk. The security challenges of the IoT for a smart home scenario are encountered, and a comprehensive IoT security management for smart homes has been proposed.",
"title": ""
},
{
"docid": "36142a4c0639662fe52dcc3fdf7b1ca4",
"text": "We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.",
"title": ""
},
{
"docid": "bf50151700f0e286ee5aa3a2bd74c249",
"text": "Computer systems that augment the process of finding the right expert for a given problem in an organization or world-wide are becoming feasible more than ever before, thanks to the prevalence of corporate Intranets and the Internet. This paper investigates such systems in two parts. We first explore the expert finding problem in depth, review and analyze existing systems in this domain, and suggest a domain model that can serve as a framework for design and development decisions. Based on our analyses of the problem and solution spaces, we then bring to light the gaps that remain to be addressed. Finally, we present our approach called DEMOIR, which is a modular architecture for expert finding systems that is based on a centralized expertise modeling server while also incorporating decentralized components for expertise information gathering and exploitation.",
"title": ""
},
{
"docid": "ae1f75aa978fd702be9b203487269517",
"text": "This paper presents a system that performs skill extraction from text documents. It outputs a list of professional skills that are relevant to a given input text. We argue that the system can be practical for hiring and management of personnel in an organization. We make use of the texts and the hyperlink graph of Wikipedia, as well as a list of professional skills obtained from the LinkedIn social network. The system is based on first computing similarities between an input document and the texts of Wikipedia pages and then using a biased, hub-avoiding version of the Spreading Activation algorithm on the Wikipedia graph in order to associate the input document with skills.",
"title": ""
},
{
"docid": "aa3be1c132e741d2c945213cfb0d96ad",
"text": "Collaborative filtering (CF) is one of the most successful recommendation approaches. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. However we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. Then we propose an unified framework to extend the traditional CF algorithms by utilizing the subgroups information for improving their top-N recommendation performance. Our approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real world data sets have demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "2d105fcec4109a6bc290c616938012f3",
"text": "One of the biggest challenges in automated driving is the ability to determine the vehicleâĂŹs location in realtime - a process known as self-localization or ego-localization. An automated driving system must be reliable under harsh conditions and environmental uncertainties (e.g. GPS denial or imprecision), sensor malfunction, road occlusions, poor lighting, and inclement weather. To cope with this myriad of potential problems, systems typically consist of a GPS receiver, in-vehicle sensors (e.g. cameras and LiDAR devices), and 3D High-Definition (3D HD) Maps. In this paper, we review state-of-the-art self-localization techniques, and present a benchmark for the task of image-based vehicle self-localization. Our dataset was collected on 10km of the Warren Freeway in the San Francisco Area under reasonable traffic and weather conditions. As input to the localization process, we provide timestamp-synchronized, consumer-grade monocular video frames (with camera intrinsic parameters), consumer-grade GPS trajectory, and production-grade 3D HD Maps. For evaluation, we provide survey-grade GPS trajectory. The goal of this dataset is to standardize and formalize the challenge of accurate vehicle self-localization and provide a benchmark to develop and evaluate algorithms.",
"title": ""
},
{
"docid": "592431c03450be59f10e56dcabed0ebf",
"text": "Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomenon. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.",
"title": ""
},
{
"docid": "98f8994f1ad9315f168878ff40c29afc",
"text": "OBJECTIVE\nSuicide remains a major global public health issue for young people. The reach and accessibility of online and social media-based interventions herald a unique opportunity for suicide prevention. To date, the large body of research into suicide prevention has been undertaken atheoretically. This paper provides a rationale and theoretical framework (based on the interpersonal theory of suicide), and draws on our experiences of developing and testing online and social media-based interventions.\n\n\nMETHOD\nThe implementation of three distinct online and social media-based intervention studies, undertaken with young people at risk of suicide, are discussed. We highlight the ways that these interventions can serve to bolster social connectedness in young people, and outline key aspects of intervention implementation and moderation.\n\n\nRESULTS\nInsights regarding the implementation of these studies include careful protocol development mindful of risk and ethical issues, establishment of suitably qualified teams to oversee development and delivery of the intervention, and utilisation of key aspects of human support (i.e., moderation) to encourage longer-term intervention engagement.\n\n\nCONCLUSIONS\nOnline and social media-based interventions provide an opportunity to enhance feelings of connectedness in young people, a key component of the interpersonal theory of suicide. Our experience has shown that such interventions can be feasibly and safely conducted with young people at risk of suicide. Further studies, with controlled designs, are required to demonstrate intervention efficacy.",
"title": ""
},
{
"docid": "36af986f61252f221a8135e80fe6432d",
"text": "This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing our most basic knowledge of the world, particularly for causal structures in physical, biological, psychological or social domains (Atran, 1995; Carey, 1985a; Kelley, 1973; McCloskey, 1983; Murphy & Medin, 1985; Nichols & Stich, 2003). A principal function of intuitive theories in these domains is to support the learning of new causal knowledge: generating and constraining people’s hypotheses about possible causal relations, highlighting variables, actions and observations likely to be informative about those hypotheses, and guiding people’s interpretation of the data they observe (Ahn & Kalish, 2000; Pazzani, 1987; Pazzani, Dyer & Flowers, 1986; Waldmann, 1996). Leading accounts of cognitive development argue for the importance of intuitive theories in children’s mental lives and frame the major transitions of cognitive development as instances of theory change (Carey, 1985a; Gopnik & Meltzoff, 1997; Inagaki & Hatano 2002; Wellman & Gelman, 1992). Here we attempt to lay out some prospects for understanding the structure, function, and acquisition of intuitive theories from a rational computational perspective. From this viewpoint, theory-like representations are not just a convenient way of summarizing certain aspects of human knowledge. They provide crucial foundations for successful learning and reasoning, and we want to understand how they do so. With this goal in mind, we focus on",
"title": ""
},
{
"docid": "45f120b05b3c48cd95d5dd55031987cb",
"text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.",
"title": ""
},
{
"docid": "d11a113fdb0a30e2b62466c641e49d6d",
"text": "Apache Spark has emerged as the de facto framework for big data analytics with its advanced in-memory programming model and upper-level libraries for scalable machine learning, graph analysis, streaming and structured data processing. It is a general-purpose cluster computing framework with language-integrated APIs in Scala, Java, Python and R. As a rapidly evolving open source project, with an increasing number of contributors from both academia and industry, it is difficult for researchers to comprehend the full body of development and research behind Apache Spark, especially those who are beginners in this area. In this paper, we present a technical review on big data analytics using Apache Spark. This review focuses on the key components, abstractions and features of Apache Spark. More specifically, it shows what Apache Spark has for designing and implementing big data algorithms and pipelines for machine learning, graph analysis and stream processing. In addition, we highlight some research and development directions on Apache Spark for big data analytics.",
"title": ""
},
{
"docid": "5e124199283b333e9b12877fd69dd051",
"text": "One of the major concerns of Integrated Traffic Management System (ITMS) in India is the identification of vehicles violating the stop-line at a road crossing. A large number of Indian vehicles do not stop at the designated stop-line and pose serious threat to the pedestrians crossing the roads. The current work reports the technicalities of the i $$ i $$ LPR (Indian License Plate Recognition) system implemented at five busy road-junctions in one populous metro city in India. The designed system is capable of localizing single line and two-line license plates of various sizes and shapes, recognizing characters of standard/ non-standard fonts and performing seamlessly in varying weather conditions. The performance of the system is evaluated with a large database of images for different environmental conditions. We have published a limited database of Indian vehicle images in http://code.google.com/p/cmaterdb/ for non-commercial use by fellow researchers. Despite unparallel complexity in the Indian city-traffic scenario, we have achieved around 92 % plate localization accuracy and 92.75 % plate level recognition accuracy over the localized vehicle images.",
"title": ""
},
{
"docid": "9cb28706a45251e3d2fb5af64dd9351f",
"text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.",
"title": ""
}
] |
scidocsrr
|
1b0fabf5c29000d15c6e1b2dd6eba2cc
|
Photometric stereo and weather estimation using internet images
|
[
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "f085832faf1a2921eedd3d00e8e592db",
"text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"title": ""
},
{
"docid": "1b6ddffacc50ad0f7e07675cfe12c282",
"text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.",
"title": ""
}
] |
[
{
"docid": "362b1a5119733eba058d1faab2d23ebf",
"text": "§ Mission and structure of the project. § Overview of the Stone Man version of the Guide to the SWEBOK. § Status and development process of the Guide. § Applications of the Guide in the fields of education, human resource management, professional development and licensing and certification. § Class exercise in applying the Guide to defining the competencies needed to support software life cycle process deployment. § Strategy for uptake and promotion of the Guide. § Discussion of promotion, trial usage and experimentation. Workshop Leaders:",
"title": ""
},
{
"docid": "f7ce2995fc0369fb8198742a5f1fefa3",
"text": "In this paper, we present a novel method for multimodal gesture recognition based on neural networks. Our multi-stream recurrent neural network (MRNN) is a completely data-driven model that can be trained from end to end without domain-specific hand engineering. The MRNN extends recurrent neural networks with Long Short-Term Memory cells (LSTM-RNNs) that facilitate the handling of variable-length gestures. We propose a recurrent approach for fusing multiple temporal modalities using multiple streams of LSTM-RNNs. In addition, we propose alternative fusion architectures and empirically evaluate the performance and robustness of these fusion strategies. Experimental results demonstrate that the proposed MRNN outperforms other state-of-theart methods in the Sheffield Kinect Gesture (SKIG) dataset, and has significantly high robustness to noisy inputs.",
"title": ""
},
{
"docid": "43baeb87f1798d52399ba8c78ffa7fef",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
{
"docid": "97decda9a345d39e814e19818eebe8b8",
"text": "In this review article, we present some challenges and opportunities in Ambient Assisted Living (AAL) for disabled and elderly people addressing various state of the art and recent approaches particularly in artificial intelligence, biomedical engineering, and body sensor networking.",
"title": ""
},
{
"docid": "7bea13124037f4e21b918f08c81b9408",
"text": "U.S. health care system is plagued by rising cost and limited access. While the cost of care is increasing faster than the rate of inflation, people living in rural areas have very limited access to quality health care due to a shortage of physicians and facilities in these areas. Information and communication technologies in general and telemedicine in particular offer great promise to extend quality care to underserved rural communities at an affordable cost. However, adoption of telemedicine among the various stakeholders of the health care system has not been very encouraging. Based on an analysis of the extant research literature, this study identifies critical factors that impede the adoption of telemedicine, and offers suggestions to mitigate these challenges.",
"title": ""
},
{
"docid": "a2f46b51b65c56acf6768f8e0d3feb79",
"text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "50b6f8067784fe4b9b3adf6db17ab4d1",
"text": "Available online 23 November 2012",
"title": ""
},
{
"docid": "e3e024fa2ee468fb2a64bfc8ddf69467",
"text": "We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.",
"title": ""
},
{
"docid": "f159ee79d20f00194402553758bcd031",
"text": "Recently, narrowband Internet of Things (NB-IoT), one of the most promising low power wide area (LPWA) technologies, has attracted much attention from both academia and industry. It has great potential to meet the huge demand for machine-type communications in the era of IoT. To facilitate research on and application of NB-IoT, in this paper, we design a system that includes NB devices, an IoT cloud platform, an application server, and a user app. The core component of the system is to build a development board that integrates an NB-IoT communication module and a subscriber identification module, a micro-controller unit and power management modules. We also provide a firmware design for NB device wake-up, data sensing, computing and communication, and the IoT cloud configuration for data storage and analysis. We further introduce a framework on how to apply the proposed system to specific applications. The proposed system provides an easy approach to academic research as well as commercial applications.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "4387549562fe2c0833b002d73d9a8330",
"text": "Complex numbers have long been favoured for digital signal processing, yet complex representations rarely appear in deep learning architectures. RNNs, widely used to process time series and sequence information, could greatly benefit from complex representations. We present a novel complex gate recurrent cell. When used together with norm-preserving state transition matrices, our complex gated RNN exhibits excellent stability and convergence properties. We demonstrate competitive performance of our complex gated RNN on the synthetic memory and adding task, as well as on the real-world task of human motion prediction.",
"title": ""
},
{
"docid": "9cbd8a5ac00fc940baa63cf0fb4d2220",
"text": "— The paper presents a technique for anomaly detection in user behavior in a smart-home environment. Presented technique can be used for a service that learns daily patterns of the user and proactively detects unusual situations. We have identified several drawbacks of previously presented models such as: just one type of anomaly-inactivity, intricate activity classification into hierarchy, detection only on a daily basis. Our novelty approach desists these weaknesses, provides additional information if the activity is unusually short/long, at unusual location. It is based on a semi-supervised clustering model that utilizes the neural network Self-Organizing Maps. The input to the system represents data primarily from presence sensors, however also other sensors with binary output may be used. The experimental study is realized on both synthetic data and areal database collected in our own smart-home installation for the period of two months.",
"title": ""
},
{
"docid": "c751115c128fd0776baf212ae19624ff",
"text": "This paper presents a natural language interface to relational database. It introduces some classical NLDBI products and their applications and proposes the architecture of a new NLDBI system including its probabilistic context free grammar, the inside and outside probabilities which can be used to construct the parse tree, an algorithm to calculate the probabilities, and the usage of dependency structures and verb subcategorization in analyzing the parse tree. Some experiment results are given to conclude the paper.",
"title": ""
},
{
"docid": "7d11d25dc6cd2822d7f914b11b7fe640",
"text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.",
"title": ""
},
{
"docid": "a23949a678e49a7e1495d98aae3adef2",
"text": "The continued increase in the usage of Small Scale Digital Devices (SSDDs) to browse the web has made mobile devices a rich potential for digital evidence. Issues may arise when suspects attempt to hide their browsing habits using applications like Orweb - which intends to anonymize network traffic as well as ensure that no browsing history is saved on the device. In this work, the researchers conducted experiments to examine if digital evidence could be reconstructed when the Orweb browser is used as a tool to hide web browsing activates on an Android smartphone. Examinations were performed on both a non-rooted and a rooted Samsung Galaxy S2 smartphone running Android 2.3.3. The results show that without rooting the device, no private web browsing traces through Orweb were found. However, after rooting the device, the researchers were able to locate Orweb browser history, and important corroborative digital evidence was found.",
"title": ""
},
{
"docid": "4b6755737ad43dec49e470220a24236a",
"text": "We address the issue of automatically extracting rhythm descriptors from audio signals, to be eventually used in content-based musical applications such as in the context of MPEG7. Our aim is to approach the comprehension of auditory scenes in raw polyphonic audio signals without preliminary source separation. As a first step towards the automatic extraction of rhythmic structures out of signals taken from the popular music repertoire, we propose an approach for automatically extracting time indexes of occurrences of different percussive timbres in an audio signal. Within this framework, we found that a particular issue lies in the classification of percussive sounds. In this paper, we report on the method currently used to deal with this problem.",
"title": ""
},
{
"docid": "b1a538752056e91fd5800911f36e6eb0",
"text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.",
"title": ""
}
] |
scidocsrr
|
43eb39b8a39919d4867a75fa54b29c66
|
Predicting Suicidal Behavior From Longitudinal Electronic Health Records.
|
[
{
"docid": "1c9644fa4e259da618d5371512f1e73d",
"text": "Suicidal behavior is a leading cause of injury and death worldwide. Information about the epidemiology of such behavior is important for policy-making and prevention. The authors reviewed government data on suicide and suicidal behavior and conducted a systematic review of studies on the epidemiology of suicide published from 1997 to 2007. The authors' aims were to examine the prevalence of, trends in, and risk and protective factors for suicidal behavior in the United States and cross-nationally. The data revealed significant cross-national variability in the prevalence of suicidal behavior but consistency in age of onset, transition probabilities, and key risk factors. Suicide is more prevalent among men, whereas nonfatal suicidal behaviors are more prevalent among women and persons who are young, are unmarried, or have a psychiatric disorder. Despite an increase in the treatment of suicidal persons over the past decade, incidence rates of suicidal behavior have remained largely unchanged. Most epidemiologic research on suicidal behavior has focused on patterns and correlates of prevalence. The next generation of studies must examine synergistic effects among modifiable risk and protective factors. New studies must incorporate recent advances in survey methods and clinical assessment. Results should be used in ongoing efforts to decrease the significant loss of life caused by suicidal behavior.",
"title": ""
}
] |
[
{
"docid": "eb847700cef64d89b88ff57fef9fae4b",
"text": "Software Defined Networking (SDN) is a new programmable network construction technology that enables centrally management and control, which is considered to be the future evolution trend of networks. A modularized carrier-grade SDN controller according to the characteristics of carrier-grade networks is designed and proposed, resolving the problem of controlling large-scale networks of carrier. The modularized architecture offers the system flexibility, scalability and stability. Functional logic of modules and core modules, such as link discovery module and topology module, are designed to meet the carrier's need. Static memory allocation, multi-threads technique and stick-package processing are used to improve the performance of controller, which is C programming language based. Processing logic of the communication mechanism of the controller is introduced, proving that the controller conforms to the OpenFlow specification and has a good interaction with OpenFlow-based switches. A controller cluster management system is used to interact with controllers through the east-west interface in order to manage large-scale networks. Furthermore, the effectiveness and high performance of the work in this paper has been verified by the testing using Cbench testing program. Moreover, the SDN controller we proposed has been running in China Telecom's Cloud Computing Key Laboratory, which showed the good results is achieved.",
"title": ""
},
{
"docid": "7be1f8be2c74c438b1ed1761e157d3a3",
"text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "0574f193736e10b13a22da2d9c0ee39a",
"text": "Preliminary communication In food production industry, forecasting the timing of demands is crucial in planning production scheduling to satisfy customer needs on time. In the literature, several statistical models have been used in demand forecasting in Food and Beverage (F&B) industry and the choice of the most suitable forecasting model remains a central concern. In this context, this article aims to compare the performances between Trend Analysis, Decomposition and Holt-Winters (HW) models for the prediction of a time series formed by a group of jam and sherbet product demands. Data comprised the series of monthly sales from January 2013 to December 2014 obtained from a private company. As performance measures, metric analysis of the Mean Absolute Percentage Error (MAPE) is used. In this study, the HW and Decomposition models obtained better results regarding the performance metrics.",
"title": ""
},
{
"docid": "33db7ac45c020d2a9e56227721b0be70",
"text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "537d6fdfb26e552fb3254addfbb6ac49",
"text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.",
"title": ""
},
{
"docid": "4f0d34e830387947f807213599d47652",
"text": "An essential feature of large scale free graphs, such as the Web, protein-to-protein interaction, brain connectivity, and social media graphs, is that they tend to form recursive communities. The latter are densely connected vertex clusters exhibiting quick local information dissemination and processing. Under the fuzzy graph model vertices are fixed while each edge exists with a given probability according to a membership function. This paper presents Fuzzy Walktrap and Fuzzy Newman-Girvan, fuzzy versions of two established community discovery algorithms. The proposed algorithms have been applied to a synthetic graph generated by the Kronecker model with different termination criteria and the results are discussed. Keywords-Fuzzy graphs; Membership function; Community detection; Termination criteria; Walktrap algorithm; NewmanGirvan algorithm; Edge density; Kronecker model; Large graph analytics; Higher order data",
"title": ""
},
{
"docid": "ca2e577e819ac49861c65bfe8d26f5a1",
"text": "A design of a delay based self-oscillating class-D power amplifier for piezoelectric actuators is presented and modelled. First order and second order configurations are discussed in detail and analytical results reveal the stability criteria of a second order system, which should be respected in the design. It also shows if the second order system converges, it will tend to give a correct pulse modulation regarding to the input modulation index. Experimental results show the effectiveness of this design procedure. For a piezoelectric load of 400 nF, powered by a 150 V 10 kHz sinusoidal signal, a total harmonic distortion (THD) of 4.3% is obtained.",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "acce5017b1138c67e24e661c1eabc185",
"text": "The main goal of the paper is to continuously enlarge the set of software building blocks that can be reused in the search and rescue domain.",
"title": ""
},
{
"docid": "a8a24c602c5f295495b7dc68c606590d",
"text": "This paper deals with the design of an AC-220-volt-mains-fed power supply for ozone generation. A power stage consisting of a buck converter to regulate the output power plus a current-fed parallel-resonant push-pull inverter to supply an ozone generator (OG) is proposed and analysed. A closed-loop operation is presented as a method to compensate variations in the AC source voltage. Inverter's step-up transformer issues and their effect on the performance of the overall circuit are also studied. The use of a UC3872 integrated circuit is proposed to control both the push-pull inverter and the buck converter, as well as to provide the possibility to protect the power supply in case a short circuit, an open-lamp operation or any other circumstance might occur. Implementation of a 100 W prototype and experimental results are shown and discussed.",
"title": ""
},
{
"docid": "93ed81d5244715aaaf402817aa674310",
"text": "Automatically recognized terminology is widely used for various domain-specific texts processing tasks, such as machine translation, information retrieval or ontology construction. However, there is still no agreement on which methods are best suited for particular settings and, moreover, there is no reliable comparison of already developed methods. We believe that one of the main reasons is the lack of state-of-the-art methods implementations, which are usually non-trivial to recreate. In order to address these issues, we present ATR4S, an open-source software written in Scala that comprises more than 15 methods for automatic terminology recognition (ATR) and implements the whole pipeline from text document preprocessing, to term candidates collection, term candidates scoring, and finally, term candidates ranking. It is highly scalable, modular and configurable tool with support of automatic caching. We also compare 13 state-of-the-art methods on 7 open datasets by average precision and processing time. Experimental comparison reveals that no single method demonstrates best average precision for all datasets and that other available tools for ATR do not contain the best methods.",
"title": ""
},
{
"docid": "40cf1e5ecb0e79f466c65f8eaff77cb2",
"text": "Spiral patterns on the surface of a sphere have been seen in laboratory experiments and in numerical simulations of reaction–diffusion equations and convection. We classify the possible symmetries of spirals on spheres, which are quite different from the planar case since spirals typically have tips at opposite points on the sphere. We concentrate on the case where the system has an additional sign-change symmetry, in which case the resulting spiral patterns do not rotate. Spiral patterns arise through a mode interaction between spherical harmonics degree l and l+1. Using the methods of equivariant bifurcation theory, possible symmetry types are determined for each l. For small values of l, the centre manifold equations are constructed and spiral solutions are found explicitly. Bifurcation diagrams are obtained showing how spiral states can appear at secondary bifurcations from primary solutions, or tertiary bifurcations. The results are consistent with numerical simulations of a model pattern-forming system.",
"title": ""
},
{
"docid": "a354b6c03cadf539ccd01a247447ebc1",
"text": "In the present study, we tested in vitro different parts of 35 plants used by tribals of the Similipal Biosphere Reserve (SBR, Mayurbhanj district, India) for the management of infections. From each plant, three extracts were prepared with different solvents (water, ethanol, and acetone) and tested for antimicrobial (E. coli, S. aureus, C. albicans); anthelmintic (C. elegans); and antiviral (enterovirus 71) bioactivity. In total, 35 plant species belonging to 21 families were recorded from tribes of the SBR and periphery. Of the 35 plants, eight plants (23%) showed broad-spectrum in vitro antimicrobial activity (inhibiting all three test strains), while 12 (34%) exhibited narrow spectrum activity against individual pathogens (seven as anti-staphylococcal and five as anti-candidal). Plants such as Alangium salviifolium, Antidesma bunius, Bauhinia racemosa, Careya arborea, Caseria graveolens, Cleistanthus patulus, Colebrookea oppositifolia, Crotalaria pallida, Croton roxburghii, Holarrhena pubescens, Hypericum gaitii, Macaranga peltata, Protium serratum, Rubus ellipticus, and Suregada multiflora showed strong antibacterial effects, whilst Alstonia scholaris, Butea monosperma, C. arborea, C. pallida, Diospyros malbarica, Gmelina arborea, H. pubescens, M. peltata, P. serratum, Pterospermum acerifolium, R. ellipticus, and S. multiflora demonstrated strong antifungal activity. Plants such as A. salviifolium, A. bunius, Aporosa octandra, Barringtonia acutangula, C. graveolens, C. pallida, C. patulus, G. arborea, H. pubescens, H. gaitii, Lannea coromandelica, M. peltata, Melastoma malabathricum, Millettia extensa, Nyctanthes arbor-tristis, P. serratum, P. acerifolium, R. ellipticus, S. multiflora, Symplocos cochinchinensis, Ventilago maderaspatana, and Wrightia arborea inhibit survival of C. elegans and could be a potential source for anthelmintic activity. Additionally, plants such as A. bunius, C. graveolens, C. patulus, C. oppositifolia, H. gaitii, M. extensa, P. serratum, R. ellipticus, and V. maderaspatana showed anti-enteroviral activity. Most of the plants, whose traditional use as anti-infective agents by the tribals was well supported, show in vitro inhibitory activity against an enterovirus, bacteria (E. coil, S. aureus), a fungus (C. albicans), or a nematode (C. elegans).",
"title": ""
},
{
"docid": "30c6829427aaa8d23989afcd666372f7",
"text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in highperformance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of the known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping too many packets to make detection ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making multi-unit system work in realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice. Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and",
"title": ""
},
{
"docid": "5ce00014f84277aca0a4b7dfefc01cbb",
"text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.",
"title": ""
},
{
"docid": "bd62496839434c34bcf876a581d38c37",
"text": "We present results from an experiment similar to one performed by Packard [24], in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton’s λ parameter [17], and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near “critical” λ values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with λ values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to λ, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, New Mexico, U.S.A. 87501. Email: mm@santafe.edu, pth@santafe.edu Physics Department, University of California, Berkeley, CA, U.S.A. 94720. Email: chaos@gojira.berkeley.edu",
"title": ""
},
{
"docid": "c302699cb7dec9f813117bfe62d3b5fb",
"text": "Pipe networks constitute the means of transporting fluids widely used nowadays. Increasing the operational reliability of these systems is crucial to minimize the risk of leaks, which can cause serious pollution problems to the environment and have disastrous consequences if the leak occurs near residential areas. Considering the importance in developing efficient systems for detecting leaks in pipelines, this work aims to detect the characteristic frequencies (predominant) in case of leakage and no leakage. The methodology consisted of capturing the experimental data through a microphone installed inside the pipeline and coupled to a data acquisition card and a computer. The Fast Fourier Transform (FFT) was used as the mathematical approach to the signal analysis from the microphone, generating a frequency response (spectrum) which reveals the characteristic frequencies for each operating situation. The tests were carried out using distinct sizes of leaks, situations without leaks and cases with blows in the pipe caused by metal instruments. From the leakage tests, characteristic peaks were found in the FFT frequency spectrum using the signal generated by the microphone. Such peaks were not observed in situations with no leaks. Therewith, it was realized that it was possible to distinguish, through spectral analysis, an event of leakage from an event without leakage.",
"title": ""
},
{
"docid": "d9fe0834ccf80bddadc5927a8199cd2c",
"text": "Deep Residual Networks (ResNets) have recently achieved state-of-the-art results on many challenging computer vision tasks. In this work we analyze the role of Batch Normalization (BatchNorm) layers on ResNets in the hope of improving the current architecture and better incorporating other normalization techniques, such as Normalization Propagation (NormProp), into ResNets. Firstly, we verify that BatchNorm helps distribute representation learning to residual blocks at all layers, as opposed to a plain ResNet without BatchNorm where learning happens mostly in the latter part of the network. We also observe that BatchNorm well regularizes Concatenated ReLU (CReLU) activation scheme on ResNets, whose magnitude of activation grows by preserving both positive and negative responses when going deeper into the network. Secondly, we investigate the use of NormProp as a replacement for BatchNorm in ResNets. Though NormProp theoretically attains the same effect as BatchNorm on generic convolutional neural networks, the identity mapping of ResNets invalidates its theoretical promise and NormProp exhibits a significant performance drop when naively applied. To bridge the gap between BatchNorm and NormProp in ResNets, we propose a simple modification to NormProp and employ the CReLU activation scheme. We experiment on visual object recognition benchmark datasets such as CIFAR10/100 and ImageNet and demonstrate that 1) the modified NormProp performs better than the original NormProp but is still not comparable to BatchNorm and 2) CReLU improves the performance of ResNets with or without normalizations.",
"title": ""
}
] |
scidocsrr
|
ef9f48caaba38c29329650121b2ef6c8
|
Predictive role of prenasal thickness and nasal bone for Down syndrome in the second trimester.
|
[
{
"docid": "e7315716a56ffa7ef2461c7c99879efb",
"text": "OBJECTIVE\nTo investigate the potential value of ultrasound examination of the fetal profile for present/hypoplastic fetal nasal bone at 15-22 weeks' gestation as a marker for trisomy 21.\n\n\nMETHODS\nThis was an observational ultrasound study in 1046 singleton pregnancies undergoing amniocentesis for fetal karyotyping at 15-22 (median, 17) weeks' gestation. Immediately before amniocentesis the fetal profile was examined to determine if the nasal bone was present or hypoplastic (absent or shorter than 2.5 mm). The incidence of nasal hypoplasia in the trisomy 21 and the chromosomally normal fetuses was determined and the likelihood ratio for trisomy 21 for nasal hypoplasia was calculated.\n\n\nRESULTS\nAll fetuses were successfully examined for the presence of the nasal bone. The nasal bone was hypoplastic in 21/34 (61.8%) fetuses with trisomy 21, in 12/982 (1.2%) chromosomally normal fetuses and in 1/30 (3.3%) fetuses with other chromosomal defects. In 3/21 (14.3%) trisomy 21 fetuses with nasal hypoplasia there were no other abnormal ultrasound findings. In the chromosomally normal group hypoplastic nasal bone was found in 0.5% of Caucasians and in 8.8% of Afro-Caribbeans. The likelihood ratio for trisomy 21 for hypoplastic nasal bone was 50.5 (95% CI 27.1-92.7) and for present nasal bone it was 0.38 (95% CI 0.24-0.56).\n\n\nCONCLUSION\nNasal bone hypoplasia at the 15-22-week scan is associated with a high risk for trisomy 21 and it is a highly sensitive and specific marker for this chromosomal abnormality.",
"title": ""
}
] |
[
{
"docid": "2adf5e06cfc7e6d8cf580bdada485a23",
"text": "This paper describes the comprehensive Terrorism Knowledge Base TM (TKB TM) which will ultimately contain all relevant knowledge about terrorist groups, their members, leaders, affiliations , etc., and full descriptions of specific terrorist events. Led by world-class experts in terrorism , knowledge enterers have, with simple tools, been building the TKB at the rate of up to 100 assertions per person-hour. The knowledge is stored in a manner suitable for computer understanding and reasoning. The TKB also utilizes its reasoning modules to integrate data and correlate observations, generate scenarios, answer questions and compose explanations.",
"title": ""
},
{
"docid": "87133250a9e04fd42f5da5ecacd39d70",
"text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.",
"title": ""
},
{
"docid": "cd0c1507c1187e686c7641388413d3b5",
"text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.",
"title": ""
},
{
"docid": "7e683f15580e77b1e207731bb73b8107",
"text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may in9uence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, e;cient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f2b6afabd67354280d091d11e8265b96",
"text": "This paper aims to present three new methods for color detection and segmentation of road signs. The images are taken by a digital camera mounted in a car. The RGB images are converted into IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place in Dalarna University/Sweden in the field of the ITS",
"title": ""
},
{
"docid": "8f289714182c490b726b8edbbb672cfd",
"text": "Design and implementation of a 15kV sub-nanosecond pulse generator using Trigatron type spark gap as a switch. Straightforward and compact trigger generator using pulse shaping network which produces a trigger pulse of sub-nanosecond rise time. A pulse power system requires delivering a high voltage, high coulomb in short rise time. This is achieved by using pulse shaping network comprises of parallel combinations of capacitors and inductor. Spark gap switches are used to switch the energy from capacitive source to inductive load. The pulse hence generated can be used for synchronization of two or more spark gap. Because of the fast rise time and the high output voltage, the reliability of the synchronization is increased. The analytical calculations, simulation, have been carried out to select the circuit parameters. Simulation results using MATLAB/SIMULINK have been implemented in the experimental setup and sub-nanoseconds output waveforms have been obtained.",
"title": ""
},
{
"docid": "874b14b3c3e15b43de3310327affebaf",
"text": "We present the Accelerated Quadratic Proxy (AQP) - a simple first-order algorithm for the optimization of geometric energies defined over triangular and tetrahedral meshes.\n The main stumbling block of current optimization techniques used to minimize geometric energies over meshes is slow convergence due to ill-conditioning of the energies at their minima. We observe that this ill-conditioning is in large part due to a Laplacian-like term existing in these energies. Consequently, we suggest to locally use a quadratic polynomial proxy, whose Hessian is taken to be the Laplacian, in order to achieve a preconditioning effect. This already improves stability and convergence, but more importantly allows incorporating acceleration in an almost universal way, that is independent of mesh size and of the specific energy considered.\n Experiments with AQP show it is rather insensitive to mesh resolution and requires a nearly constant number of iterations to converge; this is in strong contrast to other popular optimization techniques used today such as Accelerated Gradient Descent and Quasi-Newton methods, e.g., L-BFGS. We have tested AQP for mesh deformation in 2D and 3D as well as for surface parameterization, and found it to provide a considerable speedup over common baseline techniques.",
"title": ""
},
{
"docid": "c7ea816f2bb838b8c5aac3cdbbd82360",
"text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "4a043a02f3fad07797245b0a2c4ea4c5",
"text": "The worldwide population of people over the age of 65 has been predicted to more than double from 1990 to 2025. Therefore, ubiquitous health-care systems have become an important topic of research in recent years. In this paper, an integrated system for portable electrocardiography (ECG) monitoring, with an on-board processor for time–frequency analysis of heart rate variability (HRV), is presented. The main function of proposed system comprises three parts, namely, an analog-to-digital converter (ADC) controller, an HRV processor, and a lossless compression engine. At the beginning, ECG data acquired from front-end circuits through the ADC controller is passed through the HRV processor for analysis. Next, the HRV processor performs real-time analysis of time–frequency HRV using the Lomb periodogram and a sliding window configuration. The Lomb periodogram is suited for spectral analysis of unevenly sampled data and has been applied to time–frequency analysis of HRV in the proposed system. Finally, the ECG data are compressed by 2.5 times using the lossless compression engine before output using universal asynchronous receiver/transmitter (UART). Bluetooth is employed to transmit analyzed HRV data and raw ECG data to a remote station for display or further analysis. The integrated ECG health-care system design proposed has been implemented using UMC 90 nm CMOS technology. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6eb229b17a4634183818ff4a15f981b6",
"text": "Fine-grained image classification is a challenging task due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Most existing fine-grained image classification methods generally learn part detection models to obtain the semantic parts for better classification accuracy. Despite achieving promising results, these methods mainly have two limitations: (1) not all the parts which obtained through the part detection models are beneficial and indispensable for classification, and (2) fine-grained image classification requires more detailed visual descriptions which could not be provided by the part locations or attribute annotations. For addressing the above two limitations, this paper proposes the two-stream model combing vision and language (CVL) for learning latent semantic representations. The vision stream learns deep representations from the original visual information via deep convolutional neural network. The language stream utilizes the natural language descriptions which could point out the discriminative parts or characteristics for each image, and provides a flexible and compact way of encoding the salient visual aspects for distinguishing sub-categories. Since the two streams are complementary, combing the two streams can further achieves better classification accuracy. Comparing with 12 state-of-the-art methods on the widely used CUB-200-2011 dataset for fine-grained image classification, the experimental results demonstrate our CVL approach achieves the best performance.",
"title": ""
},
{
"docid": "06675c4b42683181cecce7558964c6b6",
"text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.",
"title": ""
},
{
"docid": "0d9057d8a40eb8faa7e67128a7d24565",
"text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.",
"title": ""
},
{
"docid": "c0b30475f78acefae1c15f9f5d6dc57b",
"text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.",
"title": ""
},
{
"docid": "898ff77dbfaf00efa3b08779a781aa0b",
"text": "The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future diseases risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.",
"title": ""
},
{
"docid": "bf4b6cd15c0b3ddb5892f1baea9dec68",
"text": "The purpose of this study was to examine the distribution, abundance and characteristics of plastic particles in plankton samples collected routinely in Northeast Pacific ecosystems, and to contribute to the development of ideas for future research into the occurrence and impact of small plastic debris in marine pelagic ecosystems. Plastic debris particles were assessed from zooplankton samples collected as part of the National Oceanic and Atmospheric Administration's (NOAA) ongoing ecosystem surveys during two research cruises in the Southeast Bering Sea in the spring and fall of 2006 and four research cruises off the U.S. west coast (primarily off southern California) in spring, summer and fall of 2006, and in January of 2007. Nets with 0.505 mm mesh were used to collect surface samples during all cruises, and sub-surface samples during the four cruises off the west coast. The 595 plankton samples processed indicate that plastic particles are widely distributed in surface waters. The proportion of surface samples from each cruise that contained particles of plastic ranged from 8.75 to 84.0%, whereas particles were recorded in sub-surface samples from only one cruise (in 28.2% of the January 2007 samples). Spatial and temporal variability was apparent in the abundance and distribution of the plastic particles and mean standardized quantities varied among cruises with ranges of 0.004-0.19 particles/m³, and 0.014-0.209 mg dry mass/m³. Off southern California, quantities for the winter cruise were significantly higher, and for the spring cruise significantly lower than for the summer and fall surveys (surface data). Differences between surface particle concentrations and mass for the Bering Sea and California coast surveys were significant for pair-wise comparisons of the spring but not the fall cruises. The particles were assigned to three plastic product types: product fragments, fishing net and line fibers, and industrial pellets; and five size categories: <1 mm, 1-2.5 mm, >2.5-5 mm, >5-10 mm, and >10 mm. Product fragments accounted for the majority of the particles, and most were less than 2.5 mm in size. The ubiquity of such particles in the survey areas and predominance of sizes <2.5 mm implies persistence in these pelagic ecosystems as a result of continuous breakdown from larger plastic debris fragments, and widespread distribution by ocean currents. Detailed investigations of the trophic ecology of individual zooplankton species, and their encounter rates with various size ranges of plastic particles in the marine pelagic environment, are required in order to understand the potential for ingestion of such debris particles by these organisms. Ongoing plankton sampling programs by marine research institutes in large marine ecosystems are good potential sources of data for continued assessment of the abundance, distribution and potential impact of small plastic debris in productive coastal pelagic zones.",
"title": ""
},
{
"docid": "0fe02fcc6f68ba1563d3f5d96a8da330",
"text": "We present a novel technique for jointly predicting semantic arguments for lexical predicates. The task is to find the best matching between semantic roles and sentential spans, subject to structural constraints that come from expert linguistic knowledge (e.g., in the FrameNet lexicon). We formulate this task as an integer linear program (ILP); instead of using an off-the-shelf tool to solve the ILP, we employ a dual decomposition algorithm, which we adapt for exact decoding via a branch-and-bound technique. Compared to a baseline that makes local predictions, we achieve better argument identification scores and avoid all structural violations. Runtime is nine times faster than a proprietary ILP solver.",
"title": ""
},
{
"docid": "e1b6cc1dbd518760c414cd2ddbe88dd5",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] |
scidocsrr
|
b333dcddc559ebcf28b6f58e4124b6fa
|
Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds
|
[
{
"docid": "634b30b81da7139082927109b4c22d5e",
"text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
}
] |
[
{
"docid": "489aa160c450539b50c63c6c3c6993ab",
"text": "Adequacy of citations is very important for a scientific paper. However, it is not an easy job to find appropriate citations for a given context, especially for citations in different languages. In this paper, we define a novel task of cross-language context-aware citation recommendation, which aims at recommending English citations for a given context of the place where a citation is made in a Chinese paper. This task is very challenging because the contexts and citations are written in different languages and there exists a language gap when matching them. To tackle this problem, we propose the bilingual context-citation embedding algorithm (i.e. BLSRec-I), which can learn a low-dimensional joint embedding space for both contexts and citations. Moreover, two advanced algorithms named BLSRec-II and BLSRec-III are proposed by enhancing BLSRec-I with translation results and abstract information, respectively. We evaluate the proposed methods based on a real dataset that contains Chinese contexts and English citations. The results demonstrate that our proposed algorithms can outperform a few baselines and the BLSRec-II and BLSRec-III methods can outperform the BLSRec-I method.",
"title": ""
},
{
"docid": "5e7a87078f92b7ce145e24a2e7340f1b",
"text": "Unsupervised artificial neural networks are now considered as a likely alternative to classical computing models in many application domains. For example, recent neural models defined by neuro-scientists exhibit interesting properties for an execution in embedded and autonomous systems: distributed computing, unsupervised learning, self-adaptation, self-organisation, tolerance. But these properties only emerge from large scale and fully connected neural maps that result in intensive computation coupled with high synaptic communications. We are interested in deploying these powerful models in the embedded context of an autonomous bio-inspired robot learning its environment in realtime. So we study in this paper in what extent these complex models can be simplified and deployed in hardware accelerators compatible with an embedded integration. Thus we propose a Neural Processing Unit designed as a programmable accelerator implementing recent equations close to self-organizing maps and neural fields. The proposed architecture is validated on FPGA devices and compared to state of the art solutions. The trade-off proposed by this dedicated but programmable neural processing unit allows to achieve significant improvements and makes our architecture adapted to many embedded systems.",
"title": ""
},
{
"docid": "014759efa636aec38aa35287b61e44a4",
"text": "Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection",
"title": ""
},
{
"docid": "8476c0832f62e061cf2e63f61e59abf0",
"text": "OBJECTIVE\nThis study examined the effectiveness of using a weighted vest for increasing attention to a fine motor task and decreasing self-stimulatory behaviors in preschool children with pervasive developmental disorders (PDD).\n\n\nMETHOD\nUsing an ABA single-subject design, the duration of attention to task and self-stimulatory behaviors and the number of distractions were measured in five preschool children with PDD over a period of 6 weeks.\n\n\nRESULTS\nDuring the intervention phase, all participants displayed a decrease in the number of distractions and an increase in the duration of focused attention while wearing the weighted vest. All but 1 participant demonstrated a decrease in the duration of self-stimulatory behaviors while wearing a weighted vest; however, the type of self-stimulatory behaviors changed and became less self-abusive for this child while she wore the vest. During the intervention withdrawal phase, 3 participants experienced an increase in the duration of self-stimulatory behaviors, and all participants experienced an increase in the number of distractions and a decrease in the duration of focused attention. The increase or decrease, however, never returned to baseline levels for these behaviors.\n\n\nCONCLUSION\nThe findings suggest that for these 5 children with PDD, the use of a weighted vest resulted in an increase in attention to task and decrease in self-stimulatory behaviors. The most consistent improvement observed was the decreased number of distractions. Additional research is necessary to build consensus about the effectiveness of wearing a weighted vest to increase attention to task and decrease self-stimulatory behaviors for children with PDD.",
"title": ""
},
{
"docid": "b9ec6867c23e5e5ecf53a4159872747c",
"text": "Competition in the wireless telecommunications industry is rampant. To maintain profitability, wireless carriers must control churn, the loss of subscribers who switch from one carrier to another. We explore statistical techniques for churn prediction and, based on these predictions, an optimal policy for identifying customers to whom incentives should be offered to increase retention. Our experiments are based on a data base of nearly 47,000 U.S. domestic subscribers, and includes information about their usage, billing, credit, application, and complaint history. We show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, churn prediction and remediation can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Competition in the wireless telecommunications industry is rampant. As many as seven competing carriers operate in each market. The industry is extremely dynamic, with new services, technologies, and carriers constantly altering the landscape. Carriers announce new rates and incentives weekly, hoping to entice new subscribers and to lure subscribers away from the competition. The extent of rivalry is reflected in the deluge of advertisements for wireless service in the daily newspaper and other mass media. The United States had 69 million wireless subscribers in 1998, roughly 25% of the population. Some markets are further developed; for example, the subscription rate in Finland is 53%. Industry forecasts are for a U.S. penetration rate of 48% by 2003. Although there is significant room for growth in most markets, the industry growth rate is declining and competition is rising. Consequently, it has become crucial for wireless carriers to control churn—the loss of customers who switch from one carrier to another. At present, domestic monthly churn rates are 2-3% of the customer base. At an average cost of $400 to acquire a subscriber, churn cost the industry nearly $6.3 billion in 1998; the total annual loss rose to nearly $9.6 billion when lost monthly revenue from subscriber cancellations is considered (Luna, 1998). It costs roughly five times as much to sign on a new subscriber as to retain an existing one. Consequently, for a carrier with 1.5 million subscribers, reducing the monthly churn rate from 2% to 1% would yield an increase in annual earnings of at least $54 million, and an increase in shareholder value of approximately $150 million. (Estimates are even higher when lost monthly revenue is considered; see Fowlkes, Madan, Andrew, & Jensen, 1999; Luna, 1998.) The goal of our research is to evaluate the benefits of predicting churn using techniques from statistical machine learning. We designed models that predict the probability Mozer, M. C., Wolniewicz, R., Grimes, D. B., Johnson, E., & Kaushansky, H. (2000). Churn reduction in the wireless industry. In S. A. Solla, T. K. Leen, & K.-R. Mueller (Eds.), Advances in Neural Information Processing Systems 12 (pp. 935941). Cambridge, MA: MIT Press. of a subscriber churning within a short time window, and we evaluated how well these predictions could be used for decision making by estimating potential cost savings to the wireless carrier under a variety of assumptions concerning subscriber behavior.",
"title": ""
},
{
"docid": "850854aeae187ffdd74c56135d9a4d5b",
"text": "Dynamic interactive maps with transparent but powerful human interface capabilities are beginning to emerge for a variety of geographical information systems, including ones situated on portables for travelers, students, business and service people, and others working in field settings. In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyze for their potential effectiveness in interacting with this new generation of map systems. Input modality (speech, writing, multimodal) and map display format (highly versus minimally structured) were varied in a within-subject factorial design as people completed realistic tasks with a simulated map system. The results identified a constellation of performance difficulties associated with speech-only map interactions, including elevated performance errors, spontaneous disfluencies, and lengthier task completion t ime-problems that declined substantially when people could interact multimodally with the map. These performance advantages also mirrored a strong user preference to interact multimodally. The error-proneness and unacceptability of speech-only input to maps was attributed in large part to people's difficulty generating spoken descriptions of spatial location. Analyses also indicated that map display format can be used to minimize performance errors and disfluencies, and map interfaces that guide users' speech toward brevity can nearly eliminate disfiuencies. Implications of this research are discussed for the design of high-performance multimodal interfaces for future map",
"title": ""
},
{
"docid": "87552ea79b92986de3ce5306ef0266bc",
"text": "This paper presents a novel secondary frequency and voltage control method for islanded microgrids based on distributed cooperative control. The proposed method utilizes a sparse communication network where each DG unit only requires local and its neighbors’ information to perform control actions. The frequency controller restores the system frequency to the nominal value while maintaining the equal generation cost increment value among DG units. The voltage controller simultaneously achieves the critical bus voltage restoration and accurate reactive power sharing. Subsequently, the case when the DG unit ac-side voltage reaches its limit value is discussed and a controller output limitation method is correspondingly provided to selectively realize the desired control objective. This paper also provides a small-signal dynamic model of the microgrid with the proposed controller to evaluate the system dynamic performance. Finally, simulation results on a microgrid test system are presented to validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "e75b7c2fcdfc19a650d7da4e6ae643a2",
"text": "With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services.",
"title": ""
},
{
"docid": "b41d56e726628673d12b9efcb715a69c",
"text": "Ten new phenylpropanoid glucosides, tadehaginosides A-J (1-10), and the known compound tadehaginoside (11) were obtained from Tadehagi triquetrum. These phenylpropanoid glucosides were structurally characterized through extensive physical and chemical analyses. Compounds 1 and 2 represent the first set of dimeric derivatives of tadehaginoside with an unusual bicyclo[2.2.2]octene skeleton, whereas compounds 3 and 4 contain a unique cyclobutane basic core in their carbon scaffolds. The effects of these compounds on glucose uptake in C2C12 myotubes were evaluated. Compounds 3-11, particularly 4, significantly increased the basal and insulin-elicited glucose uptake. The results from molecular docking, luciferase analyses, and ELISA indicated that the increased glucose uptake may be due to increases in peroxisome proliferator-activated receptor γ (PPARγ) activity and glucose transporter-4 (GLUT-4) expression. These results indicate that the isolated phenylpropanoid glucosides, particularly compound 4, have the potential to be developed into antidiabetic compounds.",
"title": ""
},
{
"docid": "b97208934c9475bc9d9bb3a095826a15",
"text": "Article history: Received 12 February 2014 Received in revised form 13 August 2014 Accepted 29 August 2014 Available online 8 September 2014",
"title": ""
},
{
"docid": "2c226c7be6acf725190c72a64bfcdf91",
"text": "The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and industries. The blockchain network was originated from the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and datadriven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-ofthe-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.",
"title": ""
},
{
"docid": "d87f336cc82cbd29df1f04095d98a7fb",
"text": "The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which “when a measure becomes a target, it ceases to be a good measure.” In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success. Multimedia Links I Interactive Data Visualization I Code Tutorials I Fields-of-Study Features Table",
"title": ""
},
{
"docid": "1fba9ed825604e8afde8459a3d3dc0c0",
"text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.",
"title": ""
},
{
"docid": "b75336a7470fe2b002e742dbb6bfa8d5",
"text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.",
"title": ""
},
{
"docid": "238620ca0d9dbb9a4b11756630db5510",
"text": "this planet and many oceanic and maritime applications seem relatively slow in exploiting the state-of-the-art info-communication technologies. The natural and man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like sensor networks as an economically viable alternative to currently adopted and costly methods used in seismic monitoring, structural health monitoring, installation and mooring, etc. Underwater sensor networks (UWSNs) are the enabling technology for wide range of applications like monitoring the strong influences and impact of climate regulation, nutrient production, oil retrieval and transportation The underwater environment differs from the terrestrial radio environment both in terms of its energy costs and channel propagation phenomena. The underwater channel is characterized by long propagation times and frequency-dependent attenuation that is highly affected by the distance between nodes as well as by the link orientation. Some of other issues in which UWSNs differ from terrestrial are limited bandwidth, constrained battery power, more failure of sensors because of fouling and corrosion, etc. This paper presents several fundamental key aspects and architectures of UWSNs, emerging research issues of underwater sensor networks and exposes the researchers into networking of underwater communication devices for exciting ocean monitoring and exploration applications. I. INTRODUCTION The Earth is a water planet. Around 70% of the surface of earth is covered by water. This is largely unexplored area and recently it has fascinated humans to explore it. Natural or man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like wireless sensor",
"title": ""
},
{
"docid": "85657981b55e3a87e74238cd373b3db6",
"text": "INTRODUCTION\nLung cancer mortality rates remain at unacceptably high levels. Although mitochondrial dysfunction is a characteristic of most tumor types, mitochondrial dynamics are often overlooked. Altered rates of mitochondrial fission and fusion are observed in lung cancer and can influence metabolic function, proliferation and cell survival.\n\n\nAREAS COVERED\nIn this review, the authors outline the mechanisms of mitochondrial fission and fusion. They also identify key regulatory proteins and highlight the roles of fission and fusion in metabolism and other cellular functions (e.g., proliferation, apoptosis) with an emphasis on lung cancer and the interaction with known cancer biomarkers. They also examine the current therapeutic strategies reported as altering mitochondrial dynamics and review emerging mitochondria-targeted therapies.\n\n\nEXPERT OPINION\nMitochondrial dynamics are an attractive target for therapeutic intervention in lung cancer. Mitochondrial dysfunction, despite its molecular heterogeneity, is a common abnormality of lung cancer. Targeting mitochondrial dynamics can alter mitochondrial metabolism, and many current therapies already non-specifically affect mitochondrial dynamics. A better understanding of mitochondrial dynamics and their interaction with currently identified cancer 'drivers' such as Kirsten-Rat Sarcoma Viral Oncogene homolog will lead to the development of novel therapeutics.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "809384abcd6e402c1b30c3d2dfa75aa1",
"text": "Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”1 Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importanceaspsychiatricresearchfocusedonpharmacology,genomics, and neuroscience, and much of psychiatric practicehasbecomeaseriesofbriefclinical interactionsfocused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools. Some recent changes in both research and practice are promising. The National Institute of Mental Health has led an effort to create a new diagnostic approach for researchers that is intended to combine biological, behavioral, and social factors to create “precision medicine for psychiatry.”2 Although this Research Domain Criteria project has been controversial, the ensuing debate has been",
"title": ""
},
{
"docid": "64d3ecaa2f9e850cb26aac0265260aff",
"text": "The case of the Frankfurt Airport attack in 2011 in which a 21-year-old man shot several U.S. soldiers, murdering 2 U.S. airmen and severely wounding 2 others, is assessed with the Terrorist Radicalization Assessment Protocol (TRAP-18). The study is based on an extensive qualitative analysis of investigation and court files focusing on the complex interconnection among offender personality, specific opportunity structures, and social contexts. The role of distal psychological factors and proximal warning behaviors in the run up to the deed are discussed. Although in this case the proximal behaviors of fixation on a cause and identification as a “soldier” for the cause developed over years, we observed only a very brief and accelerated pathway toward the violent act. This represents an important change in the demands placed upon threat assessors.",
"title": ""
}
] |
scidocsrr
|
f97d72f8e43ed080e21db780ff110aa4
|
Tropical rat mites (Ornithonyssus bacoti) - serious ectoparasites.
|
[
{
"docid": "5d7d7a49b254e08c95e40a3bed0aa10e",
"text": "Five mentally handicapped individuals living in a home for disabled persons in Southern Germany were seen in our outpatient department with pruritic, red papules predominantly located in groups on the upper extremities, neck, upper trunk and face. Over several weeks 40 inhabitants and 5 caretakers were affected by the same rash. Inspection of their home and the sheds nearby disclosed infestation with rat populations and mites. Finally the diagnosis of tropical rat mite dermatitis was made by the identification of the arthropod Ornithonyssus bacoti or so-called tropical rat mite. The patients were treated with topical corticosteroids and antihistamines. After elimination of the rats and disinfection of the rooms by a professional exterminator no new cases of rat mite dermatitis occurred. The tropical rat mite is an external parasite occurring on rats, mice, gerbils, hamsters and various other small mammals. When the principal animal host is not available, human beings can become the victim of mite infestation.",
"title": ""
}
] |
[
{
"docid": "447e62529ed6b1b428e6edd78aabb637",
"text": "Dexterity robotic hands can (Cummings, 1996) greatly enhance the functionality of humanoid robots, but the making of such hands with not only human-like appearance but also the capability of performing the natural movement of social robots is a challenging problem. The first challenge is to create the hand’s articulated structure and the second challenge is to actuate it to move like a human hand. A robotic hand for humanoid robot should look and behave human like. At the same time, it also needs to be light and cheap for widely used purposes. We start with studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints to our hand model. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready-to-use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degree of Freedom (DOFs). It is actuated by only six small size actuators. The wrist connecting part is also integrated into the hand model and could be customized for different robots such as Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot’s sleeve and the whole robotic hand platform will not cause extra load to her arm as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand which is 348 g. The paper also shows our test results with and without silicon artificial hand skin, and on Nadine robot.",
"title": ""
},
{
"docid": "7d0dfce24bd539cb790c0c25348d075d",
"text": "When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of X conditional on Y = 1, where X stands for the feature and Y the label. Most existing algorithms are optimally designed under the assumption. However, for many realworld applications, the observed positive examples are dependent on the conditional probability P (Y = 1|X) and should be sampled biasedly. In this paper, we assume that a positive example with a higher P (Y = 1|X) is more likely to be labelled and propose a probabilistic-gap based PU learning algorithms. Specically, by treating the unlabelled data as noisy negative examples, we could automatically label a group positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classier with a consistency guarantee. e relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. e proposed algorithm is model-free and thus do not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets. ∗UBTECH Sydney Articial Intelligence Centre and the School of Information Technologies, Faculty of Engineering and Information Technologies, e University of Sydney, Darlington, NSW 2008, Australia, fehe7727@uni.sydney.edu.au; tongliang.liu@sydney.edu.au; dacheng.tao@sydney.edu.au. †Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia, geo.webb@monash.edu. 1 ar X iv :1 80 8. 02 18 0v 1 [ cs .L G ] 7 A ug 2 01 8",
"title": ""
},
{
"docid": "af0178d0bb154c3995732e63b94842ca",
"text": "Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.",
"title": ""
},
{
"docid": "b4ac5df370c0df5fdb3150afffd9158b",
"text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.",
"title": ""
},
{
"docid": "7fe0c40d6f62d24b4fb565d3341c1422",
"text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.",
"title": ""
},
{
"docid": "f01a1679095a163894660cb0748334d3",
"text": "We present a novel approach for event extraction and abstraction from movie descriptions. Our event frame consists of ‘who”, “did what” “to whom”, “where”, and “when”. We formulate our problem using a recurrent neural network, enhanced with structural features extracted from syntactic parser, and trained using curriculum learning by progressively increasing the difficulty of the sentences. Our model serves as an intermediate step towards question answering systems, visual storytelling, and story completion tasks. We evaluate our approach on MovieQA dataset.",
"title": ""
},
{
"docid": "130efef512294d14094a900693efebfd",
"text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.",
"title": ""
},
{
"docid": "c8e23bc60783125d5bf489cddd3e8290",
"text": "An efficient probabilistic algorithm for the concurrent mapping and localization problem that arises in mobile robotics is presented. The algorithm addresses the problem in which a team of robots builds a map on-line while simultaneously accommodating errors in the robots’ odometry. At the core of the algorithm is a technique that combines fast maximum likelihood map growing with a Monte Carlo localizer that uses particle representations. The combination of both yields an on-line algorithm that can cope with large odometric errors typically found when mapping environments with cycles. The algorithm can be implemented in a distributed manner on multiple robot platforms, enabling a team of robots to cooperatively generate a single map of their environment. Finally, an extension is described for acquiring three-dimensional maps, which capture the structure and visual appearance of indoor environments in three dimensions. KEY WORDS—mobile robotics, map acquisition, localization, robotic exploration, multi-robot systems, threedimensional modeling",
"title": ""
},
{
"docid": "b69f7c0db77c3012ae5e550b23a313fb",
"text": "Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce the image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite, whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmical transformation in order to convert multiplicative speckle noise into additive noise. The common assumption made in a dominant number of such studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural. Moreover, it may lead to inadequate performance of the speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain), so that the noise in the log-transformation domain becomes very close in its behavior to a white Gaussian noise. As a result, the preprocessing allows filtering methods based on assuming the noise to be white and Gaussian, to perform in nearly optimal conditions. The study evaluates performances of three different, nonlinear filters - wavelet denoising, total variation filtering, and anisotropic diffusion - and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "e36e0c8659b8bae3acf0f178fce362c3",
"text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.",
"title": ""
},
{
"docid": "56c5ec77f7b39692d8b0d5da0e14f82a",
"text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.",
"title": ""
},
{
"docid": "9d37baf5ce33826a59cc7bd0fd7955c0",
"text": "A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.",
"title": ""
},
{
"docid": "d46434bbbf73460bf422ebe4bd65b590",
"text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.",
"title": ""
},
{
"docid": "7830c4737197e84a247349f2e586424e",
"text": "This paper describes VPL, a Virtual Programming Lab module for Moodle, developed at the University of Las Palmas of Gran Canaria (ULPGC) and released for free uses under GNU/GPL license. For the students, it is a simple development environment with auto evaluation capabilities. For the instructors, it is a students' work management system, with features to facilitate the preparation of assignments, manage the submissions, check for plagiarism, and do assessments with the aid of powerful and flexible assessment tools based on program testing, all of that being independent of the programming language used for the assignments and taken into account critical security issues.",
"title": ""
},
{
"docid": "1241bc6b7d3522fe9e285ae843976524",
"text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.",
"title": ""
},
{
"docid": "51cd0219f96b4ae6984df37ed439bbaa",
"text": "This paper introduces an unsupervised framework to extract semantically rich features for video representation. Inspired by how the human visual system groups objects based on motion cues, we propose a deep convolutional neural network that disentangles motion, foreground and background information. The proposed architecture consists of a 3D convolutional feature encoder for blocks of 16 frames, which is trained for reconstruction tasks over the first and last frames of the sequence. A preliminary supervised experiment was conducted to verify the feasibility of proposed method by training the model with a fraction of videos from the UCF-101 dataset taking as ground truth the bounding boxes around the activity regions. Qualitative results indicate that the network can successfully segment foreground and background in videos as well as update the foreground appearance based on disentangled motion features. The benefits of these learned features are shown in a discriminative classification task, where initializing the network with the proposed pretraining method outperforms both random initialization and autoencoder pretraining. Our model and source code are publicly available at https: //allenovo.github.io/cvprw17_webpage/ .",
"title": ""
},
{
"docid": "ad9a94a4deafceedccdd5f4164cde293",
"text": "In this paper, we investigate the application of machine learning techniques and word embeddings to the task of Recognizing Textual Entailment (RTE) in Social Media. We look at a manually labeled dataset (Lendvai et al., 2016) consisting of user generated short texts posted on Twitter (tweets) and related to four recent media events (the Charlie Hebdo shooting, the Ottawa shooting, the Sydney Siege, and the German Wings crash) and test to what extent neural techniques and embeddings are able to distinguish between tweets that entail or contradict each other or that claim unrelated things. We obtain comparable results to the state of the art in a train-test setting, but we show that, due to the noisy aspect of the data, results plummet in an evaluation strategy crafted to better simulate a real-life train-test scenario.",
"title": ""
},
{
"docid": "896fe681f79ef025a6058a51dd4f19c0",
"text": "Semantic parsing is the construction of a complete, formal, symbolic meaning representation of a sentence. While it is crucial to natural language understanding, the problem of semantic parsing has received relatively little attention from the machine learning community. Recent work on natural language understanding has mainly focused on shallow semantic analysis, such as word-sense disambiguation and semantic role labeling. Semantic parsing, on the other hand, involves deep semantic analysis in which word senses, semantic roles and other components are combined to produce useful meaning representations for a particular application domain (e.g. database query). Prior research in machine learning for semantic parsing is mainly based on inductive logic programming or deterministic parsing, which lack some of the robustness that characterizes statistical learning. Existing statistical approaches to semantic parsing, however, are mostly concerned with relatively simple application domains in which a meaning representation is no more than a single semantic frame. In this proposal, we present a novel statistical approach to semantic parsing, WASP, which can handle meaning representations with a nested structure. The WASP algorithm learns a semantic parser given a set of sentences annotated with their correct meaning representations. The parsing model is based on the synchronous context-free grammar, where each rule maps a natural-language substring to its meaning representation. The main innovation of the algorithm is its use of state-of-the-art statistical machine translation techniques. A statistical word alignment model is used for lexical acquisition, and the parsing model itself can be seen as an instance of a syntax-based translation model. In initial evaluation on several real-world data sets, we show that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring similar amount of supervision, and shows better robustness to variations in task complexity and word order. In future work, we intend to pursue several directions in developing accurate semantic parsers for a variety of application domains. This will involve exploiting prior knowledge about the natural-language syntax and the application domain. We also plan to construct a syntax-aware word-based alignment model for lexical acquisition. Finally, we will generalize the learning algorithm to handle contextdependent sentences and accept noisy training data.",
"title": ""
},
{
"docid": "6a455fd9c86feb287a3c5a103bb681de",
"text": "This paper presents two approaches to semantic search by incorporating Linked Data annotations of documents into a Generalized Vector Space Model. One model exploits taxonomic relationships among entities in documents and queries, while the other model computes term weights based on semantic relationships within a document. We publish an evaluation dataset with annotated documents and queries as well as user-rated relevance assessments. The evaluation on this dataset shows significant improvements of both models over traditional keyword based search.",
"title": ""
}
] |
scidocsrr
|
df7ea4f56972e28521968146f39b8ee3
|
Machine Learning-based Software Testing: Towards a Classification Framework
|
[
{
"docid": "112ecbb8547619577962298fbe65eae1",
"text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box, Category-Partition testing, we propose a methodology and a tool based on machine learning that has shown promising results on a case study involving students as testers. 2009 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "b886b54f77168eab82e449b7e5cd3aac",
"text": "BACKGROUND\nLow desire is the most common sexual problem in women at midlife. Prevalence data are limited by lack of validated instruments or exclusion of un-partnered or sexually inactive women.\n\n\nAIM\nTo document the prevalence of and factors associated with low desire, sexually related personal distress, and hypoactive sexual desire dysfunction (HSDD) using validated instruments.\n\n\nMETHODS\nCross-sectional, nationally representative, community-based sample of 2,020 Australian women 40 to 65 years old.\n\n\nOUTCOMES\nLow desire was defined as a score no higher than 5.0 on the desire domain of the Female Sexual Function Index (FSFI); sexually related personal distress was defined as a score of at least 11.0 on the Female Sexual Distress Scale-Revised; and HSDD was defined as a combination of these scores. The Menopause Specific Quality of Life Questionnaire was used to document menopausal vasomotor symptoms. The Beck Depression Inventory-II was used to identify moderate to severe depressive symptoms (score ≥ 20).\n\n\nRESULTS\nThe prevalence of low desire was 69.3% (95% CI = 67.3-71.3), that of sexually related personal distress was 40.5% (95% CI = 38.4-42.6), and that of HSDD was 32.2% (95% CI = 30.1-34.2). Of women who were not partnered or sexually active, 32.4% (95% CI = 24.4-40.2) reported sexually related personal distress. Factors associated with HSDD in an adjusted logistic regression model included being partnered (odds ratio [OR] = 3.30, 95% CI = 2.46-4.41), consuming alcohol (OR = 1.48, 95% CI = 1.16-1.89), vaginal dryness (OR = 2.08, 95% CI = 1.66-2.61), pain during or after intercourse (OR = 1.63, 95% CI = 1.27-2.09), moderate to severe depressive symptoms (OR = 2.69, 95% CI 1.99-3.64), and use of psychotropic medication (OR = 1.42, 95% CI = 1.10-1.83). Vasomotor symptoms were not associated with low desire, sexually related personal distress, or HSDD.\n\n\nCLINICAL IMPLICATIONS\nGiven the high prevalence, clinicians should screen midlife women for HSDD.\n\n\nSTRENGTHS AND LIMITATIONS\nStrengths include the large size and representative nature of the sample and the use of validated tools. Limitations include the requirement to complete a written questionnaire in English. Questions within the FSFI limit the applicability of FSFI total scores, but not desire domain scores, in recently sexually inactive women, women without a partner, and women who do not engage in penetrative intercourse.\n\n\nCONCLUSIONS\nLow desire, sexually related personal distress, and HSDD are common in women at midlife, including women who are un-partnered or sexually inactive. Some factors associated with HSDD, such as psychotropic medication use and vaginal dryness, are modifiable or can be treated with safe and effective therapies. Worsley R, Bell RJ, Gartoulla P, Davis SR. Prevalence and Predictors of Low Sexual Desire, Sexually Related Personal Distress, and Hypoactive Sexual Desire Dysfunction in a Community-Based Sample of Midlife Women. J Sex Med 2017;14:675-686.",
"title": ""
},
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "8d4bf1b8b45bae6c506db5339e6d9025",
"text": "Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrixmatrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depend on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.",
"title": ""
},
{
"docid": "cb26bb277afc6d521c4c5960b35ed77d",
"text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.",
"title": ""
},
{
"docid": "6131fdbfe28aaa303b1ee4c29a65f766",
"text": "Destination prediction is an essential task for many emerging location based applications such as recommending sightseeing places and targeted advertising based on destination. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, existing techniques using this approach suffer from the “data sparsity problem”, i.e., the available historical trajectories is far from being able to cover all possible trajectories. This problem considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) algorithm to address the data sparsity problem. SubSyn algorithm first decomposes historical trajectories into sub-trajectories comprising two neighbouring locations, and then connects the sub-trajectories into “synthesised” trajectories. The number of query trajectories that can have predicted destinations is exponentially increased by this means. Experiments based on real datasets show that SubSyn algorithm can predict destinations for up to ten times more query trajectories than a baseline algorithm while the SubSyn prediction algorithm runs over two orders of magnitude faster than the baseline algorithm. In this paper, we also consider the privacy protection issue in case an adversary uses SubSyn algorithm to derive sensitive location information of users. We propose an efficient algorithm to select a minimum number of locations a user has to hide on her trajectory in order to avoid privacy leak. Experiments also validate the high efficiency of the privacy protection algorithm.",
"title": ""
},
{
"docid": "aefade278a0af130e0c7923b704e2ee1",
"text": "Prediction of the risk in patients with upper gastrointestinal bleeding has been the subject of different studies for several decades. This study showed the significance of Forrest classification, used in initial endoscopic investigation for evaluation of bleeding lesion, for the prediction of rebleeding. Rockall and Blatchford risk score systems evaluate certain clinical, biochemical and endoscopic variables significant for the prediction of rebleeding as well as the final outcome of disease. The percentage of rebleeding in the group of studied patients in accordance with Forrest classification showed that the largest number of patients belonged to the FIIb group. The predictive evaluation of initial and definitive Rockall score was significantly associated with percentage of rebleeding, while Blatchfor score had boundary significance. Acta Medica Medianae 2007;46(4):38-43.",
"title": ""
},
{
"docid": "865cfae2da5ad3d1d10d21b1defdc448",
"text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. Easily accessible samples in particular liquid biopsies (body fluids), such as blood, saliva or urine, are preferred for serial tumor biopsies.Although monitoring of immune and tumor responses prior, during and post immunotherapy has led to significant advances of patients' outcome, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.",
"title": ""
},
{
"docid": "0525d981721fc8a85bb4daef78b6cbe9",
"text": "Cloud computing environments provide on-demand resource provisioning, allowing applications to elastically scale. However, application benchmarks currently being used to test cloud management systems are not designed for this purpose. This results in resource underprovisioning and quality-of-service (QoS) violations when systems tested using these benchmarks are deployed in production environments. We present C-MART, a benchmark designed to emulate a modern web application running in a cloud computing environment. It is designed using the cloud computing paradigm of elastic scalability at every application tier and utilizes modern web-based technologies such as HTML5, AJAX, jQuery, and SQLite. C-MART consists of a web application, client emulator, deployment server, and scaling API. The deployment server automatically deploys and configures the test environment in orders of magnitude less time than current benchmarks. The scaling API allows users to define and provision their own customized datacenter. The client emulator generates the web workload for the application by emulating complex and varied client behaviors, including decisions based on page content and prior history. We show that C-MART can detect problems in management systems that previous benchmarks fail to identify, such as an increase from 4.4 to 50 percent error in predicting server CPU utilization and resource underprovisioning in 22 percent of QoS measurements.",
"title": ""
},
{
"docid": "c988dc0e9be171a5fcb555aedcdf67e3",
"text": "Online social networks, such as Facebook, are increasingly utilized by many people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is meant to be private. Yet it is possible to use learning algorithms on released data to predict private information. In this paper, we explore how to launch inference attacks using released social networking data to predict private information. We then devise three possible sanitization techniques that could be used in various situations. Then, we explore the effectiveness of these techniques and attempt to use methods of collective inference to discover sensitive attributes of the data set. We show that we can decrease the effectiveness of both local and relational classification algorithms by using the sanitization methods we described.",
"title": ""
},
{
"docid": "f262aba2003f986012bbec1a9c2fcb83",
"text": "Hemiplegic migraine is a rare form of migraine with aura that involves motor aura (weakness). This type of migraine can occur as a sporadic or a familial disorder. Familial forms of hemiplegic migraine are dominantly inherited. Data from genetic studies have implicated mutations in genes that encode proteins involved in ion transportation. However, at least a quarter of the large families affected and most sporadic cases do not have a mutation in the three genes known to be implicated in this disorder, suggesting that other genes are still to be identified. Results from functional studies indicate that neuronal hyperexcitability has a pivotal role in the pathogenesis of hemiplegic migraine. The clinical manifestations of hemiplegic migraine range from attacks with short-duration hemiparesis to severe forms with recurrent coma and prolonged hemiparesis, permanent cerebellar ataxia, epilepsy, transient blindness, or mental retardation. Diagnosis relies on a careful patient history and exclusion of potential causes of symptomatic attacks. The principles of management are similar to those for common varieties of migraine, except that vasoconstrictors, including triptans, are historically contraindicated but are often used off-label to stop the headache, and prophylactic treatment can include lamotrigine and acetazolamide.",
"title": ""
},
{
"docid": "1b60ded506c85edd798fe0759cce57fa",
"text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.",
"title": ""
},
{
"docid": "67d41a84050f3bf9bc004e7c1787a2bc",
"text": "Facial aging is a complex process individualized by interaction with exogenous and endogenous factors. The upper lip is one of the facial components by which facial attractiveness is defined. Upper lip aging is significantly influenced by maxillary bone and teeth. Aging of the cutaneous part can be aggravated by solar radiation and smoking. We provide a review about minimally invasive techniques for correction of aging signs of the upper lip with a tailored approach to patient’s characteristics. The treatment is based upon use of fillers, laser, and minor surgery. Die Alterung des Gesichts ist ein komplexer Prozess, welcher durch die Wechselwirkung exogener und endogener Faktoren individuell geprägt wird. Die Oberlippe zählt zu den fazialen Komponenten, welche die Attraktivität des Gesichts definieren. Die Alterung der Oberlippe wird durch den Oberkieferknochen und die Zähne beeinflusst. Alterungsprozesse des kutanen Anteils können durch Sonnenbestrahlung und Rauchen aggraviert werden. Die Autoren stellen eine Übersicht zur den minimalinvasiven Verfahren der Korrektur altersbedingter Veränderungen der Oberlippe mit Individualisierung je nach Patientenmerkmalen vor. Die Technik basiert auf der Nutzung von Fillern, Lasern und kleineren chirurgischen Eingriffen.",
"title": ""
},
{
"docid": "572be2eb18bd929c2b4e482f7d3e0754",
"text": "• Supervised learning --where the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate the behavior of) a function which maps a vector into one of several classes by looking at several input-output examples of the function. • Unsupervised learning --which models a set of inputs: labeled examples are not available. • Semi-supervised learning --which combines both labeled and unlabeled examples to generate an appropriate function or classifier. • Reinforcement learning --where the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm. • Transduction --similar to supervised learning, but does not explicitly construct a function: instead, tries to predict new outputs based on training inputs, training outputs, and new inputs. • Learning to learn --where the algorithm learns its own inductive bias based on previous experience.",
"title": ""
},
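A minimal sketch contrasting the first two paradigms above on the same data, using scikit-learn; the dataset, models and parameters are arbitrary choices for illustration, not part of the passage:

```python
# Hypothetical illustration: supervised vs. unsupervised learning on the same data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: learn a mapping from inputs X to known labels y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the model only structures the inputs.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments for first 10 points:", km.labels_[:10])
```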
{
"docid": "47251c2ce233226b015a2482847dc48d",
"text": "Recent advances in computer graphics have made it possible to visualize mathematical models of biological structures and processes with unprecedented realism. The resulting images, animations, and interactive systems are useful as research and educational tools in developmental biology and ecology. Prospective applications also include computer-assisted landscape architecture, design of new varieties of plants, and crop yield prediction. In this paper we revisit foundations of the applications of L-systems to the modeling of plants, and we illustrate them using recently developed sample models.",
"title": ""
},
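The passage above builds on L-system rewriting; as a rough sketch (using the commonly cited bracketed-plant rules as an example, not necessarily those used in the paper), the string-rewriting core can be expressed in a few lines:

```python
# A minimal deterministic (D0L) L-system: repeated parallel rewriting of a string.
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)  # rewrite every symbol in parallel
    return s

# Classic bracketed-plant rules (F = draw forward, +/- = turn, [ ] = push/pop turtle state).
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(lsystem("X", rules, 3)[:80], "...")
```

A turtle-graphics interpreter would then turn the resulting string into geometry; that rendering step is omitted here.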
{
"docid": "9e5c123b6f744037436e0d5c917e8640",
"text": "Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference and search large, divergent collections of datasets, and (b) a platform, DATAHUB, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale.",
"title": ""
},
{
"docid": "ea5697d417fe154be77d941c19d8a86e",
"text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "19d8b6ff70581307e0a00c03b059964f",
"text": "We propose a novel approach for analysing time series using complex network theory. We identify the recurrence matrix (calculated from time series) with the adjacency matrix of a complex network and apply measures for the characterisation of complex networks to this recurrence matrix. By using the logistic map, we illustrate the potential of these complex network measures for the detection of dynamical transitions. Finally, we apply the proposed approach to a marine palaeo-climate record and identify the subtle changes to the climate regime.",
"title": ""
},
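A minimal sketch of the approach described above, assuming a simple threshold recurrence criterion and no delay embedding (both simplifications relative to the paper); the logistic-map parameters and threshold are arbitrary:

```python
# Sketch: recurrence matrix of a logistic-map series, read as a network adjacency matrix.
import numpy as np

# Generate a time series from the logistic map x_{t+1} = r * x_t * (1 - x_t).
r, n = 3.9, 500
x = np.empty(n); x[0] = 0.4
for t in range(n - 1):
    x[t + 1] = r * x[t] * (1 - x[t])

# Recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps.
eps = 0.05
R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
np.fill_diagonal(R, 0)           # remove self-links before treating R as an adjacency matrix

degree = R.sum(axis=1)           # a simple complex-network measure per node (time point)
print("mean degree:", degree.mean(), "edge density:", R.sum() / (n * (n - 1)))
```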
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
}
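For orientation, a sketch of the naive (ε, δ)-PAC strategy that the improved bound above is contrasted with: pull every arm the same Hoeffding-derived number of times and keep the best empirical mean. The reward model and constants are illustrative, and the paper's improved algorithm (which removes the log n factor) is not shown:

```python
# Naive (eps, delta)-PAC best-arm selection: O((n/eps^2) * log(n/delta)) total pulls.
import math, random

def naive_pac_best_arm(pull, n_arms, eps, delta):
    # Hoeffding-based sample size per arm so every empirical mean is eps/2-accurate w.h.p.
    m = math.ceil((2.0 / eps**2) * math.log(2.0 * n_arms / delta))
    means = []
    for a in range(n_arms):
        means.append(sum(pull(a) for _ in range(m)) / m)
    return max(range(n_arms), key=lambda a: means[a])

# Toy bandit with Bernoulli rewards (arm 2 is best).
probs = [0.3, 0.5, 0.7, 0.4]
best = naive_pac_best_arm(lambda a: 1.0 if random.random() < probs[a] else 0.0,
                          len(probs), eps=0.1, delta=0.05)
print("selected arm:", best)
```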
] |
scidocsrr
|
7405964a85c0b239ba7e1c7f80564e15
|
A Kernel Fuzzy c-Means Clustering-Based Fuzzy Support Vector Machine Algorithm for Classification Problems With Outliers or Noises
|
[
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
}
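As a rough, illustration-only sketch of applying support vector regression to a lagged time series (not the paper's futures data, feature set, or adaptive-parameter scheme), assuming scikit-learn:

```python
# Illustrative sketch: SVR on lagged values of a synthetic price-like series.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 1, 600))        # random-walk stand-in for a price series

lags = 5
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]                                 # predict the next value from the last 5

split = 500
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```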
] |
[
{
"docid": "7fd33ebd4fec434dba53b15d741fdee4",
"text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.",
"title": ""
},
{
"docid": "1cc586730cf0c1fd57cf6ff7548abe24",
"text": "Researchers have proposed various methods to extract 3D keypoints from the surface of 3D mesh models over the last decades, but most of them are based on geometric methods, which lack enough flexibility to meet the requirements for various applications. In this paper, we propose a new method on the basis of deep learning by formulating the 3D keypoint detection as a regression problem using deep neural network (DNN) with sparse autoencoder (SAE) as our regression model. Both local information and global information of a 3D mesh model in multi-scale space are fully utilized to detect whether a vertex is a keypoint or not. SAE can effectively extract the internal structure of these two kinds of information and formulate highlevel features for them, which is beneficial to the regression model. Three SAEs are used to formulate the hidden layers of the DNN and then a logistic regression layer is trained to process the high-level features produced in the third SAE. Numerical experiments show that the proposed DNN based 3D keypoint detection algorithm outperforms current five state-of-the-art methods for various 3D mesh models.",
"title": ""
},
{
"docid": "c8be0e643c72c7abea1ad758ac2b49a8",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "96bddddd86976f4dff0b984ef062704b",
"text": "How do the structures of the medial temporal lobe contribute to memory? To address this question, we examine the neurophysiological correlates of both recognition and associative memory in the medial temporal lobe of humans, monkeys, and rats. These cross-species comparisons show that the patterns of mnemonic activity observed throughout the medial temporal lobe are largely conserved across species. Moreover, these findings show that neurons in each of the medial temporal lobe areas can perform both similar as well as distinctive mnemonic functions. In some cases, similar patterns of mnemonic activity are observed across all structures of the medial temporal lobe. In the majority of cases, however, the hippocampal formation and surrounding cortex signal mnemonic information in distinct, but complementary ways.",
"title": ""
},
{
"docid": "efd6856e774b258858c43d7746639317",
"text": "In this paper, we propose a vision-based robust vehicle distance estimation algorithm that supports motorists to rapidly perceive relative distance of oncoming and passing vehicles thereby minimizing the risk of hazardous circumstances. And, as it is expected, the silhouettes of background stationary objects may appear in the motion scene, which pop-up due to motion of the camera, which is mounted on dashboard of the host vehicle. To avoid the effect of false positive detection of stationary objects and to determine the ego motion a new Morphological Strip Matching Algorithm and Recursive Stencil Mapping Algorithm(MSM-RSMA)is proposed. A new series of stencils are created where non-stationary objects are taken off after detecting stationary objects by applying a shape matching technique to each image strip pair. Then the vertical shift is estimated recursively with new stencils with identified stationary background objects. Finally, relative comparison of known templates are used to estimate the distance, which is further certified by value obtained for vertical shift. We apply analysis of relative dimensions of bounding box of the detected vehicle with relevant templates to calculate the relative distance. We prove that our method is capable of providing a comparatively fast distance estimation while keeping its robustness in different environments changes.",
"title": ""
},
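The relative-dimension idea mentioned above can be illustrated with a plain pinhole-camera calculation; the focal length and template height below are made-up numbers, and this is not the MSM-RSMA algorithm itself:

```python
# Pinhole-camera range estimate: apparent size shrinks with distance, so a known
# template size plus the detected bounding-box height gives a distance estimate.
def estimate_distance_m(focal_length_px, template_height_m, bbox_height_px):
    # distance = f * real_height / apparent_height (pinhole projection)
    return focal_length_px * template_height_m / bbox_height_px

focal_length_px = 800.0       # assumed camera focal length in pixels
vehicle_height_m = 1.5        # assumed template height of a typical passenger car
for bbox_h in (150, 75, 30):  # detected bounding-box heights in pixels
    print(bbox_h, "px ->", round(estimate_distance_m(focal_length_px, vehicle_height_m, bbox_h), 1), "m")
```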
{
"docid": "01472364545392cad69b9c7e1f65f4bb",
"text": "The designing of power transmission network is a difficult task due to the complexity of power system. Due to complexity in the power system there is always a loss of the stability due to the fault. Whenever a fault is intercepted in system, the whole system goes to severe transients. These transients cause oscillation in phase angle which leads poor power quality. The nature of oscillation is increasing instead being sustained, which leads system failure in form of generator damage. To reduce and eliminate the unstable oscillations one needs to use a stabilizer which can generate a perfect compensatory signal in order to minimize the harmonics generated due to instability. This paper presents a Power System stabilizer to reduce oscillations due to small signal disturbance. Additionally, a hybrid approach is proposed using FOPID stabilizer with the PSS connected SMIB. Genetic algorithm (GA), Particle swarm optimization (PSO) and Grey Wolf Optimization (GWO) are used for the parameter tuning of the stabilizer. Reason behind the use of GA, PSO and GWO instead of conventional methods is that it search the parameter heuristically, which leads better results. The efficiency of proposed approach is observed by rotor angle and power angle deviations in the SMIB system.",
"title": ""
},
{
"docid": "e7519a25915e5bb5359d0365513cad40",
"text": "Statistical and machine learning algorithms are increasingly used to inform decisions that have large impacts on individuals’ lives. Examples include hiring [8], predictive policing [13], pre-trial risk assessment of recidivism[6, 2], and risk of violence while incarcerated [5]. In many of these cases, the outcome variable to which the predictive models are trained is observed with bias with respect to some legally protected classes. For example, police records do not constitute a representative sample of all crimes [12]. In particular, black drug users are arrested at a rate that is several times that of white drug users despite the fact that black and white populations are estimated by public health officials to use drugs at roughly the same rate [11]. Algorithms trained on such data will produce predictions that are biased against groups that are disproportionately represented in the training data. Several approaches have been proposed to correct unfair predictive models. The simplest approach is to exclude the protected variable(s) from the analysis, under the belief that doing so will result in “race-neutral” predictions [14]. Of course, simply excluding a protected variable is insufficient to avoid discriminatory predictions, as any included variables that are correlated with the protected variables still contain information about the protected characteristic. In the case of linear models, this phenomenon is well-known, and is referred to as omitted variable bias [4]. Another approach that has been proposed in the computer science literature is to remove information about the protected variables from the set of covariates to be used in predictive models [7, 3]. A third alternative is to modify the outcome variable. For example, [9] use a naive Bayes classifier to rank each observation and perturb the outcome such that predictions produced by the algorithm are independent of the protected variable. A discussion of several more algorithms for binary protected and outcome variables can be found in [10]. The approach we propose is most similar to [7], though we approach the problem from a statistical modeling perspective. We define a procedure consisting of a chain of conditional models. Within this framework, both protecting and adjusting variables of arbitrary type becomes natural. Whereas previous work has been limited to protecting only binary or categorical variables and adjusting a limited number of covariates, our proposed framework allows for an arbitrary number of variables",
"title": ""
},
{
"docid": "3ca7b7b8e07eb5943d6ce2acf9a6fa82",
"text": "Excessive heat generation and occurrence of partial discharge have been observed in end-turn stress grading (SG) system in form-wound machines under PWM voltage. In this paper, multi-winding stress grading (SG) system is proposed as a method to change resistance of SG per length. Although the maximum field at the edge of stator and CAT are in a trade-off relationship, analytical results suggest that we can suppress field and excessive heat generation at both stator and CAT edges by multi-winding of SG and setting the length of CAT appropriately. This is also experimentally confirmed by measuring potential distribution of model bar-coil and observing partial discharge and temperature rise.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "159222cde67c2d08e0bde7996b422cd6",
"text": "Superficial thrombophlebitis of the dorsal vein of the penis, known as penile Mondor’s disease, is an uncommon genital disease. We report on a healthy 44-year-old man who presented with painful penile swelling, ecchymosis, and penile deviation after masturbation, which initially imitated a penile fracture. Thrombosis of the superficial dorsal vein of the penis without rupture of corpus cavernosum was found during surgical exploration. The patient recovered without erectile dysfunction.",
"title": ""
},
{
"docid": "1f05175a0dce51dcd7a1527dce2f1286",
"text": "The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world powerlaw graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and blockcentric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-ofthe-art distributed graph computing systems.",
"title": ""
},
{
"docid": "d761b2718cfcabe37b72768962492844",
"text": "In the most recent years, wireless communication networks have been facing a rapidly increasing demand for mobile traffic along with the evolvement of applications that require data rates of several 10s of Gbit/s. In order to enable the transmission of such high data rates, two approaches are possible in principle. The first one is aiming at systems operating with moderate bandwidths at 60 GHz, for example, where 7 GHz spectrum is dedicated to mobile services worldwide. However, in order to reach the targeted date rates, systems with high spectral efficiencies beyond 10 bit/s/Hz have to be developed, which will be very challenging. A second approach adopts moderate spectral efficiencies and requires ultra high bandwidths beyond 20 GHz. Such an amount of unregulated spectrum can be identified only in the THz frequency range, i.e. beyond 300 GHz. Systems operated at those frequencies are referred to as THz communication systems. The technology enabling small integrated transceivers with highly directive, steerable antennas becomes the key challenges at THz frequencies in face of the very high path losses. This paper gives an overview over THz communications, summarizing current research projects, spectrum regulations and ongoing standardization activities.",
"title": ""
},
{
"docid": "24fab96f67040ed6ac13ab0696b9421c",
"text": "In the past decade, resting-state functional MRI (R-fMRI) measures of brain activity have attracted considerable attention. Based on changes in the blood oxygen level-dependent signal, R-fMRI offers a novel way to assess the brain's spontaneous or intrinsic (i.e., task-free) activity with both high spatial and temporal resolutions. The properties of both the intra- and inter-regional connectivity of resting-state brain activity have been well documented, promoting our understanding of the brain as a complex network. Specifically, the topological organization of brain networks has been recently studied with graph theory. In this review, we will summarize the recent advances in graph-based brain network analyses of R-fMRI signals, both in typical and atypical populations. Application of these approaches to R-fMRI data has demonstrated non-trivial topological properties of functional networks in the human brain. Among these is the knowledge that the brain's intrinsic activity is organized as a small-world, highly efficient network, with significant modularity and highly connected hub regions. These network properties have also been found to change throughout normal development, aging, and in various pathological conditions. The literature reviewed here suggests that graph-based network analyses are capable of uncovering system-level changes associated with different processes in the resting brain, which could provide novel insights into the understanding of the underlying physiological mechanisms of brain function. We also highlight several potential research topics in the future.",
"title": ""
},
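A small sketch of the kind of graph-theoretic analysis described above, assuming a recent networkx release (for from_numpy_array) and using random data in place of regional BOLD time series; the threshold value is arbitrary:

```python
# Threshold a correlation matrix into a binary network and compute two small-world
# ingredients (clustering coefficient and characteristic path length).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.normal(size=(90, 200))            # 90 regions x 200 time points (synthetic)
corr = np.corrcoef(ts)
np.fill_diagonal(corr, 0)

A = (corr > 0.1).astype(int)               # arbitrary threshold -> binary adjacency
G = nx.from_numpy_array(A)

print("clustering coefficient:", nx.average_clustering(G))
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("characteristic path length:", nx.average_shortest_path_length(giant))
```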
{
"docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2",
"text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.",
"title": ""
},
{
"docid": "2fa3e2a710cc124da80941545fbdffa4",
"text": "INTRODUCTION\nThe use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear.\n\n\nMETHODS\nWe reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear.\n\n\nRESULTS\nThe intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001).\n\n\nDISCUSSION\nOur findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.",
"title": ""
},
{
"docid": "6f77e74cd8667b270fae0ccc673b49a5",
"text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.",
"title": ""
},
{
"docid": "569f8890a294b69d688977fc235aef17",
"text": "Traditionally, voice communication over the local loop has been provided by wired systems. In particular, twisted pair has been the standard means of connection for homes and offices for several years. However in the recent past there has been an increased interest in the use of radio access technologies in local loops. Such systems which are now popular for their ease and low cost of installation and maintenance are called Wireless in Local Loop (WLL) systems. Subscribers' demands for greater capacity has grown over the years especially with the advent of the Internet. Wired local loops have responded to these increasing demands through the use of digital technologies such as ISDN and xDSL. Demands for enhanced data rates are being faced by WLL system operators too, thus entailing efforts towards more efficient bandwidth use. Multi-hop communication has already been studied extensively in Ad hoc network environments and has begun making forays into cellular systems as well. Multi-hop communication has been proven as one of the best ways to enhance throughput in a wireless network. Through this effort we study the issues involved in multi-hop communication in a wireless local loop system and propose a novel WLL architecture called Throughput enhanced Wireless in Local Loop (TWiLL). Through a realistic simulation model we show the tremendous performance improvement achieved by TWiLL over WLL. Traditional pricing schemes employed in single hop wireless networks cannot be applied in TWiLL -- a multi-hop environment. We also propose three novel cost reimbursement based pricing schemes which could be applied in such a multi-hop environment.",
"title": ""
},
{
"docid": "81f9a52b6834095cd7be70b39af0e7f0",
"text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.",
"title": ""
},
{
"docid": "1bfab561c8391dad6f0493fa7614feba",
"text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explain classical empirical adoption curves by relating them to thresholds in",
"title": ""
},
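A minimal simulation of the threshold model described in the passage: an individual joins once the number of current rioters reaches their personal threshold, and the process is iterated to a fixed point. The example thresholds are arbitrary:

```python
# Granovetter threshold cascade: iterate activation until no one else joins.
def threshold_cascade(thresholds):
    active = set()
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if i not in active and len(active) >= t:
                active.add(i)
                changed = True
    return active

# One innovator (threshold 0) triggers a full cascade here...
print(len(threshold_cascade([0, 1, 2, 3, 4])))   # -> 5
# ...but removing a single intermediate threshold stops it after the innovator.
print(len(threshold_cascade([0, 2, 2, 3, 4])))   # -> 1
```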
{
"docid": "6fc6167d1ef6b96d239fea03b9653865",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
}
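A sketch of the PACT forward pass as described above: clip activations to a scale alpha, then quantize uniformly to k bits. The straight-through gradient, the training of alpha, and the SAWB weight binning are all omitted, and the numbers are illustrative:

```python
# PACT-style activation quantization (forward pass only).
import numpy as np

def pact_quantize(x, alpha, k):
    y = np.clip(x, 0.0, alpha)                              # PACT: clip(x, 0, alpha)
    levels = 2**k - 1
    return np.round(y / alpha * levels) * alpha / levels    # uniform k-bit quantization

acts = np.array([-0.5, 0.2, 0.9, 1.7, 3.0])
print(pact_quantize(acts, alpha=1.5, k=2))                  # 2-bit activations in [0, 1.5]
```

During training, alpha would be a learnable parameter updated by backpropagation, which is what lets the clipping range adapt to the activation statistics.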
] |
scidocsrr
|
763338ac575cee16828202cf29effc84
|
Dominant Color Embedded Markov Chain Model for Object Image Retrieval
|
[
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
}
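A much-simplified sketch of the clustering step described above, assuming scikit-learn's EM implementation (GaussianMixture) and using only color and position features on a synthetic image; the paper's texture features and per-image model selection are omitted:

```python
# Cluster pixels with EM in a joint color-position feature space (Blobworld-style idea).
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 64x64 RGB "image": left half reddish, right half greenish, plus noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3))
img[:, :32, 0] = 0.8
img[:, 32:, 1] = 0.8
img += rng.normal(0, 0.05, img.shape)

ys, xs = np.mgrid[0:64, 0:64]
features = np.column_stack([img.reshape(-1, 3),                     # color features
                            xs.ravel() / 64.0, ys.ravel() / 64.0])  # normalized position

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features).reshape(64, 64)                  # one "blob" label per pixel
print("pixels per region:", np.bincount(labels.ravel()))
```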
] |
[
{
"docid": "733b998017da30fe24521158a6aaa749",
"text": "Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing.",
"title": ""
},
{
"docid": "e51f7fde238b0896df22d196b8c59c1a",
"text": "The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions such as the grey-world and white patch assumptions. In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions found in images. To this end, images are first classified into stages (rough 3D geometry models). According to the stage models, images are divided into different regions using hard and soft segmentation. After that, the best color constancy algorithm is selected for each geometry segment. As a result, light source estimation is tuned to the global scene geometry. Our algorithm opens the possibility to estimate the remote scene illumination color, by distinguishing nearby light source from distant illuminants. Experiments on large scale image datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 14% of median angular error. When using an ideal classifier (i.e, all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 31% of median angular error compared to the best-performing single color constancy algorithm.",
"title": ""
},
{
"docid": "1cbac59380ee798a621d58a6de35361f",
"text": "With the fast development of modern power semiconductors in the last years, the development of current measurement technologies has to adapt to this evolution. The challenge for the power electronic engineer is to provide a current sensor with a high bandwidth and a high immunity against external interferences. Rogowski current transducers are popular for monitoring transient currents in power electronic applications without interferences caused by external magnetic fields. But the trend of even higher current and voltage gradients generates a dilemma regarding the Rogowski current transducer technology. On the one hand, a high current gradient requires a current sensor with a high bandwidth. On the other hand, high voltage gradients forces to use a shielding around the Rogowski coil in order to protect the measurement signal from a capacitive displacement current caused by an unavoidable capacitive coupling to the setup, which reduces the bandwidth substantially. This paper presents a new Rogowski coil design which allows to measure high current gradients close to high voltage gradients without interferences and without reducing the bandwidth by a shielding. With this new measurement technique, it is possible to solve the mentioned dilemma and to get ready to measure the current of modern power semiconductors such as SiC and GaN with a Rogowski current transducer.",
"title": ""
},
{
"docid": "1d8765a407f2b9f8728982f54ddb6ae1",
"text": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. Materials and Methods: The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. Results: We evaluate similar medical concepts across diagnosis, medication and procedure. The results show xx% relevancy between similar pairs of medical concepts. Our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. Conclusion: We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance.",
"title": ""
},
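One way to make the co-occurrence intuition concrete is a plain window-based co-occurrence count factorized with SVD; this is a stand-in sketch with toy records and hypothetical code names, not the paper's actual method or data:

```python
# Codes that appear close together in a patient's timeline end up with similar vectors.
import numpy as np

# Toy longitudinal records: each patient is a time-ordered list of codes (made up).
patients = [["dx:heart_failure", "rx:furosemide", "proc:echo"],
            ["dx:heart_failure", "rx:furosemide", "rx:lisinopril"],
            ["dx:diabetes", "rx:metformin", "proc:hba1c"],
            ["dx:diabetes", "rx:metformin", "rx:insulin"]]

vocab = sorted({c for p in patients for c in p})
idx = {c: i for i, c in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
window = 2                                   # "closely in time" = within 2 positions
for p in patients:
    for i, c in enumerate(p):
        for j in range(max(0, i - window), min(len(p), i + window + 1)):
            if j != i:
                C[idx[c], idx[p[j]]] += 1

U, S, _ = np.linalg.svd(C, full_matrices=False)
emb = U[:, :3] * S[:3]                       # 3-dimensional concept vectors

def cos(a, b):
    va, vb = emb[idx[a]], emb[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)

print(cos("dx:heart_failure", "rx:furosemide"), cos("dx:heart_failure", "rx:metformin"))
```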
{
"docid": "d0e8265bf57729b74375c9b476c4b028",
"text": "As experts in the health care of children and adolescents, pediatricians may be called on to advise legislators concerning the potential impact of changes in the legal status of marijuana on adolescents. Parents, too, may look to pediatricians for advice as they consider whether to support state-level initiatives that propose to legalize the use of marijuana for medical purposes or to decriminalize possession of small amounts of marijuana. This policy statement provides the position of the American Academy of Pediatrics on the issue of marijuana legalization, and the accompanying technical report (available online) reviews what is currently known about the relationship between adolescents' use of marijuana and its legal status to better understand how change might influence the degree of marijuana use by adolescents in the future.",
"title": ""
},
{
"docid": "776b1f07dfd93ff78e97a6a90731a15b",
"text": "Precise destination prediction of taxi trajectories can benefit many intelligent location based services such as accurate ad for passengers. Traditional prediction approaches, which treat trajectories as one-dimensional sequences and process them in single scale, fail to capture the diverse two-dimensional patterns of trajectories in different spatial scales. In this paper, we propose T-CONV which models trajectories as two-dimensional images, and adopts multi-layer convolutional neural networks to combine multi-scale trajectory patterns to achieve precise prediction. Furthermore, we conduct gradient analysis to visualize the multi-scale spatial patterns captured by T-CONV and extract the areas with distinct influence on the ultimate prediction. Finally, we integrate multiple local enhancement convolutional fields to explore these important areas deeply for better prediction. Comprehensive experiments based on real trajectory data show that T-CONV can achieve higher accuracy than the state-of-the-art methods.",
"title": ""
},
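A sketch of the core preprocessing idea above, rendering a trajectory prefix as a 2-D occupancy image that a CNN could consume; the grid size, bounding box and GPS fixes are made up:

```python
# Rasterize a (lat, lon) trajectory prefix into a 2-D occupancy grid.
import numpy as np

def trajectory_to_image(points, bbox, grid=32):
    lat_min, lat_max, lon_min, lon_max = bbox
    img = np.zeros((grid, grid), dtype=np.float32)
    for lat, lon in points:
        r = int((lat - lat_min) / (lat_max - lat_min) * (grid - 1))
        c = int((lon - lon_min) / (lon_max - lon_min) * (grid - 1))
        if 0 <= r < grid and 0 <= c < grid:
            img[r, c] = 1.0                    # mark visited cells
    return img

taxi_prefix = [(41.15, -8.61), (41.16, -8.60), (41.17, -8.59)]   # made-up GPS fixes
image = trajectory_to_image(taxi_prefix, bbox=(41.10, 41.20, -8.70, -8.50))
print(image.shape, int(image.sum()), "cells visited")
```

Stacking such images at several grid resolutions is what lets convolutional layers pick up trajectory patterns at multiple spatial scales.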
{
"docid": "1057ed913b857d0b22f5c535f919d035",
"text": "The purpose of this series is to convey the principles governing our aesthetic senses. Usually meaning visual perception, aesthetics is not merely limited to the ocular apparatus. The concept of aesthetics encompasses both the time-arts such as music, theatre, literature and film, as well as space-arts such as paintings, sculpture and architecture.",
"title": ""
},
{
"docid": "c4ad78f8d997fbbca0f376557276218c",
"text": "To coupe with the difficulties in the process of inspection and classification of defects in Printed Circuit Board (PCB), other researchers have proposed many methods. However, few of them published their dataset before, which hindered the introduction and comparison of new methods. In this paper, we published a synthesized PCB dataset containing 1386 images with 6 kinds of defects for the use of detection, classification and registration tasks. Besides, we proposed a reference based method to inspect and trained an end-to-end convolutional neural network to classify the defects. Unlike conventional approaches that require pixel-by-pixel processing, our method firstly locate the defects and then classify them by neural networks, which shows superior performance on our dataset.",
"title": ""
},
{
"docid": "e9d42505aebdcd2307852cf13957d407",
"text": "We report a broadband polarization-independent perfect absorber with wide-angle near unity absorbance in the visible regime. Our structure is composed of an array of thin Au squares separated from a continuous Au film by a phase change material (Ge2Sb2Te5) layer. It shows that the near perfect absorbance is flat and broad over a wide-angle incidence up to 80° for either transverse electric or magnetic polarization due to a high imaginary part of the dielectric permittivity of Ge2Sb2Te5. The electric field, magnetic field and current distributions in the absorber are investigated to explain the physical origin of the absorbance. Moreover, we carried out numerical simulations to investigate the temporal variation of temperature in the Ge2Sb2Te5 layer and to show that the temperature of amorphous Ge2Sb2Te5 can be raised from room temperature to > 433 K (amorphous-to-crystalline phase transition temperature) in just 0.37 ns with a low light intensity of 95 nW/μm(2), owing to the enhanced broadband light absorbance through strong plasmonic resonances in the absorber. The proposed phase-change metamaterial provides a simple way to realize a broadband perfect absorber in the visible and near-infrared (NIR) regions and is important for a number of applications including thermally controlled photonic devices, solar energy conversion and optical data storage.",
"title": ""
},
{
"docid": "772b3f74b6eecf82099b2e5b3709e507",
"text": "A common prerequisite for many vision-based driver assistance systems is the knowledge of the vehicle's own movement. In this paper we propose a novel approach for estimating the egomotion of the vehicle from a sequence of stereo images. Our method is directly based on the trifocal geometry between image triples, thus no time expensive recovery of the 3-dimensional scene structure is needed. The only assumption we make is a known camera geometry, where the calibration may also vary over time. We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme which yields robust frame-to-frame motion estimation even in dynamic environments. A high-accuracy inertial navigation system is used to evaluate our results on challenging real-world video sequences. Experiments show that our approach is clearly superior compared to other filtering techniques in terms of both, accuracy and run-time.",
"title": ""
},
{
"docid": "dc91774abd58e19066a110bbff9fa306",
"text": "Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work mainly based on using Big Data mining and analysis of real-life accidents data and real-time connected vehicles' data. The decision of selecting this trajectory is done automatically without any human intervention. The human touches in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip.",
"title": ""
},
{
"docid": "f0f7bd0223d69184f3391aaf790a984d",
"text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.",
"title": ""
},
{
"docid": "e462c0cfc1af657cb012850de1b7b717",
"text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. In a single testing session, subjects performed pre-post evaluations of falls risk (Short-from PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed; fallers (n=8) and non-fallers (n=21). No significant relationships were found between main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.",
"title": ""
},
{
"docid": "b0989fb1775c486317b5128bc1c31c76",
"text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.",
"title": ""
},
{
"docid": "ade3f3c778cf29e7c03bf96196916d6d",
"text": "Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (Piezoelectric sensor, accelerometer, and hand to mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10 seconds epochs and for each epoch combination of time and frequency domain features were computed. In this work, we present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging) and stacking, each trained with 3 different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA) and Logistic Regression). Type of feature normalization used can also impact the classification results. For each ensemble method, three feature normalization techniques: (no-normalization, z-score normalization, and minmax normalization) were tested. A 12 fold cross-validation scheme was used to evaluate the performance of each model where the performance was evaluated in terms of precision, recall, and accuracy. Best results achieved here show an improvement of about 4% over our previous algorithms.",
"title": ""
},
{
"docid": "86bbaffa7e9a58c06d695443224cbf01",
"text": "Movie studios often have to choose among thousands of scripts to decide which ones to turn into movies. Despite the huge amount of money at stake, this process, known as “green-lighting” in the movie industry, is largely a guesswork based on experts’ experience and intuitions. In this paper, we propose a new approach to help studios evaluate scripts which will then lead to more profitable green-lighting decisions. Our approach combines screenwriting domain knowledge, natural language processing techniques, and statistical learning methods to forecast a movie’s return-on-investment based only on textual information available in movie scripts. We test our model in a holdout decision task to show that our model is able to improve a studio’s gross return-on-investment significantly.",
"title": ""
},
{
"docid": "d5bc3147e23f95a070bce0f37a96c2a8",
"text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.",
"title": ""
},
{
"docid": "e36e318dd134fd5840d5a5340eb6e265",
"text": "Business Intelligence (BI) promises a range of technologies for using information to ensure compliance to strategic and tactical objectives, as well as government laws and regulations. These technologies can be used in conjunction with conceptual models of business objectives, processes and situations (aka business schemas) to drive strategic decision-making about opportunities and threats etc. This paper focuses on three key concepts for strategic business models -situation, influence and indicator -and how they are used for strategic analysis. The semantics of these concepts are defined using a state-ofthe-art upper ontology (DOLCE+). We also propose a method for building a business schema, and demonstrate alternative ways of formal analysis of the schema based on existing tools for goal and probabilistic reasoning.",
"title": ""
},
{
"docid": "8d99f6fd95fb329e16294b7884090029",
"text": "The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial networks (DPN) is a recently proposed deep learning algorithm, which performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which MM-SDPN consists of two-stage SDPNs, is proposed to fuse and learn feature representation from multimodal neuroimaging data for AD diagnosis. Specifically speaking, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary classification and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior over the state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.",
"title": ""
}
] |
scidocsrr
|
a7ce59adc981813107323821e694c2f8
|
A Bistatic SAR Raw Data Simulator Based on Inverse $ \omega{-}k$ Algorithm
|
[
{
"docid": "b3e1bdd7cfca17782bde698297e191ab",
"text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.",
"title": ""
}
] |
[
{
"docid": "8bc095fca33d850db89ffd15a84335dc",
"text": "There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.",
"title": ""
},
{
"docid": "b77d297feeff92a2e7b03bf89b5f20db",
"text": "Dependability evaluation main objective is to assess the ability of a system to correctly function over time. There are many possible approaches to the evaluation of dependability: in these notes we are mainly concerned with dependability evaluation based on probabilistic models. Starting from simple probabilistic models with very efficient solution methods we shall then come to the main topic of the paper: how Petri nets can be used to evaluate the dependability of complex systems.",
"title": ""
},
{
"docid": "3182542aa5b500780bb8847178b8ec8d",
"text": "The United States is a diverse country with constantly changing demographics. The noticeable shift in demographics is even more phenomenal among the school-aged population. The increase of ethnic-minority student presence is largely credited to the national growth of the Hispanic population, which exceeded the growth of all other ethnic minority group students in public schools. Scholars have pondered over strategies to assist teachers in teaching about diversity (multiculturalism, racism, etc.) as well as interacting with the diversity found within their classrooms in order to ameliorate the effects of cultural discontinuity. One area that has developed in multicultural education literature is culturally relevant pedagogy (CRP). CRP maintains that teachers need to be non-judgmental and inclusive of the cultural backgrounds of their students in order to be effective facilitators of learning in the classroom. The plethora of literature on CRP, however, has not been presented as a testable theoretical model nor has it been systematically viewed through the lens of critical race theory (CRT). By examining the evolution of CRP among some of the leading scholars, the authors broaden this work through a CRT infusion which includes race and indeed racism as normal parts of American society that have been integrated into the educational system and the systematic aspects of school relationships. Their purpose is to infuse the tenets of CRT into an overview of the literature that supports a conceptual framework for understanding and studying culturally relevant pedagogy. They present a conceptual framework of culturally relevant pedagogy that is grounded in over a quarter of a century of research scholarship. By synthesizing the literature into the five areas and infusing it with the tenets of CRT, the authors have developed a collection of principles that represents culturally relevant pedagogy. (Contains 1 figure and 1 note.) culturally relevant pedagogy | teacher education | student-teacher relationships |",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "164fd7be21190314a27bacb4dec522c5",
"text": "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?",
"title": ""
},
{
"docid": "28439c317c1b7f94527db6c2e0edcbd0",
"text": "AnswerBus1 is an open-domain question answering system based on sentence level Web information retrieval. It accepts users’ natural-language questions in English, German, French, Spanish, Italian and Portuguese and provides answers in English. Five search engines and directories are used to retrieve Web pages that are relevant to user questions. From the Web pages, AnswerBus extracts sentences that are determined to contain answers. Its current rate of correct answers to TREC-8’s 200 questions is 70.5% with the average response time to the questions being seven seconds. The performance of AnswerBus in terms of accuracy and response time is better than other similar systems.",
"title": ""
},
{
"docid": "933073c108baa0229c8bcd423ceade47",
"text": "Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.",
"title": ""
},
{
"docid": "b7673dbe46a1118511d811241940e328",
"text": "A 100-MHz–2-GHz closed-loop analog in-phase/ quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop- type architecture for quadrature error correction. The circuit corrects the phase error to within a 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> mixed-mode CMOS with an active area of <inline-formula> <tex-math notation=\"LaTeX\">$102\\,\\,\\mu {\\mathrm{ m}} \\times 95\\,\\,\\mu {\\mathrm{ m}}$ </tex-math></inline-formula>. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified.",
"title": ""
},
{
"docid": "216c1f8d96e8392fe05e51f556caf2ef",
"text": "The Hypogonadism in Males study estimated the prevalence of hypogonadism [total testosterone (TT) < 300 ng/dl] in men aged > or = 45 years visiting primary care practices in the United States. A blood sample was obtained between 8 am and noon and assayed for TT, free testosterone (FT) and bioavailable testosterone (BAT). Common symptoms of hypogonadism, comorbid conditions, demographics and reason for visit were recorded. Of 2162 patients, 836 were hypogonadal, with 80 receiving testosterone. Crude prevalence rate of hypogonadism was 38.7%. Similar trends were observed for FT and BAT. Among men not receiving testosterone, 756 (36.3%) were hypogonadal; odds ratios for having hypogonadism were significantly higher in men with hypertension (1.84), hyperlipidaemia (1.47), diabetes (2.09), obesity (2.38), prostate disease (1.29) and asthma or chronic obstructive pulmonary disease (1.40) than in men without these conditions. The prevalence of hypogonadism was 38.7% in men aged > or = 45 years presenting to primary care offices.",
"title": ""
},
{
"docid": "ac76a4fe36e95d87f844c6876735b82f",
"text": "Theoretical estimates indicate that graphene thin films can be used as transparent electrodes for thin-film devices such as solar cells and organic light-emitting diodes, with an unmatched combination of sheet resistance and transparency. We demonstrate organic light-emitting diodes with solution-processed graphene thin film transparent conductive anodes. The graphene electrodes were deposited on quartz substrates by spin-coating of an aqueous dispersion of functionalized graphene, followed by a vacuum anneal step to reduce the sheet resistance. Small molecular weight organic materials and a metal cathode were directly deposited on the graphene anodes, resulting in devices with a performance comparable to control devices on indium-tin-oxide transparent anodes. The outcoupling efficiency of devices on graphene and indium-tin-oxide is nearly identical, in agreement with model predictions.",
"title": ""
},
{
"docid": "1ccc1b904fa58b1e31f4f3f4e2d76707",
"text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.",
"title": ""
},
{
"docid": "14aefcc95313cecbce5f575fd78a9ae5",
"text": "The Penn Treebank does not annotate within base noun phrases (NPs), committing only to flat structures that ignore the complexity of English NPs. This means that tools trained on Treebank data cannot learn the correct internal structure of NPs. This paper details the process of adding gold-standard bracketing within each noun phrase in the Penn Treebank. We then examine the consistency and reliability of our annotations. Finally, we use this resource to determine NP structure using several statistical approaches, thus demonstrating the utility of the corpus. This adds detail to the Penn Treebank that is necessary for many NLP applications.",
"title": ""
},
{
"docid": "2c63b16ba725f8941f2f9880530911ef",
"text": "To facilitate wireless transmission of multimedia content to mobile users, we propose a content caching and distribution framework for smart grid enabled OFDM networks, where each popular multimedia file is coded and distributively stored in multiple energy harvesting enabled serving nodes (SNs), and the green energy distributively harvested by SNs can be shared with each other through the smart grid. The distributive caching, green energy sharing, and the on-grid energy backup have improved the reliability and performance of the wireless multimedia downloading process. To minimize the total on-grid power consumption of the whole network, while guaranteeing that each user can retrieve the whole content, the user association scheme is jointly designed with consideration of resource allocation, including subchannel assignment, power allocation, and power flow among nodes. Simulation results demonstrate that bringing content, green energy, and SN closer to the end user can notably reduce the on-grid energy consumption.",
"title": ""
},
{
"docid": "f4c1a8b19248e0cb8e2791210715e7b7",
"text": "The translation of proper names is one of the most challenging activities every translator faces. While working on children’s literature, the translation is especially complicated since proper names usually have various allusions indicating sex, age, geographical belonging, history, specific meaning, playfulness of language and cultural connotations. The goal of this article is to draw attention to strategic choices for the translation of proper names in children’s literature. First, the article presents the theoretical considerations that deal with different aspects of proper names in literary works and the issue of their translation. Second, the translation strategies provided by the translation theorist Eirlys E. Davies used for this research are explained. In addition, the principles of adaptation of proper names provided the State Commission of the Lithuanian Language are presented. Then, the discussion proceeds to the quantitative analysis of the translated proper names with an emphasis on providing and explaining numerous examples. The research has been carried out on four popular fantasy books translated from English and German by three Lithuanian translators. After analyzing the strategies of preservation, localization, transformation and creation, the strategy of localization has proved to be the most frequent one in all translations.",
"title": ""
},
{
"docid": "0170bcdc662628fb46142e62bc8e011d",
"text": "Agriculture is the sole provider of human food. Most farm machines are driven by fossil fuels, which contribute to greenhouse gas emissions and, in turn, accelerate climate change. Such environmental damage can be mitigated by the promotion of renewable resources such as solar, wind, biomass, tidal, geo-thermal, small-scale hydro, biofuels and wave-generated power. These renewable resources have a huge potential for the agriculture industry. The farmers should be encouraged by subsidies to use renewable energy technology. The concept of sustainable agriculture lies on a delicate balance of maximizing crop productivity and maintaining economic stability, while minimizing the utilization of finite natural resources and detrimental environmental impacts. Sustainable agriculture also depends on replenishing the soil while minimizing the use of non-renewable resources, such as natural gas, which is used in converting atmospheric nitrogen into synthetic fertilizer, and mineral ores, e.g. phosphate or fossil fuel used in diesel generators for water pumping for irrigation. Hence, there is a need for promoting use of renewable energy systems for sustainable agriculture, e.g. solar photovoltaic water pumps and electricity, greenhouse technologies, solar dryers for post-harvest processing, and solar hot water heaters. In remote agricultural lands, the underground submersible solar photovoltaic water pump is economically viable and also an environmentally-friendly option as compared with a diesel generator set. If there are adverse climatic conditions for the growth of particular plants in cold climatic zones then there is need for renewable energy technology such as greenhouses for maintaining the optimum plant ambient temperature conditions for the growth of plants and vegetables. The economics of using greenhouses for plants and vegetables, and solar photovoltaic water pumps for sustainable agriculture and the environment are presented in this article. Clean development provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at the lowest cost. The mechanism of clean development is discussed in brief for the use of renewable systems for sustainable agricultural development specific to solar photovoltaic water pumps in India and the world. This article explains in detail the role of renewable energy in farming by connecting all aspects of agronomy with ecology, the environment, economics and societal change.",
"title": ""
},
{
"docid": "afcfe379acfd727b6044c70478b3c2a3",
"text": "We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.",
"title": ""
},
{
"docid": "0d1f9b3fa3d03b37438024ba354ca68a",
"text": "Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-theart results on a recent context-dependent semantic parsing task.",
"title": ""
},
{
"docid": "c85e5745141e64e224a5c4c61f1b1866",
"text": "Crowd-sourcing has become a popular means of acquiring labeled data for many tasks where humans are more accurate than computers, such as image tagging, entity resolution, or sentiment analysis. However, due to the time and cost of human labor, solutions that solely rely on crowd-sourcing are often limited to small datasets (i.e., a few thousand items). This paper proposes algorithms for integrating machine learning into crowd-sourced databases in order to combine the accuracy of human labeling with the speed and cost-effectiveness of machine learning classifiers. By using active learning as our optimization strategy for labeling tasks in crowdsourced databases, we can minimize the number of questions asked to the crowd, allowing crowd-sourced applications to scale (i.e, label much larger datasets at lower costs). Designing active learning algorithms for a crowd-sourced database poses many practical challenges: such algorithms need to be generic, scalable, and easy-to-use for a broad range of practitioners, even those who are not machine learning experts. We draw on the theory of nonparametric bootstrap to design, to the best of our knowledge, the first active learning algorithms that meet all these requirements. Our results, on 3 real-world datasets collected with Amazon’s Mechanical Turk, and on 15 UCI datasets, show that our methods on average ask 1–2 orders of magnitude fewer questions than the baseline, and 4.5–44× fewer than existing active learning algorithms.",
"title": ""
},
{
"docid": "c4e6176193677f62f6b33dc02580c7f2",
"text": "E-learning has become an essential factor in the modern educational system. In today's diverse student population, E-learning must recognize the differences in student personalities to make the learning process more personalized. The objective of this study is to create a data model to identify both the student personality type and the dominant preference based on the Myers-Briggs Type Indicator (MBTI) theory. The proposed model utilizes data from student engagement with the learning management system (Moodle) and the social network, Facebook. The model helps students become aware of their personality, which in turn makes them more efficient in their study habits. The model also provides vital information for educators, equipping them with a better understanding of each student's personality. With this knowledge, educators will be more capable of matching students with their respective learning styles. The proposed model was applied on a sample data collected from the Business College at the German university in Cairo, Egypt (240 students). The model was tested using 10 data mining classification algorithms which were NaiveBayes, BayesNet, Kstar, Random forest, J48, OneR, JRIP, KNN /IBK, RandomTree, Decision Table. The results showed that OneR had the best accuracy percentage of 97.40%, followed by Random forest 93.23% and J48 92.19%.",
"title": ""
}
] |
scidocsrr
|
381a180ecd74e87262ec5c5be0ccbe97
|
Facial Action Coding System
|
[
{
"docid": "6b6285cd8512a2376ae331fda3fedf20",
"text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.",
"title": ""
}
] |
[
{
"docid": "a65d1881f5869f35844064d38b684ac8",
"text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.",
"title": ""
},
{
"docid": "8fc758632346ce45e8f984018cde5ece",
"text": "Today Recommendation systems [3] have become indispensible because of the sheer bulk of information made available to a user from web-services(Netflix, IMDB, Amazon and many others) and the need for personalized suggestions. Recommendation systems are a well studied research area. In the following work, we present our study on the Netflix Challenge [1]. The Neflix Challenge can be summarized in the following way: ”Given a movie, predict the rating of a particular user based on the user’s prior ratings”. The performance of all such approaches is measured using the RMSE (root mean-squared error) of the submitted ratings from the actual ratings. Currently, the best system has an RMSE of 0.8616 [2]. We obtained ratings from the following approaches:",
"title": ""
},
{
"docid": "c197198ca45acec2575d5be26fc61f36",
"text": "General systems theory has been proposed as a basis for the unification of science. The open systems model has stimulated many new conceptualizations in organization theory and management practice. However, experience in utilizing these concepts suggests many unresolved dilemmas. Contingency views represent a step toward less abstraction, more explicit patterns of relationships, and more applicable theory. Sophistication will come when we have a more complete understanding of organizations as total systems (configurations of subsystems) so that we can prescribe more appropriate organizational designs and managerial systems. Ultimately, organization theory should serve as the foundation for more effective management practice.",
"title": ""
},
{
"docid": "12eff845ccb6e5cc2b2fbe74935aff46",
"text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.",
"title": ""
},
{
"docid": "5f20ed750fc260f40d01e8ac5ddb633d",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii CHAPTER",
"title": ""
},
{
"docid": "f1cfd3980bb7dc78309074012be3cf03",
"text": "A chatbot is a conversational agent that interacts with users using natural language. Multi chatbots are available to serve in different domains. However, the knowledge base of chatbots is hand coded in its brain. This paper presents an overview of ALICE chatbot, its AIML format, and our experiments to generate different prototypes of ALICE automatically based on a corpus approach. A description of developed software which converts readable text (corpus) into AIML format is presented alongside with describing the different corpora we used. Our trials revealed the possibility of generating useful prototypes without the need for sophisticated natural language processing or complex machine learning techniques. These prototypes were used as tools to practice different languages, to visualize corpus, and to provide answers for questions.",
"title": ""
},
{
"docid": "22ad4568fbf424592c24783fb3037f62",
"text": "We propose an unsupervised learning technique for extracting information about authors and topics from large text collections. We model documents as if they were generated by a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words. The probability distribution over topics in a multi-author paper is a mixture of the distributions associated with the authors. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to three large text corpora: 150,000 abstracts from the CiteSeer digital library, 1740 papers from the Neural Information Processing Systems (NIPS) Conferences, and 121,000 emails from the Enron corporation. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, parsing of abstracts by topics and authors, and detection of unusual papers by specific authors. Experiments based on perplexity scores for test documents and precision-recall for document retrieval are used to illustrate systematic differences between the proposed author-topic model and a number of alternatives. Extensions to the model, allowing for example, generalizations of the notion of an author, are also briefly discussed.",
"title": ""
},
{
"docid": "34bfec0f1f7eb748b3632bbf288be3bd",
"text": "An omnidirectional mobile robot is able, kinematically, to move in any direction regardless of current pose. To date, nearly all designs and analyses of omnidirectional mobile robots have considered the case of motion on flat, smooth terrain. In this paper, an investigation of the design and control of an omnidirectional mobile robot for use in rough terrain is presented. Kinematic and geometric properties of the active split offset caster drive mechanism are investigated along with system and subsystem design guidelines. An optimization method is implemented to explore the design space. The use of this method results in a robot that has higher mobility than a robot designed using engineering judgment. A simple kinematic controller that considers the effects of terrain unevenness via an estimate of the wheel-terrain contact angles is also presented. It is shown in simulation that under the proposed control method, near-omnidirectional tracking performance is possible even in rough, uneven terrain. DOI: 10.1115/1.4000214",
"title": ""
},
{
"docid": "e364db9141c85b1f260eb3a9c1d42c5b",
"text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557",
"title": ""
},
{
"docid": "abdffec5ea2b05b61006cc7b6b295976",
"text": "Making recommendation requires predicting what is of interest to a user at a specific time. Even the same user may have different desires at different times. It is important to extract the aggregate interest of a user from his or her navigational path through the site in a session. This paper concentrates on the discovery and modelling of the user’s aggregate interest in a session. This approach relies on the premise that the visiting time of a page is an indicator of the user’s interest in that page. The proportion of times spent in a set of pages requested by the user within a single session forms the aggregate interest of that user in that session. We first partition user sessions into clusters such that only sessions which represent similar aggregate interest of users are placed in the same cluster. We employ a model-based clustering approach and partition user sessions according to similar amount of time in similar pages. In particular, we cluster sessions by learning a mixture of Poisson models using Expectation Maximization algorithm. The resulting clusters are then used to recommend pages to a user that are most likely contain the information which is of interest to that user at that time. Although the approach does not use the sequential patterns of transactions, experimental evaluation shows that the approach is quite effective in capturing a Web user’s access pattern. The model has an advantage over previous proposals in terms of speed and memory usage.",
"title": ""
},
{
"docid": "53b48550158b06dfbdb8c44a4f7241c6",
"text": "The primary aim of the study was to examine the relationship between media exposure and body image in adolescent girls, with a particular focus on the ‘new’ and as yet unstudied medium of the Internet. A sample of 156 Australian female high school students (mean age= 14.9 years) completed questionnaire measures of media consumption and body image. Internet appearance exposure and magazine reading, but not television exposure, were found to be correlated with greater internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. Regression analyses indicated that the effects of magazines and Internet exposure were mediated by internalization and appearance comparison. It was concluded that the Internet represents a powerful sociocultural influence on young women’s lives.",
"title": ""
},
{
"docid": "f3b0bace6028b3d607618e2e53294704",
"text": "State-of-the art spoken language understanding models that automatically capture user intents in human to machine dialogs are trained with manually annotated data, which is cumbersome and time-consuming to prepare. For bootstrapping the learning algorithm that detects relations in natural language queries to a conversational system, one can rely on publicly available knowledge graphs, such as Freebase, and mine corresponding data from the web. In this paper, we present an unsupervised approach to discover new user intents using a novel Bayesian hierarchical graphical model. Our model employs search query click logs to enrich the information extracted from bootstrapped models. We use the clicked URLs as implicit supervision and extend the knowledge graph based on the relational information discovered from this model. The posteriors from the graphical model relate the newly discovered intents with the search queries. These queries are then used as additional training examples to complement the bootstrapped relation detection models. The experimental results demonstrate the effectiveness of this approach, showing extended coverage to new intents without impacting the known intents.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
},
{
"docid": "296025d4851569031f0ebe36d792fadc",
"text": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT’s notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.",
"title": ""
},
{
"docid": "496ba5ee48281afe48b5afce02cc4dbf",
"text": "OBJECTIVE\nThis study examined the relationship between reported exposure to child abuse and a history of parental substance abuse (alcohol and drugs) in a community sample in Ontario, Canada.\n\n\nMETHOD\nThe sample consisted of 8472 respondents to the Ontario Mental Health Supplement (OHSUP), a comprehensive population survey of mental health. The association of self-reported retrospective childhood physical and sexual abuse and parental histories of drug or alcohol abuse was examined.\n\n\nRESULTS\nRates of physical and sexual abuse were significantly higher, with a more than twofold increased risk among those reporting parental substance abuse histories. The rates were not significantly different between type or severity of abuse. Successively increasing rates of abuse were found for those respondents who reported that their fathers, mothers or both parents had substance abuse problems; this risk was significantly elevated for both parents compared to father only with substance abuse problem.\n\n\nCONCLUSIONS\nParental substance abuse is associated with a more than twofold increase in the risk of exposure to both childhood physical and sexual abuse. While the mechanism for this association remains unclear, agencies involved in child protection or in treatment of parents with substance abuse problems must be cognizant of this relationship and focus on the development of interventions to serve these families.",
"title": ""
},
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "0562b3b1692f07060cf4eeb500ea6cca",
"text": "As the volume of medicinal information stored electronically increase, so do the need to enhance how it is secured. The inaccessibility to patient record at the ideal time can prompt death toll and also well degrade the level of health care services rendered by the medicinal professionals. Criminal assaults in social insurance have expanded by 125% since 2010 and are now the leading cause of medical data breaches. This study therefore presents the combination of 3DES and LSB to improve security measure applied on medical data. Java programming language was used to develop a simulation program for the experiment. The result shows medical data can be stored, shared, and managed in a reliable and secure manner using the combined model. Keyword: Information Security; Health Care; 3DES; LSB; Cryptography; Steganography 1.0 INTRODUCTION In health industries, storing, sharing and management of patient information have been influenced by the current technology. That is, medical centres employ electronical means to support their mode of service in order to deliver quality health services. The importance of the patient record cannot be over emphasised as it contributes to when, where, how, and how lives can be saved. About 91% of health care organizations have encountered no less than one data breach, costing more than $2 million on average per organization [1-3]. Report also shows that, medical records attract high degree of importance to hoodlums compare to Mastercard information because they infer more cash base on the fact that bank",
"title": ""
},
{
"docid": "fcdde2f5b55b6d8133e6dea63d61b2c8",
"text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. The problem treated is: (1) A salesman is required to visit each of <italic>n</italic> cities, indexed by 1, ··· , <italic>n</italic>. He leaves from a “base city” indexed by 0, visits each of the <italic>n</italic> other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly <italic>t</italic> times, including his final return (here <italic>t</italic> may be allowed to vary), and he must visit no more than <italic>p</italic> cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if <italic>t</italic> is fixed, then for the problem to have a solution we must have <italic>tp</italic> ≧ <italic>n</italic>. For <italic>t</italic> = 1, <italic>p</italic> ≧ <italic>n</italic>, we have the standard traveling salesman problem.\nLet <italic>d<subscrpt>ij</subscrpt></italic> (<italic>i</italic> ≠ <italic>j</italic> = 0, 1, ··· , <italic>n</italic>) be the distance covered in traveling from city <italic>i</italic> to city <italic>j</italic>. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑<subscrpt>0≦<italic>i</italic>≠<italic>j</italic>≦<italic>n</italic></subscrpt>∑ <italic>d<subscrpt>ij</subscrpt>x<subscrpt>ij</subscrpt></italic> over the set determined by the relations ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=0<italic>i</italic>≠<italic>j</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>j</italic> = 1, ··· , <italic>n</italic>) ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>j</italic>=0<italic>j</italic>≠<italic>i</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>i</italic> = 1, ··· , <italic>n</italic>) <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> + <italic>px<subscrpt>ij</subscrpt></italic> ≦ <italic>p</italic> - 1 (1 ≦ <italic>i</italic> ≠ <italic>j</italic> ≦ <italic>n</italic>) where the <italic>x<subscrpt>ij</subscrpt></italic> are non-negative integers and the <italic>u<subscrpt>i</subscrpt></italic> (<italic>i</italic> = 1, …, <italic>n</italic>) are arbitrary real numbers. 
(We shall see that it is permissible to restrict the <italic>u<subscrpt>i</subscrpt></italic> to be non-negative integers as well.)\n If <italic>t</italic> is fixed it is necessary to add the additional relation: ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>u</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> = <italic>t</italic> Note that the constraints require that <italic>x<subscrpt>ij</subscrpt></italic> = 0 or 1, so that a natural correspondence between these two problems exists if the <italic>x<subscrpt>ij</subscrpt></italic> are interpreted as follows: The salesman proceeds from city <italic>i</italic> to city <italic>j</italic> if and only if <italic>x<subscrpt>ij</subscrpt></italic> = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has <italic>x<subscrpt>ij</subscrpt></italic> which do define a legitimate itinerary in (1), and, conversely a legitimate itinerary in (1) defines <italic>x<subscrpt>ij</subscrpt></italic>, which, together with appropriate <italic>u<subscrpt>i</subscrpt></italic>, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt>. The constraints of the form ∑ <italic>x<subscrpt>ij</subscrpt></italic> = 1, all <italic>x<subscrpt>ij</subscrpt></italic> non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The <italic>u<subscrpt>i</subscrpt></italic> play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than <italic>p</italic> cities. Consider any <italic>x</italic><subscrpt><italic>r</italic><subscrpt>0</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> = 1 (<italic>r</italic><subscrpt>1</subscrpt> ≠ 0). There exists a unique <italic>r</italic><subscrpt>2</subscrpt> such that <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> = 1. Unless <italic>r</italic><subscrpt>2</subscrpt> = 0, there is a unique <italic>r</italic><subscrpt>3</subscrpt> with <italic>x</italic><subscrpt><italic>r</italic><subscrpt>2</subscrpt><italic>r</italic><subscrpt>3</subscrpt></subscrpt> = 1. We proceed in this fashion until some <italic>r<subscrpt>j</subscrpt></italic> = 0. This must happen since the alternative is that at some point we reach an <italic>r<subscrpt>k</subscrpt></italic> = <italic>r<subscrpt>j</subscrpt></italic>, <italic>j</italic> + 1 < <italic>k</italic>. \n Since none of the <italic>r</italic>'s are zero we have <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r<subscrpt>i</subscrpt></italic><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ - 1. 
Summing from <italic>i</italic> = <italic>j</italic> to <italic>k</italic> - 1, we have <italic>u<subscrpt>r<subscrpt>j</subscrpt></subscrpt></italic> - <italic>u<subscrpt>r<subscrpt>k</subscrpt></subscrpt></italic> = 0 ≦ <italic>j</italic> + 1 - <italic>k</italic>, which is a contradiction. Thus all tours include city 0. It remains to observe that no tours is of length greater than <italic>p</italic>. Suppose such a tour exists, <italic>x</italic><subscrpt>0<italic>r</italic><subscrpt>1</subscrpt></subscrpt> , <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> , ···· , <italic>x</italic><subscrpt><italic>r<subscrpt>p</subscrpt>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> = 1 with all <italic>r<subscrpt>i</subscrpt></italic> ≠ 0. Then, as before, <italic>u</italic><subscrpt><italic>r</italic>1</subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> ≦ - <italic>p</italic> or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≧ <italic>p</italic>.\n But we have <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> (1 - <italic>x</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt>) - 1 ≦ <italic>p</italic> - 1, which is a contradiction.\nConversely, if the <italic>x<subscrpt>ij</subscrpt></italic> correspond to a legitimate itinerary, it is clear that the <italic>u<subscrpt>i</subscrpt></italic> can be adjusted so that <italic>u<subscrpt>i</subscrpt></italic> = <italic>j</italic> if city <italic>i</italic> is the <italic>j</italic>th city visited in the tour which includes city <italic>i</italic>, for we then have <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> = - 1 if <italic>x<subscrpt>ij</subscrpt></italic> = 1, and always <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> ≦ <italic>p</italic> - 1.\n The above integer program involves <italic>n</italic><supscrpt>2</supscrpt> + <italic>n</italic> constraints (if <italic>t</italic> is not fixed) in <italic>n</italic><supscrpt>2</supscrpt> + 2<italic>n</italic> variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2<italic>n</italic> variables, say the <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> and <italic>x</italic><subscrpt>0<italic>j</italic></subscrpt>, by means of the equation constraints and produce",
"title": ""
},
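The formulation above is the classic Miller-Tucker-Zemlin-style model. As a small, hedged illustration of how the u_i "position" variables and the inequality u_i - u_j + p x_ij <= p - 1 interact, the following self-contained sketch checks whether a given set of tours satisfies those constraints; the function name and example data are hypothetical and not taken from the paper.

```python
def mtz_feasible(tours, n, p):
    """Check the MTZ-style constraints for a set of tours.

    tours: list of tours, each a list of distinct cities from 1..n
           (city 0, the base, is implicit at the start and end of every tour).
    n: number of cities other than the base.
    p: maximum number of cities allowed in one tour.
    """
    # Build x[(i, j)] = 1 when the salesman travels directly from i to j.
    x = {}
    visited = []
    for tour in tours:
        prev = 0
        for city in tour:
            x[(prev, city)] = 1
            prev = city
        x[(prev, 0)] = 1
        visited.extend(tour)

    # Each city 1..n must be visited exactly once.
    if sorted(visited) != list(range(1, n + 1)):
        return False

    # u_i = position of city i within its tour (1-based), as in the text.
    u = {}
    for tour in tours:
        for pos, city in enumerate(tour, start=1):
            u[city] = pos

    # u_i - u_j + p * x_ij <= p - 1 for all 1 <= i != j <= n.
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i != j and u[i] - u[j] + p * x.get((i, j), 0) > p - 1:
                return False
    return True

# Example: two tours of at most p = 2 cities over n = 4 cities.
print(mtz_feasible([[1, 2], [3, 4]], n=4, p=2))  # True
```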
{
"docid": "05cea038adce7f5ae2a09a7fd5e024a7",
"text": "The paper describes the use TMS320C5402 DSP for single channel active noise cancellation (ANC) in duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with well-known filtered-X least mean square (filtered-X LMS) Algorithm in the digital signal processing. The paper describes the hardware and use chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from C main program. The results obtained are compatible to the expected result in the literature available. The paper highlights the features of cancellation and analyzes its performance at different gain and frequency.",
"title": ""
}
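Since the entry above centres on the filtered-X LMS algorithm, here is a minimal numpy sketch of the core update loop, written for a toy simulation rather than the TMS320C5402 implementation the paper describes; the step size, filter length, and toy secondary path are assumptions for illustration only.

```python
import numpy as np

def fxlms_demo(x, d, s, s_hat, L=16, mu=1e-3):
    """x: reference signal, d: disturbance at the error mic,
    s: true secondary-path FIR, s_hat: its estimate, L: adaptive filter length."""
    w = np.zeros(L)                       # adaptive filter weights
    xbuf = np.zeros(L)                    # reference buffer (newest first)
    fxbuf = np.zeros(L)                   # filtered-reference buffer
    ybuf = np.zeros(len(s))               # buffer feeding the secondary path
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                      # anti-noise sample
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] - s @ ybuf            # residual at the error microphone
        fx = s_hat @ xbuf[:len(s_hat)]    # reference filtered by the path estimate
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
        w += mu * e[n] * fxbuf            # filtered-x LMS weight update
    return e

# Toy usage: a 200 Hz tone, a short fictitious secondary path, perfect path estimate.
fs = 8000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 200 * t)
s = np.array([0.0, 0.6, 0.3, 0.1])
d = np.convolve(x, [0.0, 0.0, 0.8, 0.4])[:len(x)]
e = fxlms_demo(x, d, s, s_hat=s)          # residual should shrink as w adapts
```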
] |
scidocsrr
|
0122b9fb5f10ff47ba9f9a6d8b634b3b
|
Hierarchical Reinforcement Learning for Adaptive Text Generation
|
[
{
"docid": "8640cd629e07f8fa6764c387d9fa7c29",
"text": "We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘PrecisionRecall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "85da95f8d04a8c394c320d2cce25a606",
"text": "Improved numerical weather prediction simulations have led weather services to examine how and where human forecasters add value to forecast production. The Forecast Production Assistant (FPA) was developed with that in mind. The authors discuss the Forecast Generator (FOG), the first application developed on the FPA. FOG is a bilingual report generator that produces routine and special purpose forecast directly from the FPA's graphical weather predictions. Using rules and a natural-language generator, FOG converts weather maps into forecast text. The natural-language issues involved are relevant to anyone designing a similar system.<<ETX>>",
"title": ""
},
{
"docid": "5b08a93afae9cf64b5300c586bfb3fdc",
"text": "Social interactions are characterized by distinct forms of interdependence, each of which has unique effects on how behavior unfolds within the interaction. Despite this, little is known about the psychological mechanisms that allow people to detect and respond to the nature of interdependence in any given interaction. We propose that interdependence theory provides clues regarding the structure of interdependence in the human ancestral past. In turn, evolutionary psychology offers a framework for understanding the types of information processing mechanisms that could have been shaped under these recurring conditions. We synthesize and extend these two perspectives to introduce a new theory: functional interdependence theory (FIT). FIT can generate testable hypotheses about the function and structure of the psychological mechanisms for inferring interdependence. This new perspective offers insight into how people initiate and maintain cooperative relationships, select social partners and allies, and identify opportunities to signal social motives.",
"title": ""
},
{
"docid": "01b2c742693e24e431b1bb231ae8a135",
"text": "Over the years, software development failures is really a burning issue, might be ascribed to quite a number of attributes, of which, no-compliance of users requirements and using the non suitable technique to elicit user requirements are considered foremost. In order to address this issue and to facilitate system designers, this study had filtered and compared user requirements elicitation technique, based on principles of requirements engineering. This comparative study facilitates developers to build systems based on success stories, making use of a optimistic perspective for achieving a foreseeable future. This paper is aimed at enhancing processes of choosing a suitable technique to elicit user requirements; this is crucial to determine the requirements of the user, as it enables much better software development and does not waste resources unnecessarily. Basically, this study will complement the present approaches, by representing a optimistic and potential factor for every single method in requirements engineering, which results in much better user needs, and identifies novel and distinctive specifications. Keywords— Requirements Engineering, Requirements Elicitation Techniques, Conversational methods, Observational methods, Analytic methods, Synthetic methods.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
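For contrast with the distortion-free scheme described above, the sketch below generates conventional uniformly-sampled, single-edge digital PWM, i.e. the kind of baseline whose baseband distortion the paper's algorithm is designed to remove. It is not the proposed algorithm itself, and the carrier period length and duty values are arbitrary illustrative choices.

```python
import numpy as np

def uniform_pwm(signal, carrier_period):
    """Conventional uniformly-sampled, single-edge digital PWM: each carrier
    period, one (held) sample of `signal` in [0, 1] sets the duty cycle."""
    out = []
    for duty in signal:                      # one sample per carrier period
        high = int(round(duty * carrier_period))
        out.extend([1] * high + [0] * (carrier_period - high))
    return np.array(out)

# 25% and 75% duty cycles over a carrier period of 8 clock ticks.
print(uniform_pwm([0.25, 0.75], carrier_period=8))
# [1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0]
```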
{
"docid": "aec5c475caa7f2e0490c871882e94363",
"text": "The use of prognostic methods in maintenance in order to predict remaining useful life is receiving more attention over the past years. The use of these techniques in maintenance decision making and optimization in multi-component systems is however a still underexplored area. The objective of this paper is to optimally plan maintenance for a multi-component system based on prognostic/predictive information while considering different component dependencies (i.e. economic, structural and stochastic dependence). Consequently, this paper presents a dynamic predictive maintenance policy for multi-component systems that minimizes the long-term mean maintenance cost per unit time. The proposed maintenance policy is a dynamic method as the maintenance schedule is updated when new information on the degradation and remaining useful life of components becomes available. The performance, regarding the objective of minimal long-term mean cost per unit time, of the developed dynamic predictive maintenance policy is compared to five other conventional maintenance policies, these are: block-based maintenance, age-based maintenance, age-based maintenance with grouping, inspection condition-based maintenance and continuous condition-based maintenance. The ability of the predictive maintenance policy to react to changing component deterioration and dependencies within a multi-component system is quantified and the results show significant cost",
"title": ""
},
{
"docid": "4e71be70e5c8c081c5ff60f8b6cb5449",
"text": "Spin-transfer torque magnetic random access memory (STT-MRAM) is considered as one of the most promising candidates to build up a true universal memory thanks to its fast write/read speed, infinite endurance, and nonvolatility. However, the conventional access architecture based on 1 transistor + 1 memory cell limits its storage density as the selection transistor should be large enough to ensure the write current higher than the critical current for the STT operation. This paper describes a design of cross-point architecture for STT-MRAM. The mean area per word corresponds to only two transistors, which are shared by a number of bits (e.g., 64). This leads to significant improvement of data density (e.g., 1.75 F2/bit). Special techniques are also presented to address the sneak currents and low-speed issues of conventional cross-point architecture, which are difficult to surmount and few efficient design solutions have been reported in the literature. By using an STT-MRAM SPICE model including precise experimental parameters and STMicroelectronics 65 nm technology, some chip characteristic results such as cell area, data access speed, and power have been calculated or simulated to demonstrate the expected performances of this new memory architecture.",
"title": ""
},
{
"docid": "2b109799a55bcb1c0592c02b60478975",
"text": "Zero-shot learning (ZSL) is to construct recognition models for unseen target classes that have no labeled samples for training. It utilizes the class attributes or semantic vectors as side information and transfers supervision information from related source classes with abundant labeled samples. Existing ZSL approaches adopt an intermediary embedding space to measure the similarity between a sample and the attributes of a target class to perform zero-shot classification. However, this way may suffer from the information loss caused by the embedding process and the similarity measure cannot fully make use of the data distribution. In this paper, we propose a novel approach which turns the ZSL problem into a conventional supervised learning problem by synthesizing samples for the unseen classes. Firstly, the probability distribution of an unseen class is estimated by using the knowledge from seen classes and the class attributes. Secondly, the samples are synthesized based on the distribution for the unseen class. Finally, we can train any supervised classifiers based on the synthesized samples. Extensive experiments on benchmarks demonstrate the superiority of the proposed approach to the state-of-the-art ZSL approaches.",
"title": ""
},
{
"docid": "bc43482b0299fc339cf13df6e9288410",
"text": "Acute auricular hematoma is common after blunt trauma to the side of the head. A network of vessels provides a rich blood supply to the ear, and the ear cartilage receives its nutrients from the overlying perichondrium. Prompt management of hematoma includes drainage and prevention of reaccumulation. If left untreated, an auricular hematoma can result in complications such as perichondritis, infection, and necrosis. Cauliflower ear may result from long-standing loss of blood supply to the ear cartilage and formation of neocartilage from disrupted perichondrium. Management of cauliflower ear involves excision of deformed cartilage and reshaping of the auricle.",
"title": ""
},
{
"docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33",
"text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.",
"title": ""
},
{
"docid": "66d5101d55595754add37e9e50952056",
"text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. 169 A nn u. R ev . P sy ch ol . 2 01 0. 61 :1 69 -1 90 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by C al if or ni a In st itu te o f T ec hn ol og y on 0 1/ 03 /1 0. F or p er so na l u se o nl y. ANRV398-PS61-07 ARI 17 November 2009 19:51 Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines",
"title": ""
},
{
"docid": "a8e72235f2ec230a1be162fa6129db5e",
"text": "Lateral inhibition in top-down feedback is widely existing in visual neurobiology, but such an important mechanism has not be well explored yet in computer vision. In our recent research, we find that modeling lateral inhibition in convolutional neural network (LICNN) is very useful for visual attention and saliency detection. In this paper, we propose to formulate lateral inhibition inspired by the related studies from neurobiology, and embed it into the top-down gradient computation of a general CNN for classification, i.e. only category-level information is used. After this operation (only conducted once), the network has the ability to generate accurate category-specific attention maps. Further, we apply LICNN for weakly-supervised salient object detection. Extensive experimental studies on a set of databases, e.g., ECSSD, HKU-IS, PASCAL-S and DUT-OMRON, demonstrate the great advantage of LICNN which achieves the state-ofthe-art performance. It is especially impressive that LICNN with only category-level supervised information even outperforms some recent methods with segmentation-level super-",
"title": ""
},
{
"docid": "5c394c460f01c451e2ede526f73426ee",
"text": "Renal transplant recipients are at increased risk of bladder carcinoma. The aetiology is unknown but a polyoma virus (PV), BK virus (BKV), may play a role; urinary reactivation of this virus is common post-renal transplantation and PV large T-antigen (T-Ag) has transforming activity. In this study, we investigate the potential role of BKV in post-transplant urothelial carcinoma by immunostaining tumour tissue for PV T-Ag. There was no positivity for PV T-Ag in urothelial carcinomas from 20 non-transplant patients. Since 1990, 10 transplant recipients in our unit have developed urothelial carcinoma, and tumour tissue was available in eight recipients. Two patients were transplanted since the first case of PV nephropathy (PVN) was diagnosed in our unit in 2000 and both showed PV reactivation post-transplantation. In one of these patients, there was strong nuclear staining for PV T-Ag in tumour cells, with no staining of non-neoplastic urothelium. We conclude that PV infection is not associated with urothelial carcinoma in non-transplant patients, and is uncommon in transplant-associated tumours. Its presence in all tumour cells in one patient transplanted in the PVN era might suggest a possible role in tumorigenesis in that case.",
"title": ""
},
{
"docid": "186f2950bd4ce621eb0696c2fd09a468",
"text": "In this paper, I investigate the use of a disentangled VAE for downstream image classification tasks. I train a disentangled VAE in an unsupervised manner, and use the learned encoder as a feature extractor on top of which a linear classifier is learned. The models are trained and evaluated on the MNIST handwritten digits dataset. Experiments compared the disentangled VAE with both a standard (entangled) VAE and a vanilla supervised model. Results show that the disentangled VAE significantly outperforms the other two models when the proportion of labelled data is artificially reduced, while it loses this advantage when the amount of labelled data increases, and instead matches the performance of the other models. These results suggest that the disentangled VAE may be useful in situations where labelled data is scarce but unlabelled data is abundant.",
"title": ""
},
{
"docid": "538047fc099d0062ab100343b26f5cb7",
"text": "AIM\nTo examine the evidence on the association between cannabis and depression and evaluate competing explanations of the association.\n\n\nMETHODS\nA search of Medline, Psychinfo and EMBASE databases was conducted. All references in which the terms 'cannabis', 'marijuana' or 'cannabinoid', and in which the words 'depression/depressive disorder/depressed', 'mood', 'mood disorder' or 'dysthymia' were collected. Only research studies were reviewed. Case reports are not discussed.\n\n\nRESULTS\nThere was a modest association between heavy or problematic cannabis use and depression in cohort studies and well-designed cross-sectional studies in the general population. Little evidence was found for an association between depression and infrequent cannabis use. A number of studies found a modest association between early-onset, regular cannabis use and later depression, which persisted after controlling for potential confounding variables. There was little evidence of an increased risk of later cannabis use among people with depression and hence little support for the self-medication hypothesis. There have been a limited number of studies that have controlled for potential confounding variables in the association between heavy cannabis use and depression. These have found that the risk is much reduced by statistical control but a modest relationship remains.\n\n\nCONCLUSIONS\nHeavy cannabis use and depression are associated and evidence from longitudinal studies suggests that heavy cannabis use may increase depressive symptoms among some users. It is still too early, however, to rule out the hypothesis that the association is due to common social, family and contextual factors that increase risks of both heavy cannabis use and depression. Longitudinal studies and studies of twins discordant for heavy cannabis use and depression are needed to rule out common causes. If the relationship is causal, then on current patterns of cannabis use in the most developed societies cannabis use makes, at most, a modest contribution to the population prevalence of depression.",
"title": ""
},
{
"docid": "3b78223f5d11a56dc89a472daf23ca49",
"text": "Shadow maps provide a fast and convenient method of identifying shadows in scenes but can introduce aliasing. This paper introduces the Adaptive Shadow Map (ASM) as a solution to this problem. An ASM removes aliasing by resolving pixel size mismatches between the eye view and the light source view. It achieves this goal by storing the light source view (i.e., the shadow map for the light source) as a hierarchical grid structure as opposed to the conventional flat structure. As pixels are transformed from the eye view to the light source view, the ASM is refined to create higher-resolution pieces of the shadow map when needed. This is done by evaluating the contributions of shadow map pixels to the overall image quality. The improvement process is view-driven, progressive, and confined to a user-specifiable memory footprint. We show that ASMs enable dramatic improvements in shadow quality while maintaining interactive rates.",
"title": ""
},
{
"docid": "0e5a11ef4daeb969702e40ea0c50d7f3",
"text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).",
"title": ""
},
{
"docid": "77bbd6d3e1f1ae64bda32cd057cf0580",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "8a9680ae0d35a1c53773ccf7dcef4df7",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
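The entry above advocates simple iterative training of SVMs. A widely known iterative scheme in that spirit is the Pegasos stochastic subgradient update sketched below; it is offered as a representative example, not as the authors' specific algorithm, and the regularization constant and epoch count are placeholders.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Simple iterative (sub)gradient training of a linear SVM (Pegasos-style).
    X: (n, d) feature matrix, y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decreasing learning rate
            if y[i] * (w @ X[i]) < 1:          # hinge-loss subgradient step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w
```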
{
"docid": "2d82220d88794093209aa4b8151e70d9",
"text": "Iterative Hard Thresholding (IHT) is a class of projected gradient descent methods for optimizing sparsity-constrained minimization models, with the best known efficiency and scalability in practice. As far as we know, the existing IHT-style methods are designed for sparse minimization in primal form. It remains open to explore duality theory and algorithms in such a non-convex and NP-hard problem setting. In this paper, we bridge this gap by establishing a duality theory for sparsity-constrained minimization with `2-regularized loss function and proposing an IHT-style algorithm for dual maximization. Our sparse duality theory provides a set of sufficient and necessary conditions under which the original NP-hard/non-convex problem can be equivalently solved in a dual formulation. The proposed dual IHT algorithm is a super-gradient method for maximizing the non-smooth dual objective. An interesting finding is that the sparse recovery performance of dual IHT is invariant to the Restricted Isometry Property (RIP), which is required by virtually all the existing primal IHT algorithms without sparsity relaxation. Moreover, a stochastic variant of dual IHT is proposed for large-scale stochastic optimization. Numerical results demonstrate the superiority of dual IHT algorithms to the state-of-the-art primal IHT-style algorithms in model estimation accuracy and computational efficiency.",
"title": ""
},
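For readers unfamiliar with the primal IHT baseline that the dual method above is contrasted with, the sketch below shows the standard hard-thresholding operator and the plain (primal) IHT iteration for sparsity-constrained least squares. The step-size choice and iteration count are illustrative assumptions, and this is not the paper's dual algorithm.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def iht_least_squares(A, b, k, step=None, iters=200):
    """Primal IHT for min 0.5*||Ax - b||^2 subject to ||x||_0 <= k."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # gradient of the least-squares loss
        x = hard_threshold(x - step * grad, k)   # gradient step, then projection
    return x
```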
{
"docid": "225ac2816e26f156b16ad65401fcbaf6",
"text": "This paper investigates how internet users’ perception of control over their personal information affects how likely they are to click on online advertising on a social networking website. The paper uses data from a randomized field experiment that examined the effectiveness of personalizing ad text with user-posted personal information relative to generic text. The website gave users more control over their personally identifiable information in the middle of the field test. However, the website did not change how advertisers used data to target and personalize ads. Before the policy change, personalized ads did not perform particularly well. However, after this enhancement of perceived control over privacy, users were nearly twice as likely to click on personalized ads. Ads that targeted but did not use personalized text remained unchanged in effectiveness. The increase in effectiveness was larger for ads that used more unique private information to personalize their message and for target groups who were more likely to use opt-out privacy settings.",
"title": ""
}
] |
scidocsrr
|
fb1dac0bee58d622f78bb84c1f832af7
|
Association between online social networking and depression in high school students: behavioral physiology viewpoint.
|
[
{
"docid": "89c9ad792245fc7f7e7e3b00c1e8147a",
"text": "Contrasting hypotheses were posed to test the effect of Facebook exposure on self-esteem. Objective Self-Awareness (OSA) from social psychology and the Hyperpersonal Model from computer-mediated communication were used to argue that Facebook would either diminish or enhance self-esteem respectively. The results revealed that, in contrast to previous work on OSA, becoming self-aware by viewing one's own Facebook profile enhances self-esteem rather than diminishes it. Participants that updated their profiles and viewed their own profiles during the experiment also reported greater self-esteem, which lends additional support to the Hyperpersonal Model. These findings suggest that selective self-presentation in digital media, which leads to intensified relationship formation, also influences impressions of the self.",
"title": ""
}
] |
[
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "f82ce890d66c746a169a38fdad702749",
"text": "The following review paper presents an overview of the current crop yield forecasting methods and early warning systems for the global strategy to improve agricultural and rural statistics across the globe. Different sections describing simulation models, remote sensing, yield gap analysis, and methods to yield forecasting compose the manuscript. 1. Rationale Sustainable land management for crop production is a hierarchy of systems operating in— and interacting with—economic, ecological, social, and political components of the Earth. This hierarchy ranges from a field managed by a single farmer to regional, national, and global scales where policies and decisions influence crop production, resource use, economics, and ecosystems at other levels. Because sustainability concepts must integrate these diverse issues, agricultural researchers who wish to develop sustainable productive systems and policy makers who attempt to influence agricultural production are confronted with many challenges. A multiplicity of problems can prevent production systems from being sustainable; on the other hand, with sufficient attention to indicators of sustainability, a number of practices and policies could be implemented to accelerate progress. Indicators to quantify changes in crop production systems over time at different hierarchical levels are needed for evaluating the sustainability of different land management strategies. To develop and test sustainability concepts and yield forecast methods globally, it requires the implementation of long-term crop and soil management experiments that include measurements of crop yields, soil properties, biogeochemical fluxes, and relevant socioeconomic indicators. Long-term field experiments cannot be conducted with sufficient detail in space and time to find the best land management practices suitable for sustainable crop production. Crop and soil simulation models, when suitably tested in reasonably diverse space and time, provide a critical tool for finding combinations of management strategies to reach multiple goals required for sustainable crop production. The models can help provide land managers and policy makers with a tool to extrapolate experimental results from one location to others where there is a lack of response information. Agricultural production is significantly affected by environmental factors. Weather influences crop growth and development, causing large intra-seasonal yield variability. In addition, spatial variability of soil properties, interacting with the weather, cause spatial yield variability. Crop agronomic management (e.g. planting, fertilizer application, irrigation, tillage, and so on) can be used to offset the loss in yield due to effects of weather. As a result, yield forecasting represents an important tool for optimizing crop yield and to evaluate the crop-area insurance …",
"title": ""
},
{
"docid": "f6deeee48e0c8f1ed1d922093080d702",
"text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.",
"title": ""
},
{
"docid": "001104ca832b10553b28bbd713e6cbd5",
"text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"title": ""
},
{
"docid": "e82cd7c22668b0c9ed62b4afdf49d1f4",
"text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.",
"title": ""
},
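As a toy illustration of the fractional-N principle the tutorial above builds on, the sketch below uses a first-order delta-sigma (accumulator) modulator to dither a dual-modulus divider between N and N+1 so that the long-run average division ratio equals N plus the programmed fraction. The specific N, fraction, and cycle count are arbitrary, and higher-order noise shaping, the loop filter, and the VCO are deliberately omitted.

```python
def first_order_dsm(frac, n_int, cycles):
    """First-order delta-sigma modulator driving a dual-modulus divider:
    each reference cycle the divider value is n_int or n_int + 1, chosen so
    the average division ratio approaches n_int + frac."""
    acc = 0.0
    divides = []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:
            acc -= 1.0
            divides.append(n_int + 1)   # carry out: divide by N+1 this cycle
        else:
            divides.append(n_int)       # no carry: divide by N
    return divides

seq = first_order_dsm(frac=0.25, n_int=64, cycles=16)
print(seq)                      # pattern 64, 64, 64, 65 repeating
print(sum(seq) / len(seq))      # 64.25 on average
```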
{
"docid": "e095b0d96a6c0dcc87efbbc3e730b581",
"text": "In this paper, we present ObSteiner, an exact algorithm for the construction of obstacle-avoiding rectilinear Steiner minimum trees (OARSMTs) among complex rectilinear obstacles. This is the first paper to propose a geometric approach to optimally solve the OARSMT problem among complex obstacles. The optimal solution is constructed by the concatenation of full Steiner trees among complex obstacles, which are proven to be of simple structures in this paper. ObSteiner is able to handle complex obstacles, including both convex and concave ones. Benchmarks with hundreds of terminals among a large number of obstacles are solved optimally in a reasonable amount of time.",
"title": ""
},
{
"docid": "c05b6720cdfdf6170ccce6486d485dc0",
"text": "The naturalness of warps is gaining extensive attention in image stitching. Recent warps, such as SPHP and AANAP, use global similarity warps to mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistencies). In this paper, we propose a novel quasi-homography warp, which effectively balances the perspective distortion against the projective distortion in the non-overlapping region to create a more natural-looking panorama. Our approach formulates the warp as the solution of a bivariate system, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization, respectively. Because our proposed warp only relies on a global homography, it is thus totally parameter free. A comprehensive experiment shows that a quasi-homography warp outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that it wins most users’ favor, compared to homography and SPHP.",
"title": ""
},
{
"docid": "046245929e709ef2935c9413619ab3d7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d1eae0f247f1c2db9e3c544a65c041f",
"text": "This papers presents a new system using circular markers to estimate the pose of a camera. Contrary to most markersbased systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and do not need specific points being explicitly shown on the marker (like center, or axes orientation). Indeed, the center and orientation is encoded directly in the marker’s code. We can thus use the entire marker surface for the code design. After solving the back projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker’s code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.",
"title": ""
},
{
"docid": "38e6384522c9e3e961819ed5b00a7697",
"text": "Cloud gaming has been recognized as a promising shift in the online game industry, with the aim of implementing the “on demand” service concept that has achieved market success in other areas of digital entertainment such as movies and TV shows. The concepts of cloud computing are leveraged to render the game scene as a video stream that is then delivered to players in real-time. The main advantage of this approach is the capability of delivering high-quality graphics games to any type of end user device; however, at the cost of high bandwidth consumption and strict latency requirements. A key challenge faced by cloud game providers lies in configuring the video encoding parameters so as to maximize player Quality of Experience (QoE) while meeting bandwidth availability constraints. In this article, we tackle one aspect of this problem by addressing the following research question: Is it possible to improve service adaptation based on information about the characteristics of the game being streamed? To answer this question, two main challenges need to be addressed: the need for different QoE-driven video encoding (re-)configuration strategies for different categories of games, and how to determine a relevant game categorization to be used for assigning appropriate configuration strategies. We investigate these problems by conducting two subjective laboratory studies with a total of 80 players and three different games. Results indicate that different strategies should likely be applied for different types of games, and show that existing game classifications are not necessarily suitable for differentiating game types in this context. We thus further analyze objective video metrics of collected game play video traces as well as player actions per minute and use this as input data for clustering of games into two clusters. Subjective results verify that different video encoding configuration strategies may be applied to games belonging to different clusters.",
"title": ""
},
{
"docid": "93ea7c59bad8181b0379f39e00f4d2e8",
"text": "Breadth-First Search (BFS) is a key graph algorithm with many important applications. In this work, we focus on a special class of graph traversal algorithm - concurrent BFS - where multiple breadth-first traversals are performed simultaneously on the same graph. We have designed and developed a new approach called iBFS that is able to run i concurrent BFSes from i distinct source vertices, very efficiently on Graphics Processing Units (GPUs). iBFS consists of three novel designs. First, iBFS develops a single GPU kernel for joint traversal of concurrent BFS to take advantage of shared frontiers across different instances. Second, outdegree-based GroupBy rules enables iBFS to selectively run a group of BFS instances which further maximizes the frontier sharing within such a group. Third, iBFS brings additional performance benefit by utilizing highly optimized bitwise operations on GPUs, which allows a single GPU thread to inspect a vertex for concurrent BFS instances. The evaluation on a wide spectrum of graph benchmarks shows that iBFS on one GPU runs up to 30x faster than executing BFS instances sequentially, and on 112 GPUs achieves near linear speedup with the maximum performance of 57,267 billion traversed edges per second (TEPS).",
"title": ""
},
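The bitwise trick mentioned above (letting a single thread inspect one vertex for many BFS instances at once) can be illustrated even in scalar Python by keeping a per-vertex bitmask of which instances have visited that vertex. The sketch below is a CPU analogue of the idea, not the GPU kernel described in the paper; the graph and source list are toy examples.

```python
def concurrent_bfs(adj, sources):
    """Run one BFS per source vertex over the same graph.
    Bit b of visited[v]/frontier[v] refers to BFS instance b."""
    num_v = len(adj)
    visited = [0] * num_v
    frontier = [0] * num_v
    levels = [dict() for _ in sources]
    for b, s in enumerate(sources):
        visited[s] |= 1 << b
        frontier[s] |= 1 << b
        levels[b][s] = 0
    depth = 0
    while any(frontier):
        depth += 1
        nxt = [0] * num_v
        for u, fmask in enumerate(frontier):
            if not fmask:
                continue
            for v in adj[u]:
                new = fmask & ~visited[v]      # instances reaching v for the first time
                if new:
                    visited[v] |= new
                    nxt[v] |= new
                    b, m = 0, new
                    while m:                   # record the BFS level per instance
                        if m & 1:
                            levels[b][v] = depth
                        m >>= 1
                        b += 1
        frontier = nxt
    return levels

# Example: two BFS instances from vertices 0 and 3 on a 4-vertex cycle-like graph.
adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
print(concurrent_bfs(adj, [0, 3]))
```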
{
"docid": "a0566ac90d164db763c7efa977d4bc0d",
"text": "Dead-time controls for synchronous buck converter are challenging due to the difficulties in accurate sensing and processing the on/off dead-time errors. For the control of dead-times, an integral feedback control using switched capacitors and a fast timing sensing circuit composed of MOSFET differential amplifiers and switched current sources are proposed. Experiments for a 3.3 V input, 1.5 V-0.3 A output converter demonstrated 1.3 ~ 4.6% efficiency improvement over a wide load current range.",
"title": ""
},
{
"docid": "ce5ede79daee56d50f5b086ad8f18a28",
"text": "In order to improve the efficiency and classification ability of Support vector machines (SVM) based on stochastic gradient descent algorithm, three algorithms of improved stochastic gradient descent (SGD) are used to solve support vector machine, which are Momentum, Nesterov accelerated gradient (NAG), RMSprop. The experimental results show that the algorithm based on RMSprop for solving the linear support vector machine has faster convergence speed and higher testing precision on five datasets (Alpha, Gamma, Delta, Mnist, Usps).",
"title": ""
},
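To make the comparison above concrete, here is one plausible way to apply an RMSprop-scaled stochastic subgradient step to the regularized hinge loss of a linear SVM. The hyperparameters (learning rate, decay, epochs) are assumptions for illustration, and this is not claimed to be the authors' exact procedure.

```python
import numpy as np

def rmsprop_linear_svm(X, y, C=1.0, lr=0.01, beta=0.9, eps=1e-8, epochs=20, seed=0):
    """Linear SVM trained with RMSprop on the objective 0.5*||w||^2 + C*hinge.
    X: (n, d) features, y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    vw, vb = np.zeros(d), 0.0              # running means of squared gradients
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            gw, gb = w.copy(), 0.0         # gradient of the regularizer
            if margin < 1:                 # hinge-loss subgradient is active
                gw -= C * y[i] * X[i]
                gb -= C * y[i]
            vw = beta * vw + (1 - beta) * gw ** 2
            vb = beta * vb + (1 - beta) * gb ** 2
            w -= lr * gw / (np.sqrt(vw) + eps)   # RMSprop-scaled steps
            b -= lr * gb / (np.sqrt(vb) + eps)
    return w, b
```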
{
"docid": "dd732081865bb209276acd3bb76ee08f",
"text": "A 57-64-GHz low phase-error 5-bit switch-type phase shifter integrated with a low phase-variation variable gain amplifier (VGA) is implemented through TSMC 90-nm CMOS low-power technology. Using the phase compensation technique, the proposed VGA can provide appropriate gain tuning with almost constant phase characteristics, thus greatly reducing the phase-tuning complexity in a phased-array system. The measured root mean square (rms) phase error of the 5-bit phase shifter is 2° at 62 GHz. The phase shifter has a low group-delay deviation (phase distortion) of +/- 8.5 ps and an excellent insertion loss flatness of ±0.8 dB for a specific phase-shifting state, across 57-64 GHz. For all 32 states, the insertion loss is 14.6 ± 3 dB, including pad loss at 60 GHz. For the integrated phase shifter and VGA, the VGA can provide 6.2-dB gain tuning range, which is wide enough to cover the loss variation of the phase shifter, with only 1.86° phase variation. The measured rms phase error of the 5-bit phase shifter and VGA is 3.8° at 63 GHz. The insertion loss of all 32 states is 5.4 dB, including pad loss at 60 GHz, and the loss flatness is ±0.8 dB over 57-64 GHz. To the best of our knowledge, the 5-bit phase shifter presents the best rms phase error at center frequency among the V-band switch-type phase shifter.",
"title": ""
},
{
"docid": "646a1e7c1a71dc89fa92d76a19c7389e",
"text": "As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality system-atically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means to model cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: (1) the GPU's hierarchy of threads, warps, threadblocks, and sets of active threads, (2) conditional and non-uniform latencies, (3) cache associativity, (4) miss-status holding-registers, and (5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is faster and even more accurate compared to the GPGPU-Sim simulator.",
"title": ""
},
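The GPU model described above starts from the classical sequential notion of reuse (stack) distance. The short sketch below computes that sequential baseline for a small address trace; it is a naive O(n^2) reference implementation kept simple for clarity, not the paper's GPU-aware model.

```python
def reuse_distances(trace):
    """For each access, return the number of distinct addresses touched since
    the previous access to the same address (inf for a first use)."""
    last_pos = {}
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            window = trace[last_pos[addr] + 1:i]   # accesses since the last use
            distances.append(len(set(window)))
        else:
            distances.append(float("inf"))
        last_pos[addr] = i
    return distances

# Example: in the trace A B C A B, the second A has reuse distance 2.
print(reuse_distances(["A", "B", "C", "A", "B"]))  # [inf, inf, inf, 2, 2]
```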
{
"docid": "ec772eccaa45eb860582820e751f3415",
"text": "Navigational assistance aims to help visually-impaired people to ambulate the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments prove the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectivity and versatility of the assistive framework.",
"title": ""
},
{
"docid": "db61ab44bfb0e7eddf2959121a79a2ee",
"text": "This paper analyzes the supply and demand for Bitcoinbased Ponzi schemes. There are a variety of these types of scams: from long cons such as Bitcoin Savings & Loans to overnight doubling schemes that do not take off. We investigate what makes some Ponzi schemes successful and others less so. By scouring 11 424 threads on bitcointalk. org, we identify 1 780 distinct scams. Of these, half lasted a week or less. Using survival analysis, we identify factors that affect scam persistence. One approach that appears to elongate the life of the scam is when the scammer interacts a lot with their victims, such as by posting more than a quarter of the comments in the related thread. By contrast, we also find that scams are shorter-lived when the scammers register their account on the same day that they post about their scam. Surprisingly, more daily posts by victims is associated with the scam ending sooner.",
"title": ""
},
{
"docid": "35a063ab339f32326547cc54bee334be",
"text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.",
"title": ""
},
{
"docid": "6de71e8106d991d2c3d2b845a9e0a67e",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "0f3d520a6d09c136816a9e0493c45db1",
"text": "Specular reflection exists widely in photography and causes the recorded color deviating from its true value, thus, fast and high quality highlight removal from a single nature image is of great importance. In spite of the progress in the past decades in highlight removal, achieving wide applicability to the large diversity of nature scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination parallel subspace, a property called pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculation, and thus can remove highlight from high resolution images fast. Experiments show that this method is of superior performance in various challenging cases.",
"title": ""
}
] |
scidocsrr
|
ca2584c9be2200d80892a7708347c83b
|
An Investigation of the Role of Dependency in Predicting continuance Intention to Use Ubiquitous Media Systems: Combining a Media system Perspective with Expectation-confirmation Theories
|
[
{
"docid": "e83e6284d3c9cf8fddf972a25d869a1b",
"text": "Internet-based learning systems are being used in many universities and firms but their adoption requires a solid understanding of the user acceptance processes. Our effort used an extended version of the technology acceptance model (TAM), including cognitive absorption, in a formal empirical study to explain the acceptance of such systems. It was intended to provide insight for improving the assessment of on-line learning systems and for enhancing the underlying system itself. The work involved the examination of the proposed model variables for Internet-based learning systems acceptance. Using an on-line learning system as the target technology, assessment of the psychometric properties of the scales proved acceptable and confirmatory factor analysis supported the proposed model structure. A partial-least-squares structural modeling approach was used to evaluate the explanatory power and causal links of the model. Overall, the results provided support for the model as explaining acceptance of an on-line learning system and for cognitive absorption as a variable that influences TAM variables. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1b82e1fa8619480ba194c83c5370da5d",
"text": "This study presents an extended technology acceptance model (TAM) that integrates innovation diffusion theory, perceived risk and cost into the TAM to investigate what determines user mobile commerce (MC) acceptance. The proposed model was empirically tested using data collected from a survey of MC consumers. The structural equation modeling technique was used to evaluate the causal model and confirmatory factor analysis was performed to examine the reliability and validity of the measurement model. Our findings indicated that all variables except perceived ease of use significantly affected users’ behavioral intent. Among them, the compatibility had the most significant influence. Furthermore, a striking, and somewhat puzzling finding was the positive influence of perceived risk on behavioral intention to use. The implication of this work to both researchers and practitioners is discussed. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ca8aa3e930fd36a16ac36546a25a1fde",
"text": "Accurate State-of-Charge (SOC) estimation of Li-ion batteries is essential for effective battery control and energy management of electric and hybrid electric vehicles. To this end, first, the battery is modelled by an OCV-R-RC equivalent circuit. Then, a dual Bayesian estimation scheme is developed-The battery model parameters are identified online and fed to the SOC estimator, the output of which is then fed back to the parameter identifier. Both parameter identification and SOC estimation are treated in a Bayesian framework. The square-root recursive least-squares estimator and the extended Kalman-Bucy filter are systematically paired up for the first time in the battery management literature to tackle the SOC estimation problem. The proposed method is finally compared with the convectional Coulomb counting method. The results indicate that the proposed method significantly outperforms the Coulomb counting method in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "e3de7dc210e780e1c460a505628ea4ed",
"text": "We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.\n We train our network with 3--5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence.",
"title": ""
},
{
"docid": "1262ce9e36e4208a1d8e641e5078e083",
"text": "D its fundamental role in legitimizing the modern state system, nationalism has rarely been linked to the outbreak of political violence in the recent literature on ethnic conflict and civil war. to a large extent, this is because the state is absent from many conventional theories of ethnic conflict. indeed, some studies analyze conflict between ethnic groups under conditions of state failure, thus making the absence of the state the very core of the causal argument. others assume that the state is ethnically neutral and try to relate ethnodemographic measures, such as fractionalization and polarization, to civil war. in contrast to these approaches, we analyze the state as an institution that is captured to different degrees by representatives of particular ethnic communities, and thus we conceive of ethnic wars as the result of competing ethnonationalist claims to state power. While our work relates to a rich research tradition that links the causes of such conflicts to the mobilization of ethnic minorities, it also goes beyond this tradition by introducing a new data set that addresses some of the shortcomings of this tradition. our analysis is based on the Ethnic power relations data set (epr), which covers all politically relevant ethnic groups and their access to power around the world from 1946 through 2005. this data set improves significantly on the widely used minorities at risk data set, which restricts its sample to mobilized",
"title": ""
},
{
"docid": "2dd42cce112c61950b96754bb7b4df10",
"text": "Hierarchical methods have been widely explored for object recognition, which is a critical component of scene understanding. However, few existing works are able to model the contextual information (e.g., objects co-occurrence) explicitly within a single coherent framework for scene understanding. Towards this goal, in this paper we propose a novel three-level (superpixel level, object level and scene level) hierarchical model to address the scene categorization problem. Our proposed model is a coherent probabilistic graphical model that captures the object co-occurrence information for scene understanding with a probabilistic chain structure. The efficacy of the proposed model is demonstrated by conducting experiments on the LabelMe dataset.",
"title": ""
},
{
"docid": "385c7c16af40ae13b965938ac3bce34c",
"text": "The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods. This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines. The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets are conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with a order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.",
"title": ""
},
{
"docid": "a1cd4a4ce70c9c8672eee5ffc085bf63",
"text": "Ternary logic is a promising alternative to conventional binary logic, since it is possible to achieve simplicity and energy efficiency due to the reduced circuit overhead. In this paper, a ternary magnitude comparator design based on Carbon Nanotube Field Effect Transistors (CNFETs) is presented. This design eliminates the usage of complex ternary decoder which is a part of existing designs. Elimination of decoder results in reduction of delay and power. Simulations of proposed and existing designs are done on HSPICE and results proves that the proposed 1-bit comparator consumes 81% less power and shows delay advantage of 41.6% compared to existing design. Further a methodology to extend the 1-bit comparator design to n-bit comparator design is also presented.",
"title": ""
},
{
"docid": "0c4f02b3b361d60da1aec0f0c100dcf9",
"text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.",
"title": ""
},
{
"docid": "e1d3708e826499d7f2e656b66303734f",
"text": "Entity Resolution constitutes a core task for data integration that, due to its quadratic complexity, typically scales to large datasets through blocking methods. These can be configured in two ways. The schema-based configuration relies on schema information in order to select signatures of high distinctiveness and low noise, while the schema-agnostic one treats every token from all attribute values as a signature. The latter approach has significant potential, as it requires no fine-tuning by human experts and it applies to heterogeneous data. Yet, there is no systematic study on its relative performance with respect to the schema-based configuration. This work covers this gap by comparing analytically the two configurations in terms of effectiveness, time efficiency and scalability. We apply them to 9 established blocking methods and to 11 benchmarks of structured data. We provide valuable insights into the internal functionality of the blocking methods with the help of a novel taxonomy. Our studies reveal that the schema-agnostic configuration offers unsupervised and robust definition of blocking keys under versatile settings, trading a higher computational cost for a consistently higher recall than the schema-based one. It also enables the use of state-of-the-art blocking methods without schema knowledge.",
"title": ""
},
{
"docid": "81d4baaf6a22a7a480e4568ae05de1db",
"text": "Procedural textures are normally generated from mathematical models with parameters carefully selected by experienced users. However, for naive users, the intuitive way to obtain a desired texture is to provide semantic descriptions such as ”regular,” ”lacelike,” and ”repetitive” and then a procedural model with proper parameters will be automatically suggested to generate the corresponding textures. By contrast, it is less practical for users to learn mathematical models and tune parameters based on multiple examinations of large numbers of generated textures. In this study, we propose a novel framework that generates procedural textures according to user-defined semantic descriptions, and we establish a mapping between procedural models and semantic texture descriptions. First, based on a vocabulary of semantic attributes collected from psychophysical experiments, a multi-label learning method is employed to annotate a large number of textures with semantic attributes to form a semantic procedural texture dataset. Then, we derive a low dimensional semantic space in which the semantic descriptions can be separated from one other. Finally, given a set of semantic descriptions, the diverse properties of the samples in the semantic space can lead the framework to find an appropriate generation model that uses appropriate parameters to produce a desired texture. The experimental results show that the proposed framework is effective and that the generated textures closely correlate with the input semantic descriptions.",
"title": ""
},
{
"docid": "b4a8541c2870ea3d91819c0c0de68ad3",
"text": "The paper will describe various types of security issues which include confidentality, integrity and availability of data. There exists various threats to security issues traffic analysis, snooping, spoofing, denial of service attack etc. The asymmetric key encryption techniques may provide a higher level of security but compared to the symmetric key encryption Although we have existing techniques symmetric and assymetric key cryptography methods but there exists security concerns. A brief description of proposed framework is defined which uses the random combination of public and private keys. The mechanisms includes: Integrity, Availability, Authentication, Nonrepudiation, Confidentiality and Access control which is achieved by private-private key model as the user is restricted both at sender and reciever end which is restricted in other models. A review of all these systems is described in this paper.",
"title": ""
},
{
"docid": "9edf40bfd6875591543ff46e5e211c74",
"text": "The brain is thought to sense gut stimuli only via the passive release of hormones. This is because no connection has been described between the vagus and the putative gut epithelial sensor cell—the enteroendocrine cell. However, these electrically excitable cells contain several features of epithelial transducers. Using a mouse model, we found that enteroendocrine cells synapse with vagal neurons to transduce gut luminal signals in milliseconds by using glutamate as a neurotransmitter. These synaptically connected enteroendocrine cells are referred to henceforth as neuropod cells. The neuroepithelial circuit they form connects the intestinal lumen to the brainstem in one synapse, opening a physical conduit for the brain to sense gut stimuli with the temporal precision and topographical resolution of a synapse.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "6fa6a26b351c45ac5f33f565bc9c01e8",
"text": "Transfer learning, or inductive transfer, refers to the transfer of knowledge from a source task to a target task. In the context of convolutional neural networks (CNNs), transfer learning can be implemented by transplanting the learned feature layers from one CNN (derived from the source task) to initialize another (for the target task). Previous research has shown that the choice of the source CNN impacts the performance of the target task. In the current literature, there is no principled way for selecting a source CNN for a given target task despite the increasing availability of pre-trained source CNNs. In this paper we investigate the possibility of automatically ranking source CNNs prior to utilizing them for a target task. In particular, we present an information theoretic framework to understand the source-target relationship and use this as a basis to derive an approach to automatically rank source CNNs in an efficient, zero-shot manner. The practical utility of the approach is thoroughly evaluated using the PlacesMIT dataset, MNIST dataset and a real-world MRI database. Experimental results demonstrate the efficacy of the proposed ranking method for transfer learning.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "9a82f33d84cd622ccd66a731fc9755de",
"text": "To discover relationships and associations between pairs of variables in large data sets have become one of the most significant challenges for bioinformatics scientists. To tackle this problem, maximal information coefficient (MIC) is widely applied as a measure of the linear or non-linear association between two variables. To improve the performance of MIC calculation, in this work we present MIC++, a parallel approach based on the heterogeneous accelerators including Graphic Processing Unit (GPU) and Field Programmable Gate Array (FPGA) engines, focusing on both coarse-grained and fine-grained parallelism. As the evaluation of MIC++, we have demonstrated the performance on the state-of-the-art GPU accelerators and the FPGA-based accelerators. Preliminary estimated results show that the proposed parallel implementation can significantly achieve more than 6X-14X speedup using GPU, and 4X-13X using FPGA-based accelerators.",
"title": ""
},
{
"docid": "b0d9c5716052e9cfe9d61d20e5647c8c",
"text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.",
"title": ""
},
{
"docid": "ce53bf5131c125fdca2086e28ccca9d7",
"text": "When a firm practices conservative accounting, changes in the amount of its investments can affect the quality of its earnings. Growth in investment reduces reported earnings and creates reserves. Reducing investment releases those reserves, increasing earnings. If the change in investment is temporary, then current earnings is temporarily depressed or inflated, and thus is not a good indicator of future earnings. This study develops diagnostic measures of this joint effect of investment and conservative accounting. We find that these measures forecast differences in future return on net operating assets relative to current return on net operating assets. Moreover, these measures also forecast stock returns-indicating that investors do not appreciate how conservatism and changes in investment combine to raise questions about the quality of reported earnings.",
"title": ""
},
{
"docid": "6e4f71c411a57e3f705dbd0979c118b1",
"text": "BACKGROUND\nStress perception is highly subjective, and so the complexity of nursing practice may result in variation between nurses in their identification of sources of stress, especially when the workplace and roles of nurses are changing, as is currently occurring in the United Kingdom health service. This could have implications for measures being introduced to address problems of stress in nursing.\n\n\nAIMS\nTo identify nurses' perceptions of workplace stress, consider the potential effectiveness of initiatives to reduce distress, and identify directions for future research.\n\n\nMETHOD\nA literature search from January 1985 to April 2003 was conducted using the key words nursing, stress, distress, stress management, job satisfaction, staff turnover and coping to identify research on sources of stress in adult and child care nursing. Recent (post-1997) United Kingdom Department of Health documents and literature about the views of practitioners was also consulted.\n\n\nFINDINGS\nWorkload, leadership/management style, professional conflict and emotional cost of caring have been the main sources of distress for nurses for many years, but there is disagreement as to the magnitude of their impact. Lack of reward and shiftworking may also now be displacing some of the other issues in order of ranking. Organizational interventions are targeted at most but not all of these sources, and their effectiveness is likely to be limited, at least in the short to medium term. Individuals must be supported better, but this is hindered by lack of understanding of how sources of stress vary between different practice areas, lack of predictive power of assessment tools, and a lack of understanding of how personal and workplace factors interact.\n\n\nCONCLUSIONS\nStress intervention measures should focus on stress prevention for individuals as well as tackling organizational issues. Achieving this will require further comparative studies, and new tools to evaluate the intensity of individual distress.",
"title": ""
},
{
"docid": "517a7833e209403cb3db6f3e58c5f3e4",
"text": "Nowadays ontologies present a growing interest in Data Fusion applications. As a matter of fact, the ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that ontologies describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However none of the tools meets exactly our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances.",
"title": ""
}
] |
scidocsrr
|
a2d04f9748040ba26485b311176ecc8a
|
Very High Frequency PWM Buck Converters Using Monolithic GaN Half-Bridge Power Stages With Integrated Gate Drivers
|
[
{
"docid": "e09d142b072122da62ebe79650f42cc5",
"text": "This paper describes a synchronous buck converter based on a GaN-on-SiC integrated circuit, which includes a halfbridge power stage, as well as a modified active pull-up gate driver stage. The integrated modified active pull-up driver takes advantage of depletion-mode device characteristics to achieve fast switching with low power consumption. Design principles and results are presented for a synchronous buck converter prototype operating at 100 MHz switching frequency, delivering up to 7 W from 20 V input voltage. Measured power-stage efficiency peaks above 91%, and remains above 85% over a wide range of operating conditions. Experimental results show that the converter has the ability to accurately track a 20 MHz bandwidth LTE envelope signal with 83.7% efficiency.",
"title": ""
},
{
"docid": "3f77b59dc39102eb18e31dbda0578ecb",
"text": "GaN high electron mobility transistors (HEMTs) are well suited for high-frequency operation due to their lower on resistance and device capacitance compared with traditional silicon devices. When grown on silicon carbide, GaN HEMTs can also achieve very high power density due to the enhanced power handling capabilities of the substrate. As a result, GaN-on-SiC HEMTs are increasingly popular in radio-frequency power amplifiers, and applications as switches in high-frequency power electronics are of high interest. This paper explores the use of GaN-on-SiC HEMTs in conventional pulse-width modulated switched-mode power converters targeting switching frequencies in the tens of megahertz range. Device sizing and efficiency limits of this technology are analyzed, and design principles and guidelines are given to exploit the capabilities of the devices. The results are presented for discrete-device and integrated implementations of a synchronous Buck converter, providing more than 10-W output power supplied from up to 40 V with efficiencies greater than 95% when operated at 10 MHz, and greater than 90% at switching frequencies up to 40 MHz. As a practical application of this technology, the converter is used to accurately track a 3-MHz bandwidth communication envelope signal with 92% efficiency.",
"title": ""
}
] |
[
{
"docid": "172f206c8b3b0bc0d75793a13fa9ef88",
"text": "Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets— WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information.",
"title": ""
},
{
"docid": "326cb7464df9c9361be4e27d82f61455",
"text": "We implemented an attack against WEP, the link-layer security protocol for 802.11 networks. The attack was described in a recent paper by Fluhrer, Mantin, and Shamir. With our implementation, and permission of the network administrator, we were able to recover the 128 bit secret key used in a production network, with a passive attack. The WEP standard uses RC4 IVs improperly, and the attack exploits this design failure. This paper describes the attack, how we implemented it, and some optimizations to make the attack more efficient. We conclude that 802.11 WEP is totally insecure, and we provide some recommendations.",
"title": ""
},
{
"docid": "e0633afb6f4dcb1561dbb23b6e3aa713",
"text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.",
"title": ""
},
{
"docid": "8da0bdec21267924d16f9a04e6d9a7ef",
"text": "Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section).",
"title": ""
},
{
"docid": "44abac09424c717f3a691e4ba2640c1a",
"text": "In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "28fcdd3282dd57c760e9e2628764c0f8",
"text": "Constructing a valid measure of presence and discovering the factors that contribute to presence have been much sought after goals of presence researchers and at times have generated controversy among them. This paper describes the results of principal-components analyses of Presence Questionnaire (PQ) data from 325 participants following exposure to immersive virtual environments. The analyses suggest that a 4-factor model provides the best fit to our data. The factors are Involvement, Adaptation/Immersion, Sensory Fidelity, and Interface Quality. Except for the Adaptation/Immersion factor, these factors corresponded to those identified in a cluster analysis of data from an earlier version of the questionnaire. The existence of an Adaptation/Immersion factor leads us to postulate that immersion is greater for those individuals who rapidly and easily adapt to the virtual environment. The magnitudes of the correlations among the factors indicate moderately strong relationships among the 4 factors. Within these relationships, Sensory Fidelity items seem to be more closely related to Involvement, whereas Interface Quality items appear to be more closely related to Adaptation/Immersion, even though there is a moderately strong relationship between the Involvement and Adaptation/Immersion factors.",
"title": ""
},
{
"docid": "b08027d8febf1d7f8393b9934739847d",
"text": "Sarcasm is generally characterized as a figure of speech that involves the substitution of a literal by a figurative meaning, which is usually the opposite of the original literal meaning. We re-frame the sarcasm detection task as a type of word sense disambiguation problem, where the sense of a word is either literal or sarcastic. We call this the Literal/Sarcastic Sense Disambiguation (LSSD) task. We address two issues: 1) how to collect a set of target words that can have either literal or sarcastic meanings depending on context; and 2) given an utterance and a target word, how to automatically detect whether the target word is used in the literal or the sarcastic sense. For the latter, we investigate several distributional semantics methods and show that a Support Vector Machines (SVM) classifier with a modified kernel using word embeddings achieves a 7-10% F1 improvement over a strong lexical baseline.",
"title": ""
},
{
"docid": "653b9148a229bd8b2c1909d98d67e7a4",
"text": "In this work, a beam switched antenna system based on a planar connected antenna array (CAA) is proposed at 28 GHz for 5G applications. The antenna system consists of a 4 × 4 connected slot antenna elements. It is covering frequency band from 27.4 GHz to 28.23 GHz with at least −10dB bandwidth of 830 MHz. It is modeled on a commercially available RO3003 substrate with ∊r equal to 3.3. The dimensions of the board are equal to 61×54×0.13 mm3. The proposed design is compact and low profile. A Butler matrix based feed network is used to steer the beam at different locations.",
"title": ""
},
{
"docid": "fb0b06eb6238c008bef7d3b2e9a80792",
"text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.",
"title": ""
},
{
"docid": "a00fe5032a5e1835120135e6e504d04b",
"text": "Perfect information Monte Carlo (PIMC) search is the method of choice for constructing strong Al systems for trick-taking card games. PIMC search evaluates moves in imperfect information games by repeatedly sampling worlds based on state inference and estimating move values by solving the corresponding perfect information scenarios. PIMC search performs well in trick-taking card games despite the fact that it suffers from the strategy fusion problem, whereby the game's information set structure is ignored because moves are evaluated opportunistically in each world. In this paper we describe imperfect information Monte Carlo (IIMC) search, which aims at mitigating this problem by basing move evaluation on more realistic playout sequences rather than perfect information move values. We show that RecPIMC - a recursive IIMC search variant based on perfect information evaluation - performs considerably better than PIMC search in a large class of synthetic imperfect information games and the popular card game of Skat, for which PIMC search is the state-of-the-art cardplay algorithm.",
"title": ""
},
{
"docid": "1f7bd85c5b28f97565d8b38781e875ab",
"text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.",
"title": ""
},
{
"docid": "e05ea52ecf42826e73ed7095ed162557",
"text": "This paper aims at detecting and recognizing fish species from underwater images by means of Fast R-CNN (Regions with Convolutional Neural and Networks) features. Encouraged by powerful recognition results achieved by Convolutional Neural Networks (CNNs) on generic VOC and ImageNet dataset, we apply this popular deep ConvNets to domain-specific underwater environment which is more complicated than overland situation, using a new dataset of 24277 ImageCLEF fish images belonging to 12 classes. The experimental results demonstrate the promising performance of our networks. Fast R-CNN improves mean average precision (mAP) by 11.2% relative to Deformable Parts Model (DPM) baseline-achieving a mAP of 81.4%, and detects 80× faster than previous R-CNN on a single fish image.",
"title": ""
},
{
"docid": "19acedd03589d1fd1173dd1565d11baf",
"text": "This is the first report on the microbial diversity of xaj-pitha, a rice wine fermentation starter culture through a metagenomics approach involving Illumine-based whole genome shotgun (WGS) sequencing method. Metagenomic DNA was extracted from rice wine starter culture concocted by Ahom community of Assam and analyzed using a MiSeq® System. A total of 2,78,231 contigs, with an average read length of 640.13 bp, were obtained. Data obtained from the use of several taxonomic profiling tools were compared with previously reported microbial diversity studies through the culture-dependent and culture-independent method. The microbial community revealed the existence of amylase producers, such as Rhizopus delemar, Mucor circinelloides, and Aspergillus sp. Ethanol producers viz., Meyerozyma guilliermondii, Wickerhamomyces ciferrii, Saccharomyces cerevisiae, Candida glabrata, Debaryomyces hansenii, Ogataea parapolymorpha, and Dekkera bruxellensis, were found associated with the starter culture along with a diverse range of opportunistic contaminants. The bacterial microflora was dominated by lactic acid bacteria (LAB). The most frequent occurring LAB was Lactobacillus plantarum, Lactobacillus brevis, Leuconostoc lactis, Weissella cibaria, Lactococcus lactis, Weissella para mesenteroides, Leuconostoc pseudomesenteroides, etc. Our study provided a comprehensive picture of microbial diversity associated with rice wine fermentation starter and indicated the superiority of metagenomic sequencing over previously used techniques.",
"title": ""
},
{
"docid": "9f7aaba61ef395f85252820edae5db1b",
"text": "Theory and research on sex differences in adjustment focus largely on parental, societal, and biological influences. However, it also is important to consider how peers contribute to girls' and boys' development. This article provides a critical review of sex differences in several peer relationship processes, including behavioral and social-cognitive styles, stress and coping, and relationship provisions. The authors present a speculative peer-socialization model based on this review in which the implications of these sex differences for girls' and boys' emotional and behavioral development are considered. Central to this model is the idea that sex-linked relationship processes have costs and benefits for girls' and boys' adjustment. Finally, the authors present recent research testing certain model components and propose approaches for testing understudied aspects of the model.",
"title": ""
},
{
"docid": "89ed5dc0feb110eb3abc102c4e50acaf",
"text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.",
"title": ""
},
{
"docid": "4ecc49bb99ade138783899b6f9b47f16",
"text": "This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We nd that in this task model-based approaches support reinforcement learning from smaller amounts of training data and eecient handling of changing goals.",
"title": ""
},
{
"docid": "f0af0497727f2256aa52b30c3a7f64d1",
"text": "This paper presented a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm was introduced. The particles' diversity was improved through periodically monitoring aggregation degree of the particle swarm. On the later development of the PSO algorithm, it has been taken strategy of the Gaussian mutation to the best particle's position, which enhanced the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method improves the convergence precision and speed of PSO algorithm effectively.",
"title": ""
},
{
"docid": "131c163caef9ab345eada4b2d423aa9d",
"text": "Text pre-processing of Arabic Language is a challenge and crucial stage in Text Categorization (TC) particularly and Text Mining (TM) generally. Stemming algorithms can be employed in Arabic text preprocessing to reduces words to their stems/or root. Arabic stemming algorithms can be ranked, according to three category, as root-based approach (ex. Khoja); stem-based approach (ex. Larkey); and statistical approach (ex. N-Garm). However, no stemming of this language is perfect: The existing stemmers have a small efficiency. In this paper, in order to improve the accuracy of stemming and therefore the accuracy of our proposed TC system, an efficient hybrid method is proposed for stemming Arabic text. The effectiveness of the aforementioned four methods was evaluated and compared in term of the F-measure of the Naïve Bayesian classifier and the Support Vector Machine classifier used in our TC system. The proposed stemming algorithm was found to supersede the other stemming ones: The obtained results illustrate that using the proposed stemmer enhances greatly the performance of Arabic Text Categorization.",
"title": ""
},
{
"docid": "7a62e5a78eabbcbc567d5538a2f35434",
"text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt",
"title": ""
}
] |
scidocsrr
|
762b69459f5f9cbbb3e67b5bb6528518
|
Modelling of a special class of spherical parallel manipulators with Euler parameters
|
[
{
"docid": "8fa0c59e04193ff1375b3ed544847229",
"text": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of a unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "b427ebf5f9ce8af9383f74dc86819583",
"text": "This paper deals with the in-depth kinematic analysis of a special parallel wrist, called the agile eye. The agile eye is a three-legged spherical parallel robot with revolute joints, in which all pairs of adjacent joint axes are orthogonal. Its most peculiar feature, demonstrated in this paper for the first time, is that its workspace is unlimited and flawed only by six singularity curves (instead of surfaces). These curves correspond to self-motions of the mobile platform and of the legs, or to a lockup configuration. This paper also demonstrates that the four solutions to the direct kinematics of the agile eye (assembly modes) have a simple direct relationship with the eight solutions to the inverse kinematics (working modes)",
"title": ""
}
] |
[
{
"docid": "175fa180bc18a59dd6855d469aed91ec",
"text": "A new solution of the inverse kinematics task for a 3-DOF parallel manipulator with a R-P -S joint structure is obtained for a given position of end-effector in the form of simple position equations. Based on this the number of the inverse kinematics task solutions was investigated, in general, equal to four. We identify the size of the manipulator feasible area and simple relationships are found between the position and orientation of the platform. We prove a new theorem stating that, while the end-effector traces a circular horizontal path with its centre at the vertical z-axis, the norm of the joint coordinates vector remains constant.",
"title": ""
},
{
"docid": "d77a8c630e50ed2879cafba7367ed456",
"text": "A survey found the language in use in introductory programming classes in the top U.S. computer science schools.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "c7993af6bf01f8b35f5494e5a564d757",
"text": "Microservice Architectures (MA) have the potential to increase the agility of software development. In an era where businesses require software applications to evolve to support emerging software requirements, particularly for Internet of Things (IoT) applications, we examine the issue of microservice granularity and explore its effect upon application latency. Two approaches to microservice deployment are simulated; the first with microservices in a single container, and the second with microservices partitioned across separate containers. We observed a negligible increase in service latency for the multiple container deployment over a single container.",
"title": ""
},
{
"docid": "b0b84a9f7f694dd8d7e0deb1533c4de5",
"text": "Medical institutes use Electronic Medical Record (EMR) to record a series of medical events, including diagnostic information (diagnosis codes), procedures performed (procedure codes) and admission details. Plenty of data mining technologies are applied in the EMR data set for knowledge discovery, which is precious to medical practice. The knowledge found is conducive to develop treatment plans, improve health care and reduce medical expenses, moreover, it could also provide further assistance to predict and control outbreaks of epidemic disease. The growing social value it creates has made it a hot spot for experts and scholars. In this paper, we will summarize the research status of data mining technologies on EMR, and analyze the challenges that EMR research is confronting currently.",
"title": ""
},
{
"docid": "a78caf89bb51dca3a8a95f7736ae1b2b",
"text": "The understanding of sentences involves not only the retrieval of the meaning of single words, but the identification of the relation between a verb and its arguments. The way the brain manages to process word meaning and syntactic relations during language comprehension on-line still is a matter of debate. Here we review the different views discussed in the literature and report data from crucial experiments investigating the temporal and neurotopological parameters of different information types encoded in verbs, i.e. word category information, the verb's argument structure information, the verb's selectional restriction and the morphosyntactic information encoded in the verb's inflection. The neurophysiological indices of the processes dealing with these different information types suggest an initial independence of the processing of word category information from other information types as the basis of local phrase structure building, and a later processing stage during which different information types interact. The relative ordering of the subprocesses appears to be universal, whereas the absolute timing of when during later phrases interaction takes places varies as a function of when the relevant information becomes available. Moreover, the neurophysiological indices for non-local dependency relations vary as a function of the morphological richness of the language.",
"title": ""
},
{
"docid": "e680f8b83e7a2137321cc644724827de",
"text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.",
"title": ""
},
{
"docid": "a8553e9f90e8766694f49dcfdeab83b7",
"text": "The need for solid-state ac-dc converters to improve power quality in terms of power factor correction, reduced total harmonic distortion at input ac mains, and precisely regulated dc output has motivated the investigation of several topologies based on classical converters such as buck, boost, and buck-boost converters. Boost converters operating in continuous-conduction mode have become particularly popular because reduced electromagnetic interference levels result from their utilization. Within this context, this paper introduces a bridgeless boost converter based on a three-state switching cell (3SSC), whose distinct advantages are reduced conduction losses with the use of magnetic elements with minimized size, weight, and volume. The approach also employs the principle of interleaved converters, as it can be extended to a generic number of legs per winding of the autotransformers and high power levels. A literature review of boost converters based on the 3SSC is initially presented so that key aspects are identified. The theoretical analysis of the proposed converter is then developed, while a comparison with a conventional boost converter is also performed. An experimental prototype rated at 1 kW is implemented to validate the proposal, as relevant issues regarding the novel converter are discussed.",
"title": ""
},
{
"docid": "66a49a50b63892a857a40531630be800",
"text": "We present a neural network architecture applied to the problem of refining a dense disparity map generated by a stereo algorithm to which we have no access. Our approach is able to learn which disparity values should be modified and how, from a training set of images, estimated disparity maps and the corresponding ground truth. Its only input at test time is a disparity map and the reference image. Two design characteristics are critical for the success of our network: (i) it is formulated as a recurrent neural network, and (ii) it estimates the output refined disparity map as a combination of residuals computed at multiple scales, that is at different up-sampling and down-sampling rates. The first property allows the network, which we named RecResNet, to progressively improve the disparity map, while the second property allows the corrections to come from different scales of analysis, addressing different types of errors in the current disparity map. We present competitive quantitative and qualitative results on the KITTI 2012 and 2015 benchmarks that surpass the accuracy of previous disparity refinement methods.",
"title": ""
},
{
"docid": "76d1509549ba64157911e6b723f6ebc5",
"text": "A single-stage soft-switching converter is proposed for universal line voltage applications. A boost type of active-clamp circuit is used to achieve zero-voltage switching operation of the power switches. A simple DC-link voltage feedback scheme is applied to the proposed converter. A resonant voltage-doubler rectifier helps the output diodes to achieve zero-current switching operation. The reverse-recovery losses of the output diodes can be eliminated without any additional components. The DC-link capacitor voltage can be reduced, providing reduced voltage stresses of switching devices. Furthermore, power conversion efficiency can be improved by the soft-switching operation of switching devices. The performance of the proposed converter is evaluated on a 160-W (50 V/3.2 A) experimental prototype. The proposed converter complies with International Electrotechnical Commission (IEC) 1000-3-2 Class-D requirements for the light-emitting diode power supply of large-sized liquid crystal displays, maintaining the DC-link capacitor voltage within 400 V under the universal line voltage (90-265 Vrms).",
"title": ""
},
{
"docid": "63b283d40abcccd17b4771535ac000e4",
"text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human comprehensible.",
"title": ""
},
{
"docid": "1f4ff9d732b3512ee9b105f084edd3d2",
"text": "Today, as Network environments become more complex and cyber and Network threats increase, Organizations use wide variety of security solutions against today's threats. For proper and centralized control and management, range of security features need to be integrated into unified security package. Unified threat management (UTM) as a comprehensive network security solution, integrates all of security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). Specially is used as a router and statefull firewall. It has many packages extend it's capabilities such as Squid3 package as a as a proxy server that cache data and SquidGuard, redirector and access controller plugin for squid3 proxy server. In this paper, with implementing UTM based on PfSense platform we use Squid3 proxy server and SquidGuard proxy filter to avoid extreme amount of unwanted uploading/ downloading over the internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and types of it, PfSense platform with it's key services and introduce a simple and operational solution for security stability and reducing the cost. Finally, results and statistics derived from this approach compared with the prior condition without PfSense platform.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "4e50e68e099ab77aedcb0abe8b7a9ca2",
"text": "In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online only with simple neural network operations. The training process is based on an end-to-end method without labeled samples avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network volume, which further reduces the computational complexity and volume of the DNN, making it more suitable for low computation-capacity devices. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.",
"title": ""
},
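To make the objective in the beamforming record above concrete, here is an illustrative computation of the weighted sum-rate that such a network is trained to maximize. This is not the paper's DNN; the user count, antenna count, channels and beamformers are all made-up placeholders.

```python
# Weighted sum-rate for given channels h_k and beamformers w_k:
# R = sum_k w_k * log2(1 + SINR_k).
import numpy as np

def weighted_sum_rate(H, W, weights, noise_power=1.0):
    """H, W: (K, N) complex arrays of channels and beamformers; weights: (K,)."""
    K = H.shape[0]
    rate = 0.0
    for k in range(K):
        signal = abs(H[k].conj() @ W[k]) ** 2
        interference = sum(abs(H[k].conj() @ W[j]) ** 2 for j in range(K) if j != k)
        sinr = signal / (interference + noise_power)
        rate += weights[k] * np.log2(1.0 + sinr)
    return rate

rng = np.random.default_rng(1)
K, N = 4, 8                                   # hypothetical users / antennas
H = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
W = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
W /= np.linalg.norm(W)                        # crude total-power normalization
print("weighted sum-rate:", weighted_sum_rate(H, W, np.ones(K)))
```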
{
"docid": "54a35bf200d9af060ce38a9aec972f50",
"text": "The linear preferential attachment hypothesis has been shown to be quite successful in explaining the existence of networks with power-law degree distributions. It is then quite important to determine if this mechanism is the consequence of a general principle based on local rules. In this work it is claimed that an effective linear preferential attachment is the natural outcome of growing network models based on local rules. It is also shown that the local models offer an explanation for other properties like the clustering hierarchy and degree correlations recently observed in complex networks. These conclusions are based on both analytical and numerical results for different local rules, including some models already proposed in the literature.",
"title": ""
},
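The preferential-attachment record above argues that purely local rules can produce an effective linear preferential attachment. A toy simulation of one such rule follows; the specific rule (link to a random node and to a random neighbour of that node) and the network size are illustrative assumptions, not taken from the paper.

```python
# Local rule: picking a neighbour of a uniformly random node selects targets
# roughly in proportion to their degree, yielding a heavy-tailed distribution.
import random
from collections import defaultdict, Counter

random.seed(0)
adj = defaultdict(set)
adj[0].add(1); adj[1].add(0)                 # seed edge

for new in range(2, 20000):
    anchor = random.randrange(new)           # uniformly random existing node
    target = random.choice(list(adj[anchor]))  # random neighbour of the anchor
    for t in {anchor, target}:
        adj[new].add(t); adj[t].add(new)

degrees = Counter(len(nbrs) for nbrs in adj.values())
print(sorted(degrees.items())[:10])          # counts fall off heavy-tailed
```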
{
"docid": "e4dc1f30a914dc6f710f23b5bc047978",
"text": "Intelligence, expertise, ability and talent, as these terms have traditionally been used in education and psychology, are socially agreed upon labels that minimize the dynamic, evolving, and contextual nature of individual–environment relations. These hypothesized constructs can instead be described as functional relations distributed across whole persons and particular contexts through which individuals appear knowledgeably skillful. The purpose of this article is to support a concept of ability and talent development that is theoretically grounded in 5 distinct, yet interrelated, notions: ecological psychology, situated cognition, distributed cognition, activity theory, and legitimate peripheral participation. Although talent may be reserved by some to describe individuals possessing exceptional ability and ability may be described as an internal trait, in our description neither ability nor talent are possessed. Instead, they are treated as equivalent terms that can be used to describe functional transactions that are situated across person-in-situation. Further, and more important, by arguing that ability is part of the individual–environment transaction, we take the potential to appear talented out of the hands (or heads) of the few and instead treat it as an opportunity that is available to all although it may be actualized more frequently by some.",
"title": ""
},
{
"docid": "21197ea03a0c9ce6061ea524aca10b52",
"text": "Developers of gamified business applications face the challenge of creating motivating gameplay strategies and creative design techniques to deliver subject matter not typically associated with games in a playful way. We currently have limited models that frame what makes gamification effective (i.e., engaging people with a business application). Thus, we propose a design-centric model and analysis tool for gamification: The kaleidoscope of effective gamification. We take a look at current models of game design, self-determination theory and the principles of systems design to deconstruct the gamification layer in the design of these applications. Based on the layers of our model, we provide design guidelines for effective gamification of business applications.",
"title": ""
},
{
"docid": "2a58426989cbfab0be9e18b7ee272b0a",
"text": "Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real-time. Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed using the image library, which is used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and recall of 74.4.%.",
"title": ""
},
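The pothole record above names its building blocks (road colour model, Canny filter, contour detection). The sketch below shows how those OpenCV pieces can be chained; the colour thresholds, Canny thresholds and area cut-off are illustrative guesses, not the paper's values or its pothole model.

```python
# Rough pipeline: mask road-coloured pixels, run Canny, keep large contours
# inside the road region as pothole candidates.
import cv2
import numpy as np

def pothole_candidates(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # crude grey-asphalt colour model: low saturation, mid brightness
    lower = np.array((0, 0, 60), dtype=np.uint8)
    upper = np.array((180, 60, 200), dtype=np.uint8)
    road_mask = cv2.inRange(hsv, lower, upper)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.bitwise_and(edges, road_mask)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 300]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a dashcam frame
print(len(pothole_candidates(frame)), "candidate regions")
```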
{
"docid": "e68aac3565df039aa431bf2a69e27964",
"text": "region, a five-year-old girl with mild asthma presented to the emergency department of a children’s hospital in acute respiratory distress. She had an 11-day history of cough, rhinorrhea and progressive chest discomfort. She was otherwise healthy, with no history of severe respiratory illness, prior hospital admissions or immu nocompromise. Outside of infrequent use of salbutamol, she was not taking any medications, and her routine childhood immunizations, in cluding conjugate pneumococcal vaccine, were up to date. She had not received the pandemic influenza vaccine because it was not yet available for her age group. The patient had been seen previously at a community health centre a week into her symptoms, and a chest radiograph had shown perihi lar and peribronchial thickening but no focal con solidation, atelectasis or pleural effusion. She had then been reassessed 24 hours later at an influenza assessment centre and empirically started on oseltamivir. Two days later, with the onset of vomiting, diarrhea, fever and progressive shortness of breath, she was brought to the emergency department of the children’s hospital. On examination, she was in considerable distress; her heart rate was 170 beats/min, her respiratory rate was 60 breaths/min and her blood pressure was 117/57 mm Hg. Her oxygen saturations on room air were consistently 70%. On auscultation, she had decreased air entry to the right side with bronchial breath sounds. Repeat chest radiography showed almost complete opacification of the right hemithorax, air bronchograms in the middle and lower lobes, and minimal aeration to the apex. This was felt to be in keeping with whole lung consolidation and parapneumonic effusion. The left lung appeared normal. Blood tests done on admission showed a hemoglobin level of 122 (normal 110–140) g/L, a leukocyte count of 1.5 (normal 5.5–15.5) × 10/L (neutrophils 11% [normal 47%] and bands 19% [normal 5%]) and a platelet count of 92 (normal 217–533) × 10/L. Results of blood tests were otherwise unremarkable. Venous blood gas had a pH level of 7.32 (normal 7.35–7.42), partial pressure of carbon dioxide of 43 (normal 32– 43) mm Hg, a base deficit of 3.6 (normal –2 to 3) mmol/L, and a bicarbonate level of 21.8 (normal 21–26) mmol/L. The initial serum creatinine level was 43.0 (normal < 36) μmol/L and the urea level was 6.5 (normal 2.0–7.0) mmol/L, with no clinical evidence of renal dysfunction. Given the patient’s profound increased work of breathing, she was admitted to the intensive care unit (ICU), where intubation was required because of her continued decline over the next 24 hours. Blood cultures taken on admission were negative. Nasopharyngeal aspirates were negative on rapid respiratory viral testing, but antiviral treatment for presumed pandemic (H1N1) influenza was continued given her clinical presentation, the prevalence of pandemic influenza in the community and the low sensitivity of the test in the range of only 62%. Viral cultures were not done. Empiric treatment with intravenous cefotaxime (200 mg/kg/d) and vancomycin (40 mg/kg/d) was started in the ICU for broad antimicrobial coverage, including possible Cases",
"title": ""
},
{
"docid": "eef7ce5b4268054ed6c7de7fdbbf003e",
"text": "This paper proposes a new closed-loop synchronization algorithm, PLL (Phase-Locked Loop), for applications in power conditioner systems for single-phase networks. The structure presented is based on the correlation of the input signal with a complex signal generated from the use of an adaptive filter in a PLL algorithm in order to minimize the computational effort. Moreover, the adapted PLL presents a higher level of rejection for two particular disturbances: interharmonic and subharmonic, when compared to the original algorithm. Simulation and experimental results will be presented in order to prove the efficacy of the proposed adaptive algorithm.",
"title": ""
}
] |
scidocsrr
|
4c522ee75323641bcadf9828b7bb7acc
|
A Snapback Suppressed Reverse-Conducting IGBT With a Floating p-Region in Trench Collector
|
[
{
"docid": "1d6c4f6efccb211ced52dbed51b0be22",
"text": "In this paper, an advanced Reverse Conducting (RC) IGBT concept is presented. The new technology is referred to as the Bi-mode Insulated Gate Transistor (BIGT) implying that the device can operate at the same current densities in transistor (IGBT) mode and freewheeling diode mode by utilizing the same available silicon volume in both operational modes. The BIGT design concept differs from that of the standard RC-IGBT while targeting to fully replace the state-of-the-art two-chip IGBT/Diode approach with a single chip. The BIGT is also capable of improving the over-all performance especially under hard switching conditions.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
}
] |
[
{
"docid": "f437f971d7d553b69d438a469fd26d41",
"text": "This paper introduces a single-chip, 200 200element sensor array implemented in a standard two-metal digital CMOS technology. The sensor is able to grab the fingerprint pattern without any use of optical and mechanical adaptors. Using this integrated sensor, the fingerprint is captured at a rate of 10 F/s by pressing the finger skin onto the chip surface. The fingerprint pattern is sampled by capacitive sensors that detect the electric field variation induced by the skin surface. Several design issues regarding the capacitive sensing problem are reported and the feedback capacitive sensing scheme (FCS) is introduced. More specifically, the problem of the charge injection in MOS switches has been revisited for charge amplifier design.",
"title": ""
},
{
"docid": "07ce1301392e18c1426fd90507dc763f",
"text": "The fluorescent lamp lifetime is very dependent of the start-up lamp conditions. The lamp filament current and temperature during warm-up and at steady-state operation are important to extend the life of a hot-cathode fluorescent lamp, and the preheating circuit is responsible for attending to the start-up lamp requirements. The usual solution for the preheating circuit used in self-oscillating electronic ballasts is simple and presents a low cost. However, the performance to extend the lamp lifetime is not the most effective. This paper presents an effective preheating circuit for self-oscillating electronic ballasts as an alternative to the usual solution.",
"title": ""
},
{
"docid": "10b4d77741d40a410b30b0ba01fae67f",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: rollingthunder@optonline.net (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "58039fbc0550c720c4074c96e866c025",
"text": "We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term-that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.",
"title": ""
},
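The confidence-interval record above describes an interval built on the subject-by-condition interaction error term. A small numerical sketch of that construction follows; it uses a made-up 4-subject, 3-condition score matrix, and the exact form should be checked against the original article's Equation 2, which is not reproduced here.

```python
# Within-subject CI: half-width = t_crit * sqrt(MS_interaction / n_subjects),
# where MS_interaction comes from the subject x condition term.
import numpy as np
from scipy import stats

def within_subject_ci(data, alpha=0.05):
    """data: (n_subjects, k_conditions) array of scores."""
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_inter = ss_total - ss_subj - ss_cond          # subject x condition
    ms_inter = ss_inter / ((n - 1) * (k - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, (n - 1) * (k - 1))
    half_width = t_crit * np.sqrt(ms_inter / n)
    return data.mean(axis=0), half_width             # plot means +/- half_width

scores = np.array([[10, 12, 15], [11, 14, 16], [9, 11, 14], [12, 13, 17]])
means, hw = within_subject_ci(scores)
print(means, "+/-", hw)
```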
{
"docid": "91c57b7a9dd2555e92b5ffa1f5a21790",
"text": "This article presents suggestions for nurses to gain skill, competence, and comfort in caring for critically ill patients receiving mechanical ventilatory support, with a specific focus on education strategies and building communication skills with these challenging nonverbal patients. Engaging in evidence-based practice projects at the unit level and participating in or leading research studies are key ways nurses can contribute to improving outcomes for patients receiving mechanical ventilation. Suggestions are offered for evidence-based practice projects and possible research studies to improve outcomes and advance the science in an effort to achieve quality patient-ventilator management in intensive care units.",
"title": ""
},
{
"docid": "0a7673d423c9134fb96bb3bb5b286433",
"text": "In this contribution the development, design, fabrication and test of a highly integrated broadband multifunctional chip is presented. The MMIC covers the C-, X-and Ku- Band and it is suitable for applications in high performance Transmit/Receive Modules. In less than 26 mm2, the MMIC embeds several T/R switches, low noise/medium power amplifiers, a stepped phase shifter and analog/digital attenuators in order to perform the RF signal routing and phase/amplitude conditioning. Besides, an embedded serial-to-parallel converter drives the phase shifter and the digital attenuator leading to a reduction in complexity of the digital control interface.",
"title": ""
},
{
"docid": "655a95191700e24c6dcd49b827de4165",
"text": "With the increasing demand for express delivery, a courier needs to deliver many tasks in one day and it's necessary to deliver punctually as the customers expect. At the same time, they want to schedule the delivery tasks to minimize the total time of a courier's one-day delivery, considering the total travel time. However, most of scheduling researches on express delivery focus on inter-city transportation, and they are not suitable for the express delivery to customers in the “last mile”. To solve the issue above, this paper proposes a personalized service for scheduling express delivery, which not only satisfies all the customers' appointment time but also makes the total time minimized. In this service, personalized and accurate travel time estimation is important to guarantee delivery punctuality when delivering shipments. Therefore, the personalized scheduling service is designed to consist of two basic services: (1) personalized travel time estimation service for any path in express delivery using courier trajectories, (2) an express delivery scheduling service considering multiple factors, including customers' appointments, one-day delivery costs, etc., which is based on the accurate travel time estimation provided by the first service. We evaluate our proposed service based on extensive experiments, using GPS trajectories generated by more than 1000 couriers over a period of two months in Beijing. The results demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "95f57e37d04b6b3b8c9ce29ebf23d345",
"text": "Finite state machines (FSMs) are the backbone of sequential circuit design. In this paper, a new FSM watermarking scheme is proposed by making the authorship information a non-redundant property of the FSM. To overcome the vulnerability to state removal attack and minimize the design overhead, the watermark bits are seamlessly interwoven into the outputs of the existing and free transitions of state transition graph (STG). Unlike other transition-based STG watermarking, pseudo input variables have been reduced and made functionally indiscernible by the notion of reserved free literal. The assignment of reserved literals is exploited to minimize the overhead of watermarking and make the watermarked FSM fallible upon removal of any pseudo input variable. A direct and convenient detection scheme is also proposed to allow the watermark on the FSM to be publicly detectable. Experimental results on the watermarked circuits from the ISCAS'89 and IWLS'93 benchmark sets show lower or acceptably low overheads with higher tamper resilience and stronger authorship proof in comparison with related watermarking schemes for sequential functions.",
"title": ""
},
{
"docid": "6c11bb11540719ad64e98bb67cd9a798",
"text": "Opium poppy (Papaver somniferum) produces a large number of benzylisoquinoline alkaloids, including the narcotic analgesics morphine and codeine, and has emerged as one of the most versatile model systems to study alkaloid metabolism in plants. As summarized in this review, we have taken a holistic strategy—involving biochemical, cellular, molecular genetic, genomic, and metabolomic approaches—to draft a blueprint of the fundamental biological platforms required for an opium poppy cell to function as an alkaloid factory. The capacity to synthesize and store alkaloids requires the cooperation of three phloem cell types—companion cells, sieve elements, and laticifers—in the plant, but also occurs in dedifferentiated cell cultures. We have assembled an opium poppy expressed sequence tag (EST) database based on the attempted sequencing of more than 30,000 cDNAs from elicitor-treated cell culture, stem, and root libraries. Approximately 23,000 of the elicitor-induced cell culture and stem ESTs are represented on a DNA microarray, which has been used to examine changes in transcript profile in cultured cells in response to elicitor treatment, and in plants with different alkaloid profiles. Fourier transform-ion cyclotron resonance mass spectrometry and proton nuclear magnetic resonance mass spectroscopy are being used to detect corresponding differences in metabolite profiles. Several new genes involved in the biosynthesis and regulation of alkaloid pathways in opium poppy have been identified using genomic tools. A biological blueprint for alkaloid production coupled with the emergence of reliable transformation protocols has created an unprecedented opportunity to alter the chemical profile of the world’s most valuable medicinal plant.",
"title": ""
},
{
"docid": "a0ebefc5137a1973e1d1da2c478de57c",
"text": "This paper presents BOTTA, the first Arabic dialect chatbot. We explore the challenges of creating a conversational agent that aims to simulate friendly conversations using the Egyptian Arabic dialect. We present a number of solutions and describe the different components of the BOTTA chatbot. The BOTTA database files are publicly available for researchers working on Arabic chatbot technologies. The BOTTA chatbot is also publicly available for any users who want to chat with it online.",
"title": ""
},
{
"docid": "f651d8505f354fe0ad8e0866ca64e6e1",
"text": "Building on existing categorical accounts of natural language semantics, we propose a compositional distributional model of ambiguous meaning. Originally inspired by the high-level category theoretic language of quantum information protocols, the compositional, distributional categorical model provides a conceptually motivated procedure to compute the meaning of a sentence, given its grammatical structure and an empirical derivation of the meaning of its parts. Grammar is given a type-logical description in a compact closed category while the meaning of words is represented in a finite inner product space model. Since the category of finite-dimensional Hilbert spaces is also compact closed, the type-checking deduction process lifts to a concrete meaning-vector computation via a strong monoidal functor between the two categories. The advantage of reasoning with these structures is that grammatical composition admits an interpretation in terms of flow of meaning between words. Pushing the analogy with quantum mechanics further, we describe ambiguous words as statistical ensembles of unambiguous concepts and extend the semantics of the previous model to a category that supports probabilistic mixing. We introduce two different Frobenius algebras representing different ways of composing the meaning of words, and discuss their properties. We conclude with a range of applications to the case of definitions, including a meaning update rule that reconciles the meaning of an ambiguous word with that of its definition.",
"title": ""
},
{
"docid": "d5c57af0f7ab41921ddb92a5de31c33a",
"text": "This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.",
"title": ""
},
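The blind-IQA record above ends with a quality pooling step that turns five grade labels into a numeric score. A minimal, assumed version of such pooling is sketched below; the grade-to-value mapping is an illustrative 1-to-5 scale, not the paper's exact rule.

```python
# Pooling sketch: the score is the expectation of assumed grade values under
# the classifier's probability distribution over the five grades.
GRADE_VALUES = {"bad": 1, "poor": 2, "fair": 3, "good": 4, "excellent": 5}  # assumed scale

def pooled_score(grade_probs):
    """grade_probs: dict mapping the five grade labels to probabilities."""
    total = sum(grade_probs.values())
    return sum(GRADE_VALUES[g] * p for g, p in grade_probs.items()) / total

print(pooled_score({"bad": 0.05, "poor": 0.10, "fair": 0.25,
                    "good": 0.40, "excellent": 0.20}))
```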
{
"docid": "be83224a853fd65808def16ff20e9c02",
"text": "Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade’s initial burst, we demonstrate strong performance in predicting whether it will recur in the future.",
"title": ""
},
{
"docid": "5b50e84437dc27f5b38b53d8613ae2c7",
"text": "We present a practical vision-based robotic bin-picking sy stem that performs detection and 3D pose estimation of objects in an unstr ctu ed bin using a novel camera design, picks up parts from the bin, and p erforms error detection and pose correction while the part is in the gri pper. Two main innovations enable our system to achieve real-time robust a nd accurate operation. First, we use a multi-flash camera that extracts rob ust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliabl y detect objects and estimate their poses. FDCM improves the accuracy of cham fer atching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges , a 3D distance transform, and directional integral images. We empiricall y show that these speedups, combined with the use of bounds in the spatial and h ypothesis domains, give the algorithm sublinear computational compl exity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantl y improving upon the accuracy of previous chamfer matching methods in all of t he evaluated applications, FDCM is up to two orders of magnitude faster th an the previous methods.",
"title": ""
},
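As background for the FDCM record above, the sketch below shows plain chamfer matching with a distance transform; it deliberately omits FDCM's edge-orientation term, line-segment approximation, integral images and bounding tricks, and the toy edge maps are invented for illustration.

```python
# Bare-bones chamfer score: mean distance-transform value under the template's
# edge pixels at a given placement (lower is better).
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(scene_edges, template_edges, offset):
    """scene_edges/template_edges: boolean 2D arrays; offset: (row, col)."""
    dist = distance_transform_edt(~scene_edges)      # distance to nearest edge
    ys, xs = np.nonzero(template_edges)
    ys, xs = ys + offset[0], xs + offset[1]
    keep = (ys >= 0) & (ys < dist.shape[0]) & (xs >= 0) & (xs < dist.shape[1])
    return dist[ys[keep], xs[keep]].mean()

scene = np.zeros((100, 100), dtype=bool); scene[40, 20:60] = True
templ = np.zeros((10, 50), dtype=bool);  templ[5, 5:45] = True
print(chamfer_score(scene, templ, offset=(35, 15)))  # near-aligned placement
```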
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "39b7ab83a6a0d75b1ec28c5ff485b98d",
"text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texturebased techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.",
"title": ""
},
{
"docid": "bfd57465a5d6f85fb55ffe13ef79f3a5",
"text": "We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "073eb81bbd654b90e6a7ffce608f8ea2",
"text": "OBJECTIVE\nTo examine factors associated with variation in the risk for type 2 diabetes in women with prior gestational diabetes mellitus (GDM).\n\n\nRESEARCH DESIGN AND METHODS\nWe conducted a systematic literature review of articles published between January 1965 and August 2001, in which subjects underwent testing for GDM and then testing for type 2 diabetes after delivery. We abstracted diagnostic criteria for GDM and type 2 diabetes, cumulative incidence of type 2 diabetes, and factors that predicted incidence of type 2 diabetes.\n\n\nRESULTS\nA total of 28 studies were examined. After the index pregnancy, the cumulative incidence of diabetes ranged from 2.6% to over 70% in studies that examined women 6 weeks postpartum to 28 years postpartum. Differences in rates of progression between ethnic groups was reduced by adjustment for various lengths of follow-up and testing rates, so that women appeared to progress to type 2 diabetes at similar rates after a diagnosis of GDM. Cumulative incidence of type 2 diabetes increased markedly in the first 5 years after delivery and appeared to plateau after 10 years. An elevated fasting glucose level during pregnancy was the risk factor most commonly associated with future risk of type 2 diabetes.\n\n\nCONCLUSIONS\nConversion of GDM to type 2 diabetes varies with the length of follow-up and cohort retention. Adjustment for these differences reveals rapid increases in the cumulative incidence occurring in the first 5 years after delivery for different racial groups. Targeting women with elevated fasting glucose levels during pregnancy may prove to have the greatest effect for the effort required.",
"title": ""
},
{
"docid": "1ebb46b4c9e32423417287ab26cae14b",
"text": "Two field studies explored the relationship between self-awareness and transgressive behavior. In the first study, 363 Halloween trick-or-treaters were instructed to only take one candy. Self-awareness induced by the presence of a mirror placed behind the candy bowl decreased transgression rates for children who had been individuated by asking them their name and address, but did not affect the behavior of children left anonymous. Self-awareness influenced older but not younger children. Naturally occurring standards instituted by the behavior of the first child to approach the candy bowl in each group were shown to interact with the experimenter's verbally stated standard. The behavior of 349 subjects in the second study replicated the findings in the first study. Additionally, when no standard was stated by the experimenter, children took more candy when not self-aware than when self-aware.",
"title": ""
}
] |
scidocsrr
|
80b19612fbeafc0b6aa6df7c466c8d11
|
Relative Camera Pose Estimation Using Convolutional Neural Networks
|
[
{
"docid": "4d7cbe7f5e854028277f0120085b8977",
"text": "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.",
"title": ""
}
] |
[
{
"docid": "a4d7596cfcd4a9133c5677a481c88cf0",
"text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
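The malware record above outlines a concrete procedure: extract byte n-grams, select attributes, average the attribute vectors of the malware and benign classes, and classify by similarity. The sketch below is a simplified, assumed version of that idea; hashed n-gram counts stand in for the paper's attribute selection, and the byte strings are toy data.

```python
# Hash byte n-grams into a fixed vector, average per class, classify by which
# class mean the test vector is more similar to (dot product of normalized
# vectors). Note: hash() of bytes is stable only within one process.
import numpy as np

def ngram_vector(data: bytes, n=4, dim=1024):
    v = np.zeros(dim)
    for i in range(len(data) - n + 1):
        v[hash(data[i:i + n]) % dim] += 1            # hashed n-gram counts
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def classify(sample, malware_mean, benign_mean):
    v = ngram_vector(sample)
    return "malware" if v @ malware_mean >= v @ benign_mean else "benign"

malware = [bytes([i % 7 for i in range(400)]), bytes([i % 7 + 1 for i in range(400)])]
benign  = [bytes(range(256)) * 2, bytes(reversed(range(256))) * 2]
m_mean = np.mean([ngram_vector(s) for s in malware], axis=0)
b_mean = np.mean([ngram_vector(s) for s in benign], axis=0)
print(classify(bytes([i % 7 for i in range(300)]), m_mean, b_mean))
```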
{
"docid": "33b8012ae66f07c9de158f4c514c4e99",
"text": "Many mathematicians have a dismissive attitude towards paradoxes. This is unfortunate, because many paradoxes are rich in content, having connections with serious mathematical ideas as well as having pedagogical value in teaching elementary logical reasoning. An excellent example is the so-called “surprise examination paradox” (described below), which is an argument that seems at first to be too silly to deserve much attention. However, it has inspired an amazing variety of philosophical and mathematical investigations that have in turn uncovered links to Gödel’s incompleteness theorems, game theory, and several other logical paradoxes (e.g., the liar paradox and the sorites paradox). Unfortunately, most mathematicians are unaware of this because most of the literature has been published in philosophy journals.",
"title": ""
},
{
"docid": "91f20c48f5a4329260aadb87a0d8024c",
"text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.",
"title": ""
},
{
"docid": "c3dd3dd59afe491fcc6b4cd1e32c88a3",
"text": "The Semantic Web drives towards the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, and functions including cryptographic, string, math. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.",
"title": ""
},
{
"docid": "46ab85859bd3966b243db79696a236f0",
"text": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.",
"title": ""
},
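In the spirit of the simplified PSO variant discussed in the record above, here is a deliberately bare PSO loop. The inertia and attraction constants are common textbook values, not the meta-optimized parameters from the study, and the sphere function is only a convenience test problem.

```python
# Minimal particle swarm optimizer on a generic objective f: R^dim -> R.
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda z: np.sum(z ** 2), dim=5)     # sphere test function
print(best, val)
```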
{
"docid": "466bb7b70fc1c5973fbea3ade7ebd845",
"text": "High-speed and heavy-load stacking robot technology is a common key technique in nonferrous metallurgy areas. Specific layer stacking robot of aluminum ingot continuous casting production line, which has four-DOF, is designed in this paper. The kinematics model is built and studied in detail by D-H method. The transformation matrix method is utilized to solve the kinematics equation of robot. Mutual motion relations between each joint variables and the executive device of robot is got. The kinematics simulation of the robot is carried out via the ADAMS-software. The results of simulation verify the theoretical analysis and lay the foundation for following static and dynamic characteristics analysis of the robot.",
"title": ""
},
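The kinematics record above relies on the Denavit-Hartenberg (D-H) convention and homogeneous transformation matrices. A generic sketch of that machinery follows; the four-row parameter table is a placeholder, not the stacking robot's actual geometry.

```python
# Standard D-H link transform and a forward-kinematics chain.
import numpy as np

def dh_transform(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T                                  # pose of the end effector

# hypothetical 4-DOF parameter table: (theta, d, a, alpha) per joint
table = [(0.3, 0.5, 0.2, np.pi / 2), (0.7, 0.0, 0.6, 0.0),
         (-0.4, 0.0, 0.5, 0.0), (0.0, 0.1, 0.0, 0.0)]
print(forward_kinematics(table)[:3, 3])       # end-effector position
```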
{
"docid": "ac0b562db18fac38663b210f599c2deb",
"text": "This paper proposes a fast and stable image-based modeling method which generates 3D models with high-quality face textures in a semi-automatic way. The modeler guides untrained users to quickly obtain 3D model data via several steps of simple user interface operations using predefined 3D primitives. The proposed method contains an iterative non-linear error minimization technique in the model estimation step with an error function based on finite line segments instead of infinite lines. The error corresponds to the difference between the observed structure and the predicted structure from current model parameters. Experimental results on real images validate the robustness and the accuracy of the algorithm. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "77cea98467305b9b3b11de8d3cec6ec2",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "e78d53a2790ac3b6011910f82cefaff9",
"text": "A two-dimensional crystal of molybdenum disulfide (MoS2) monolayer is a photoluminescent direct gap semiconductor in striking contrast to its bulk counterpart. Exfoliation of bulk MoS2 via Li intercalation is an attractive route to large-scale synthesis of monolayer crystals. However, this method results in loss of pristine semiconducting properties of MoS2 due to structural changes that occur during Li intercalation. Here, we report structural and electronic properties of chemically exfoliated MoS2. The metastable metallic phase that emerges from Li intercalation was found to dominate the properties of as-exfoliated material, but mild annealing leads to gradual restoration of the semiconducting phase. Above an annealing temperature of 300 °C, chemically exfoliated MoS2 exhibit prominent band gap photoluminescence, similar to mechanically exfoliated monolayers, indicating that their semiconducting properties are largely restored.",
"title": ""
},
{
"docid": "7e6b6f8bab3172457473d158960688a7",
"text": "BACKGROUND\nCancer is a leading cause of death worldwide. Given the complexity of caring work, recent studies have focused on the professional quality of life of oncology nurses. China, the world's largest developing country, faces heavy burdens of care for cancer patients. Chinese oncology nurses may be encountering the negative side of their professional life. However, studies in this field are scarce, and little is known about the prevalence and predictors of oncology nurses' professional quality of life.\n\n\nOBJECTIVES\nTo describe and explore the prevalence of predictors of professional quality of life (compassion fatigue, burnout and compassion satisfaction) among Chinese oncology nurses under the guidance of two theoretical models.\n\n\nDESIGN\nA cross-sectional design with a survey.\n\n\nSETTINGS\nTen tertiary hospitals and five secondary hospitals in Shanghai, China.\n\n\nPARTICIPANTS\nA convenience and cluster sample of 669 oncology nurses was used. All of the nurses worked in oncology departments and had over 1 year of oncology nursing experience. Of the selected nurses, 650 returned valid questionnaires that were used for statistical analyses.\n\n\nMETHODS\nThe participants completed the demographic and work-related questionnaire, the Chinese version of the Professional Quality of Life Scale for Nurses, the Chinese version of the Jefferson Scales of Empathy, the Simplified Coping Style Questionnaire, the Perceived Social Support Scale, and the Chinese Big Five Personality Inventory brief version. Descriptive statistics, t-tests, one-way analysis of variance, simple and multiple linear regressions were used to determine the predictors of the main research variables.\n\n\nRESULTS\nHigher compassion fatigue and burnout were found among oncology nurses who had more years of nursing experience, worked in secondary hospitals and adopted passive coping styles. Cognitive empathy, training and support from organizations were identified as significant protectors, and 'perspective taking' was the strongest predictor of compassion satisfaction, explaining 23.0% of the variance. Personality traits of openness and conscientiousness were positively associated with compassion satisfaction, while neuroticism was a negative predictor, accounting for 24.2% and 19.8% of the variance in compassion fatigue and burnout, respectively.\n\n\nCONCLUSIONS\nOncology care has unique features, and oncology nurses may suffer from more work-related stressors compared with other types of nurses. Various predictors can influence the professional quality of life, and some of these should be considered in the Chinese nursing context. The results may provide clues to help nurse administrators identify oncology nurses' vulnerability to compassion fatigue and burnout and develop comprehensive strategies to improve their professional quality of life.",
"title": ""
},
{
"docid": "a2fa1d74fcaa6891e1a43dca706015b0",
"text": "Smart meters have been deployed worldwide in recent years that enable real-time communications and networking capabilities in power distribution systems. Problematically, recent reports have revealed incidents of energy theft in which dishonest customers would lower their electricity bills (aka stealing electricity) by tampering with their meters. The physical attack can be extended to a network attack by means of false data injection (FDI). This paper is thus motivated to investigate the currently-studied FDI attack by introducing the combination sum of energy profiles (CONSUMER) attack in a coordinated manner on a number of customers' smart meters, which results in a lower energy consumption reading for the attacker and a higher reading for the others in a neighborhood. We propose a CONSUMER attack model that is formulated into one type of coin change problems, which minimizes the number of compromised meters subject to the equality of an aggregated load to evade detection. A hybrid detection framework is developed to detect anomalous and malicious activities by incorporating our proposed grid sensor placement algorithm with observability analysis to increase the detection rate. Our simulations have shown that the network observability and detection accuracy can be improved by means of grid-placed sensor deployment.",
"title": ""
},
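The passage above frames the CONSUMER attack as a coin-change problem: compromise as few meters as possible while the individual reading shifts sum exactly to the amount the attacker hides, so the neighborhood aggregate stays unchanged. A minimal Python sketch of that coin-change view follows; the per-meter shift values, the 11 kWh target, and the function name are hypothetical, and the paper's actual formulation includes detection-evasion constraints not modelled here.

```python
# Minimal sketch of the coin-change view of the CONSUMER attack:
# choose the fewest "compromised meters" whose reading shifts sum
# exactly to the amount the attacker wants to offload, so the
# neighborhood aggregate stays unchanged. All numbers are hypothetical.

def min_meters_to_compromise(shifts, target):
    """Classic minimum-coin-count dynamic program.

    shifts : feasible per-meter reading increments (kWh) imposed on others.
    target : total amount (kWh) removed from the attacker's own bill, which
             must reappear elsewhere to keep the aggregate load equal.
    Returns the minimum number of meters needed, or None if impossible.
    """
    INF = float("inf")
    best = [0] + [INF] * target          # best[a] = fewest meters summing to a
    for amount in range(1, target + 1):
        for s in shifts:
            if s <= amount and best[amount - s] + 1 < best[amount]:
                best[amount] = best[amount - s] + 1
    return None if best[target] == INF else best[target]

if __name__ == "__main__":
    # Hypothetical example: per-meter shifts of 1, 4 or 6 kWh are feasible,
    # and the attacker wants to hide 11 kWh of his own consumption.
    print(min_meters_to_compromise([1, 4, 6], 11))   # -> 3  (6 + 4 + 1)
```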
{
"docid": "3e805d6724dc400d681b3b42393d5ebe",
"text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.",
"title": ""
},
{
"docid": "1d9361cffd8240f3b691c887def8e2f5",
"text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.",
"title": ""
},
{
"docid": "5f0157139bff33057625686b7081a0c8",
"text": "A novel MIC/MMIC compatible microstrip to waveguide transition for X band is presented. The transition has realized on novel low cost substrate and its main features are: wideband operation, low insertion loss and feeding without a balun directly by the microstrip line.",
"title": ""
},
{
"docid": "c85a26f1bccf3b28ca6a46c5312040e7",
"text": "This paper describes a novel compact design of a planar circularly polarized (CP) tag antenna for use in a ultrahigh frequency (UHF) radio frequency identification (RFID) system. Introducing the meander strip into the right-arm of the square-ring structure enables the measured half-power bandwidth of the proposed CP tag antenna to exceed 100 MHz (860–960 MHz), which includes the entire operating bandwidth of the global UHF RFID system. A 3-dB axial-ratio bandwidth of approximately 36 MHz (902–938 MHz) can be obtained, which is suitable for American (902–928 MHz), European (918–926 MHz), and Taiwanese UHF RFID (922–928 MHz) applications. Since the overall antenna dimensions are only <inline-formula> <tex-math notation=\"LaTeX\">$54\\times54$ </tex-math></inline-formula> mm<sup>2</sup>, the proposed tag antenna can be operated with a size that is 64% smaller than that of the tag antennas attached on the safety glass. With a bidirectional reading pattern, the measured reading distance is about 8.3 m. Favorable tag sensitivity is obtained across the desired frequency band.",
"title": ""
},
{
"docid": "efc341c0a3deb6604708b6db361bfba5",
"text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.",
"title": ""
},
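The abstract above centres on choosing DBSCAN's Eps automatically. The sketch below shows the classical k-distance ("knee") heuristic that such methods build on, using scikit-learn; this is an illustrative assumption, not necessarily the exact procedure used by AE-DBSCAN, and the two-cluster data set is synthetic.

```python
# Sketch of the widely used k-distance heuristic for choosing DBSCAN's Eps.
# (AE-DBSCAN's own procedure is not spelled out in the abstract; this is the
# classical approach it builds on, shown with scikit-learn for brevity.)
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

def estimate_eps(X, min_pts=4):
    """Return an Eps estimate from the 'knee' of the sorted k-distance curve."""
    nbrs = NearestNeighbors(n_neighbors=min_pts).fit(X)
    dist, _ = nbrs.kneighbors(X)          # dist[:, -1] = distance to k-th neighbour
    kdist = np.sort(dist[:, -1])
    knee = int(np.argmax(np.diff(kdist)))  # crude knee: largest jump in the curve
    return float(kdist[knee])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
    eps = estimate_eps(X, min_pts=4)
    labels = DBSCAN(eps=eps, min_samples=4).fit_predict(X)
    print(f"estimated Eps = {eps:.3f}, clusters found = {len(set(labels) - {-1})}")
```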
{
"docid": "ceb66016a57a936d33675756ee2e7eed",
"text": "Detecting small vehicles in aerial images is a difficult job that can be challenging even for humans. Rotating objects, low resolution, small inter-class variability and very large images comprising complicated backgrounds render the work of photo-interpreters tedious and wearisome. Unfortunately even the best classical detection pipelines like Ren et al. [2015] cannot be used off-the-shelf with good results because they were built to process object centric images from day-to-day life with multi-scale vertical objects. In this work we build on the Faster R-CNN approach to turn it into a detection framework that deals appropriately with the rotation equivariance inherent to any aerial image task. This new pipeline (Faster Rotation Equivariant Regions CNN) gives, without any bells and whistles, state-of-the-art results on one of the most challenging aerial imagery datasets: VeDAI Razakarivony and Jurie [2015] and give good results w.r.t. the baseline Faster R-CNN on two others: Munich Leitloff et al. [2014] and GoogleEarth Heitz and Koller [2008].",
"title": ""
},
{
"docid": "b1b6e670f21479956d2bbe281c6ff556",
"text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m 2 Am 1 sr ) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m ). For Chl ranging between 0.4 to 4 mg m 3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m 2 Am 1 sr 1 per mg m 3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in midOctober south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "182c83e136dcc7f41c2d7a7a30321440",
"text": "Behavioral logs are traces of human behavior seen through the lenses of sensors that capture and record user activity. They include behavior ranging from low-level keystrokes to rich audio and video recordings. Traces of behavior have been gathered in psychology studies since the 1930s (Skinner, 1938 ), and with the advent of computerbased applications it became common practice to capture a variety of interaction behaviors and save them to log fi les for later analysis. In recent years, the rise of centralized, web-based computing has made it possible to capture human interactions with web services on a scale previously unimaginable. Largescale log data has enabled HCI researchers to observe how information diffuses through social networks in near real-time during crisis situations (Starbird & Palen, 2010 ), characterize how people revisit web pages over time (Adar, Teevan, & Dumais, 2008 ), and compare how different interfaces for supporting email organization infl uence initial uptake and sustained use (Dumais, Cutrell, Cadiz, Jancke, Sarin, & Robbins, 2003 ; Rodden & Leggett, 2010 ). In this chapter we provide an overview of behavioral log use in HCI. We highlight what can be learned from logs that capture people’s interactions with existing computer systems and from experiments that compare new, alternative systems. We describe how to design and analyze web experiments, and how to collect, clean and use log data responsibly. The goal of this chapter is to enable the reader to design log studies and to understand results from log studies that they read about. Understanding User Behavior Through Log Data and Analysis",
"title": ""
}
] |
scidocsrr
|
ce302b49c125828cb906ffec23da62d1
|
The critical hitch angle for jackknife avoidance during slow backing up of vehicle – trailer systems
|
[
{
"docid": "0a793374864ce2a8a723423a4f74759b",
"text": "Trailer reversing is a problem frequently considered in the literature, usually with fairly complex non-linear control theory based approaches. In this paper, we present a simple method for stabilizing a tractor-trailer system to a trajectory based on the notion of controlling the hitch-angle of the trailer rather than the steering angle of the tractor. The method is intuitive, provably stable, and shown to be viable through various experimental results conducted on our test platform, the CSIRO autonomous tractor.",
"title": ""
}
] |
[
{
"docid": "80ac2373b3a01ab0f1f2665f0e070aa4",
"text": "This paper presents an overview of the state of the art control strategies specifically designed to coordinate distributed energy storage (ES) systems in microgrids. Power networks are undergoing a transition from the traditional model of centralised generation towards a smart decentralised network of renewable sources and ES systems, organised into autonomous microgrids. ES systems can provide a range of services, particularly when distributed throughout the power network. The introduction of distributed ES represents a fundamental change for power networks, increasing the network control problem dimensionality and adding long time-scale dynamics associated with the storage systems’ state of charge levels. Managing microgrids with many small distributed ES systems requires new scalable control strategies that are robust to power network and communication network disturbances. This paper reviews the range of services distributed ES systems can provide, and the control challenges they introduce. The focus of this paper is a presentation of the latest decentralised, centralised and distributed multi-agent control strategies designed to coordinate distributed microgrid ES systems. Finally, multi-agent control with agents satisfying Wooldridge’s definition of intelligence is proposed as a promising direction for future research.",
"title": ""
},
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "34461f38c51a270e2f3b0d8703474dfc",
"text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "56245b600dd082439d2b1b2a2452a6b7",
"text": "The electric drive systems used in many industrial applications require higher performance, reliability, variable speed due to its ease of controllability. The speed control of DC motor is very crucial in applications where precision and protection are of essence. Purpose of a motor speed controller is to take a signal representing the required speed and to drive a motor at that speed. Microcontrollers can provide easy control of DC motor. Microcontroller based speed control system consist of electronic component, microcontroller and the LCD. In this paper, implementation of the ATmega8L microcontroller for speed control of DC motor fed by a DC chopper has been investigated. The chopper is driven by a high frequency PWM signal. Controlling the PWM duty cycle is equivalent to controlling the motor terminal voltage, which in turn adjusts directly the motor speed. This work is a practical one and high feasibility according to economic point of view and accuracy. In this work, development of hardware and software of the close loop dc motor speed control system have been explained and illustrated. The desired objective is to achieve a system with the constant speed at any load condition. That means motor will run at a fixed speed instead of varying with amount of load. KeywordsDC motor, Speed control, Microcontroller, ATmega8, PWM.",
"title": ""
},
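The passage above notes that the PWM duty cycle sets the average motor terminal voltage, which in turn sets the speed. Below is a minimal Python sketch of that relationship together with a simple proportional correction of the duty cycle from the speed error; the supply voltage, motor constant, and gain are made-up values, not parameters from the paper.

```python
# Minimal sketch: average motor terminal voltage is the supply voltage scaled
# by the PWM duty cycle, and a proportional correction of the duty cycle from
# the measured speed error holds a constant speed.  All constants are
# hypothetical.

V_SUPPLY = 12.0          # chopper supply voltage (V)
K_MOTOR = 150.0          # hypothetical steady-state speed constant (rpm per V)
KP = 0.0005              # proportional gain (duty per rpm of error)

def average_voltage(duty):
    """Average terminal voltage produced by a PWM signal of given duty cycle."""
    return V_SUPPLY * duty

def update_duty(duty, target_rpm, measured_rpm):
    """One step of the closed-loop correction used to hold a constant speed."""
    error = target_rpm - measured_rpm
    return min(1.0, max(0.0, duty + KP * error))   # clamp to a valid duty cycle

if __name__ == "__main__":
    duty, target = 0.5, 1200.0
    for step in range(5):
        rpm = K_MOTOR * average_voltage(duty)      # crude plant model, no load torque
        duty = update_duty(duty, target, rpm)
        print(f"step {step}: duty={duty:.3f}, speed={rpm:.0f} rpm")
```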
{
"docid": "e08bc715d679ba0442883b4b0e481998",
"text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality",
"title": ""
},
{
"docid": "a937f479b462758a089ed23cfa5a0099",
"text": "The paper outlines the development of a large vocabulary continuous speech recognition (LVCSR) system for the Indonesian language within the Asian speech translation (A-STAR) project. An overview of the A-STAR project and Indonesian language characteristics will be briefly described. We then focus on a discussion of the development of Indonesian LVCSR, including data resources issues, acoustic modeling, language modeling, the lexicon, and accuracy of recognition. There are three types of Indonesian data resources: daily news, telephone application, and BTEC tasks, which are used in this project. They are available in both text and speech forms. The Indonesian speech recognition engine was trained using the clean speech of both daily news and telephone application tasks. The optimum performance achieved on the BTEC task was 92.47% word accuracy. 1 A-STAR Project Overview The A-STAR project is an Asian consortium that is expected to advance the state-of-the-art in multilingual man-machine interfaces in the Asian region. This basic infrastructure will accelerate the development of large-scale spoken language corpora in Asia and also facilitate the development of related fundamental information communication technologies (ICT), such as multi-lingual speech translation, Figure 1: Outline of future speech-technology services connecting each area in the Asian region through network. multi-lingual speech transcription, and multi-lingual information retrieval. These fundamental technologies can be applied to the human-machine interfaces of various telecommunication devices and services connecting Asian countries through the network using standardized communication protocols as outlined in Fig. 1. They are expected to create digital opportunities, improve our digital capabilities, and eliminate the digital divide resulting from the differences in ICT levels in each area. The improvements to borderless communication in the Asian region are expected to result in many benefits in everyday life including tourism, business, education, and social security. The project was coordinated together by the Advanced Telecommunication Research (ATR) and the National Institute of Information and Communications Technology (NICT) Japan in cooperation with several research institutes in Asia, such as the National Laboratory of Pattern Recognition (NLPR) in China, the Electronics and Telecommunication Research Institute (ETRI) in Korea, the Agency for the Assessment and Application Technology (BPPT) in Indonesia, the National Electronics and Computer Technology Center (NECTEC) in Thailand, the Center for Development of Advanced Computing (CDAC) in India, the National Taiwan University (NTU) in Taiwan. Partners are still being sought for other languages in Asia. More details about the A-STAR project can be found in (Nakamura et al., 2007). 2 Indonesian Language Characteristic The Indonesian language, or so-called Bahasa Indonesia, is a unified language formed from hundreds of languages spoken throughout the Indonesian archipelago. Compared to other languages, which have a high density of native speakers, Indonesian is spoken as a mother tongue by only 7% of the population, and more than 195 million people speak it as a second language with varying degrees of proficiency. There are approximately 300 ethnic groups living throughout 17,508 islands, speaking 365 native languages or no less than 669 dialects (Tan, 2004). 
At home, people speak their own language, such as Javanese, Sundanese or Balinese, even though almost everybody has a good understanding of Indonesian as they learn it in school. Although the Indonesian language is infused with highly distinctive accents from different ethnic languages, there are many similarities in patterns across the archipelago. Modern Indonesian is derived from the literary of the Malay dialect. Thus, it is closely related to the Malay spoken in Malaysia, Singapore, Brunei, and some other areas. Unlike the Chinese language, it is not a tonal language. Compared with European languages, Indonesian has a strikingly small use of gendered words. Plurals are often expressed by means of word repetition. It is also a member of the agglutinative language family, meaning that it has a complex range of prefixes and suffixes, which are attached to base words. Consequently, a word can become very long. More details on Indonesian characteristics can be found in (Sakti et al., 2004). 3 Indonesian Phoneme Set The Indonesian phoneme set is defined based on Indonesian grammar described in (Alwi et al., 2003). A full phoneme set contains 33 phoneme symbols in total, which consists of 10 vowels (including diphthongs), 22 consonants, and one silent symbol. The vowel articulation pattern of the Indonesian language, which indicates the first two resonances of the vocal tract, F1 (height) and F2 (backness), is shown in Fig. 2.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "e2b8dd31dad42e82509a8df6cf21df11",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "3ed6df057a32b9dcf243b5ac367b4912",
"text": "This paper presents advancements in induction motor endring design to overcome mechanical limitations and extend the operating speed range and joint reliability of induction machines. A novel endring design met the challenging mechanical requirements of this high speed, high temperature, power dense application, without compromising electrical performance. Analysis is presented of the advanced endring design features including a non uniform cross section, hoop stress relief cuts, and an integrated joint boss, which reduced critical stress concentrations, allowing operation under a broad speed and temperature design range. A generalized treatment of this design approach is presented comparing the concept results to conventional design techniques. Additionally, a low temperature joining process of the bar/end ring connection is discussed that provides the required joint strength without compromising the mechanical strength of the age hardened parent metals. A description of a prototype 2 MW, 15,000 rpm flywheel motor generator embodying this technology is presented",
"title": ""
},
{
"docid": "b3fd58901706f7cb3ed653572e634c78",
"text": "This paper presents visual analysis of eye state and head pose (HP) for continuous monitoring of alertness of a vehicle driver. Most existing approaches to visual detection of nonalert driving patterns rely either on eye closure or head nodding angles to determine the driver drowsiness or distraction level. The proposed scheme uses visual features such as eye index (EI), pupil activity (PA), and HP to extract critical information on nonalertness of a vehicle driver. EI determines if the eye is open, half closed, or closed from the ratio of pupil height and eye height. PA measures the rate of deviation of the pupil center from the eye center over a time period. HP finds the amount of the driver's head movements by counting the number of video segments that involve a large deviation of three Euler angles of HP, i.e., nodding, shaking, and tilting, from its normal driving position. HP provides useful information on the lack of attention, particularly when the driver's eyes are not visible due to occlusion caused by large head movements. A support vector machine (SVM) classifies a sequence of video segments into alert or nonalert driving events. Experimental results show that the proposed scheme offers high classification accuracy with acceptably low errors and false alarms for people of various ethnicity and gender in real road driving conditions.",
"title": ""
},
{
"docid": "d16114259da9edf0022e2a3774c5acf0",
"text": "The multivesicular body (MVB) pathway is responsible for both the biosynthetic delivery of lysosomal hydrolases and the downregulation of numerous activated cell surface receptors which are degraded in the lysosome. We demonstrate that ubiquitination serves as a signal for sorting into the MVB pathway. In addition, we characterize a 350 kDa complex, ESCRT-I (composed of Vps23, Vps28, and Vps37), that recognizes ubiquitinated MVB cargo and whose function is required for sorting into MVB vesicles. This recognition event depends on a conserved UBC-like domain in Vps23. We propose that ESCRT-I represents a conserved component of the endosomal sorting machinery that functions in both yeast and mammalian cells to couple ubiquitin modification to protein sorting and receptor downregulation in the MVB pathway.",
"title": ""
},
{
"docid": "e6cba9e178f568c402be7b25c4f0777f",
"text": "This paper is a tutorial introduction to the Viterbi Algorithm, this is reinforced by an example use of the Viterbi Algorithm in the area of error correction in communications channels. Some extensions to the basic algorithm are also discussed briefly. Some of the many application areas where the Viterbi Algorithm has been used are considered, including it's use in communications, target tracking and pattern recognition problems. A proposal for further research into the use of the Viterbi Algorithm in Signature Verification is then presented, and is the area of present research at the moment.",
"title": ""
},
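Since the passage above is a tutorial treatment of the Viterbi Algorithm, a compact reference implementation may help. The decoder below is the standard dynamic-programming formulation for a discrete HMM; the two-state, three-symbol model in the example is the usual textbook toy, not data from the paper.

```python
# A compact Viterbi decoder for a discrete HMM.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observation sequence."""
    n_states = len(start_p)
    T = len(obs)
    logp = np.full((T, n_states), -np.inf)      # best log-probability so far
    back = np.zeros((T, n_states), dtype=int)   # back-pointers
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            cand = logp[t - 1] + np.log(trans_p[:, j])
            back[t, j] = int(np.argmax(cand))
            logp[t, j] = cand[back[t, j]] + np.log(emit_p[j, obs[t]])
    # trace back the best path
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3], [0.4, 0.6]])
    emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 1, 2], start, trans, emit))   # -> [0, 0, 1]
```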
{
"docid": "0397514e0d4a87bd8b59d9b317f8c660",
"text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.",
"title": ""
},
{
"docid": "03e1ede18dcc78409337faf265940a4d",
"text": "Epidermal thickness and its relationship to age, gender, skin type, pigmentation, blood content, smoking habits and body site is important in dermatologic research and was investigated in this study. Biopsies from three different body sites of 71 human volunteers were obtained, and thickness of the stratum corneum and cellular epidermis was measured microscopically using a preparation technique preventing tissue damage. Multiple regressions analysis was used to evaluate the effect of the various factors independently of each other. Mean (SD) thickness of the stratum corneum was 18.3 (4.9) microm at the dorsal aspect of the forearm, 11.0 (2.2) microm at the shoulder and 14.9 (3.4) microm at the buttock. Corresponding values for the cellular epidermis were 56.6 (11.5) microm, 70.3 (13.6) microm and 81.5 (15.7) microm, respectively. Body site largely explains the variation in epidermal thickness, but also a significant individual variation was observed. Thickness of the stratum corneum correlated positively to pigmentation (p = 0.0008) and negatively to the number of years of smoking (p < 0.0001). Thickness of the cellular epidermis correlated positively to blood content (P = 0.028) and was greater in males than in females (P < 0.0001). Epidermal thickness was not correlated to age or skin type.",
"title": ""
},
{
"docid": "910c8ca022db7b806565e1c16c4cfb6a",
"text": "Three di¡erent understandings of causation, each importantly shaped by the work of statisticians, are examined from the point of view of their value to sociologists: causation as robust dependence, causation as consequential manipulation, and causation as generative process. The last is favoured as the basis for causal analysis in sociology. It allows the respective roles of statistics and theory to be clari¢ed and is appropriate to sociology as a largely non-experimental social science in which the concept of action is central.",
"title": ""
},
{
"docid": "97ec7149cbaedc6af3a26030067e2dba",
"text": "Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.",
"title": ""
},
{
"docid": "2316e37df8796758c86881aaeed51636",
"text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.",
"title": ""
},
{
"docid": "791314f5cee09fc8e27c236018a0927f",
"text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Oral presentations",
"title": ""
},
{
"docid": "7d7ea6239106f614f892701e527122e2",
"text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.",
"title": ""
}
] |
scidocsrr
|
b3560ff550f50e2f79dae2a24428fcbd
|
Energy-Efficient Indoor Localization of Smart Hand-Held Devices Using Bluetooth
|
[
{
"docid": "4c7d66d767c9747fdd167f1be793d344",
"text": "In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation.",
"title": ""
}
] |
[
{
"docid": "58fe53f045228772b3a04dc0de095970",
"text": "Heterogeneous systems, that marry CPUs and GPUs together in a range of configurations, are quickly becoming the design paradigm for today's platforms because of their impressive parallel processing capabilities. However, in many existing heterogeneous systems, the GPU is only treated as an accelerator by the CPU, working as a slave to the CPU master. But recently we are starting to see the introduction of a new class of devices and changes to the system runtime model, which enable accelerators to be treated as first-class computing devices. To support programmability and efficiency of heterogeneous programming, the HSA foundation introduced the Heterogeneous System Architecture (HSA), which defines a platform and runtime architecture that provides rich support for OpenCL 2.0 features including shared virtual memory, dynamic parallelism, and improved atomic operations. In this paper, we provide the first comprehensive study of OpenCL 2.0 and HSA 1.0 execution, considering OpenCL 1.2 as the baseline. For workloads, we develop a suite of OpenCL micro-benchmarks designed to highlight the features of these emerging standards and also utilize real-world applications to better understand their impact at an application level. To fully exercise the new features provided by the HSA model, we experiment with a producer-consumer algorithm and persistent kernels. We find that by using HSA signals, we can remove 92% of the overhead due to synchronous kernel launches. In our real-world applications, the OpenCL 2.0 runtime achieves up to a 1.2X speedup, while the HSA 1.0 runtime achieves a 2.7X speedup over OpenCL 1.2.",
"title": ""
},
{
"docid": "16be435a946f8ff5d8d084f77373a6f3",
"text": "Answer selection is a core component in any question-answering systems. It aims to select correct answer sentences for a given question from a pool of candidate sentences. In recent years, many deep learning methods have been proposed and shown excellent results for this task. However, these methods typically require extensive parameter (and hyper-parameter) tuning, which gives rise to efficiency issues for large-scale datasets, and potentially makes them less portable across new datasets and domains (as re-tuning is usually required). In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view. FastHybrid is a light-weight model that requires little tuning and adaptation across different domains. It combines a fast deep model (which will be introduced in the method section) with an initial information retrieval model to effectively and efficiently handle answer selection. We introduce a new efficient attention mechanism in the hybrid model and demonstrate its effectiveness on several QA datasets. Experimental results show that although the hybrid uses no training data, its accuracy is often on-par with supervised deep learning techniques, while significantly reducing training and tuning costs across different domains.",
"title": ""
},
{
"docid": "b6ab7ac8029950f85d412b90963e679d",
"text": "Adaptive traffic signal control system is needed to avoid traffic congestion that has many disadvantages. This paper presents an adaptive traffic signal control system using camera as an input sensor that providing real-time traffic data. Principal Component Analysis (PCA) is used to analyze and to classify object on video frame for detecting vehicles. Distributed Constraint Satisfaction Problem (DCSP) method determine the duration of each traffic signal, based on counted number of vehicles at each lane. The system is implemented in embedded systems using BeagleBoard™.",
"title": ""
},
{
"docid": "6c3be94fe73ef79d711ef5f8b9c789df",
"text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?",
"title": ""
},
{
"docid": "5454fbb1a924f3360a338c11a88bea89",
"text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.",
"title": ""
},
{
"docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
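The abstract above attributes the O(n^2) to O(n log n) reduction to block-circulant weight matrices. The core of that claim is that a circulant matrix-vector product is a circular convolution and can therefore be computed with FFTs, as the short numpy/scipy sketch below verifies on a single (non-block) circulant matrix with arbitrary values.

```python
# Why a (block-)circulant weight matrix cuts a layer's cost from O(n^2) to
# O(n log n): a circulant matrix-vector product is a circular convolution,
# computable with FFTs.  Sizes and values here are arbitrary.
import numpy as np
from scipy.linalg import circulant

def circulant_matvec(first_col, x):
    """Multiply the circulant matrix defined by its first column with x via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 8
    c = rng.standard_normal(n)       # the n parameters that define the whole matrix
    x = rng.standard_normal(n)
    dense = circulant(c) @ x         # O(n^2) reference product
    fast = circulant_matvec(c, x)    # O(n log n) FFT-based product
    print(np.allclose(dense, fast))  # True
```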
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "fef4383a5a06687636ba4001ab0e510c",
"text": "In this paper, a depth camera-based novel approach for human activity recognition is presented using robust depth silhouettes context features and advanced Hidden Markov Models (HMMs). During HAR framework, at first, depth maps are processed to identify human silhouettes from noisy background by considering frame differentiation constraints of human body motion and compute depth silhouette area for each activity to track human movements in a scene. From the depth silhouettes context features, temporal frames information are computed for intensity differentiation measurements, depth history features are used to store gradient orientation change in overall activity sequence and motion difference features are extracted for regional motion identification. Then, these features are processed by Principal component analysis for dimension reduction and kmean clustering for code generation to make better activity representation. Finally, we proposed a new way to model, train and recognize different activities using advanced HMM. Each activity has been chosen with the highest likelihood value. Experimental results show superior recognition rate, resulting up to the mean recognition of 57.69% over the state of the art methods for fifteen daily routine activities using IM-Daily Depth Activity dataset. In addition, MSRAction3D dataset also showed some promising results.",
"title": ""
},
{
"docid": "7a37df81ad70697549e6da33384b4f19",
"text": "Water scarcity is now one of the major global crises, which has affected many aspects of human health, industrial development and ecosystem stability. To overcome this issue, water desalination has been employed. It is a process to remove salt and other minerals from saline water, and it covers a variety of approaches from traditional distillation to the well-established reverse osmosis. Although current water desalination methods can effectively provide fresh water, they are becoming increasingly controversial due to their adverse environmental impacts including high energy intensity and highly concentrated brine waste. For millions of years, microorganisms, the masters of adaptation, have survived on Earth without the excessive use of energy and resources or compromising their ambient environment. This has encouraged scientists to study the possibility of using biological processes for seawater desalination and the field has been exponentially growing ever since. Here, the term biodesalination is offered to cover all of the techniques which have their roots in biology for producing fresh water from saline solution. In addition to reviewing and categorizing biodesalination processes for the first time, this review also reveals unexplored research areas in biodesalination having potential to be used in water treatment.",
"title": ""
},
{
"docid": "7f47a4b5152acf7e38d5c39add680f9d",
"text": "unit of computation and a processor a piece of physical hardware In addition to reading to and writing from local memory a process can send and receive messages by making calls to a library of message passing routines The coordinated exchange of messages has the e ect of synchronizing processes This can be achieved by the synchronous exchange of messages in which the sending operation does not terminate until the receive operation has begun A di erent form of synchronization occurs when a message is sent asynchronously but the receiving process must wait or block until the data arrives Processes can be mapped to physical processors in various ways the mapping employed does not a ect the semantics of a program In particular multiple processes may be mapped to a single processor The message passing model provides a mechanism for talking about locality data contained in the local memory of a process are close and other data are remote We now examine some other properties of the message passing programming model performance mapping independence and modularity",
"title": ""
},
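To make the message-passing description above concrete, here is a tiny Python example in which a child process blocks on a receive until the parent's message arrives; multiprocessing's Pipe stands in for a dedicated message-passing library, which is an illustrative substitution rather than anything prescribed by the passage.

```python
# Tiny illustration of the message-passing model: the child blocks on recv()
# until the parent's data arrives, then replies.
from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()                 # blocks until a message is available
    conn.send(f"processed {msg}")
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("local data")     # send a message to the child process
    print(parent_end.recv())          # -> "processed local data"
    p.join()
```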
{
"docid": "a33e8a616955971014ceea9da1e8fcbe",
"text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.",
"title": ""
},
{
"docid": "4f1070b988605290c1588918a716cef2",
"text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.",
"title": ""
},
{
"docid": "6921cd9c2174ca96ec0061ae2dd881eb",
"text": "Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games' popularity and dedicated fan base are testaments to the allure of novel social interactions granted to people by allowing them an alternative life as a new character and persona. In this paper we investigate the phenomenon of \"gender swapping,\" which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped. We also discuss the effect of gender role and self-image in virtual social situations and the potential of our study for improving MMORPG quality and detecting online identity frauds.",
"title": ""
},
{
"docid": "44e5c86afbe3814ad718aa27880941c4",
"text": "This paper introduces genetic algorithms (GA) as a complete entity, in which knowledge of this emerging technology can be integrated together to form the framework of a design tool for industrial engineers. An attempt has also been made to explain “why’’ and “when” GA should be used as an optimization tool.",
"title": ""
},
{
"docid": "93a39df6ee080e359f50af46d02cdb71",
"text": "Mobile edge computing (MEC) providing information technology and cloud-computing capabilities within the radio access network is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs’ energy consumption by migrating the computation-intensive task to the MEC server. In this paper, we consider a multi-mobile-users MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation coordinately. We formulate the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Considering the complexity of RTLBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into the convex problem. Many simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.",
"title": ""
},
{
"docid": "28352c478552728dddf09a2486f6c63c",
"text": "Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.",
"title": ""
},
{
"docid": "c784bfbd522bb4c9908c3f90a31199fe",
"text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.",
"title": ""
},
{
"docid": "88e582927c4e4018cb4071eeeb6feff4",
"text": "While previous studies have correlated the Dark Triad traits (i.e., narcissism, psychopathy, and Machiavellianism) with a preference for short-term relationships, little research has addressed possible correlations with short-term relationship sub-types. In this online study using Amazon’s Mechanical Turk system (N = 210) we investigated the manner in which scores on the Dark Triad relate to the selection of different mating environments using a budget-allocation task. Overall, the Dark Triad were positively correlated with preferences for short-term relationships and negatively correlated with preferences for a long-term relationship. Specifically, narcissism was uniquely correlated with preferences for one-night stands and friends-with-benefits and psychopathy was uniquely correlated with preferences for bootycall relationships. Both narcissism and psychopathy were negatively correlated with preferences for serious romantic relationships. In mediation analyses, psychopathy partially mediated the sex difference in preferences for booty-call relationships and narcissism partially mediated the sex difference in preferences for one-night stands. In addition, the sex difference in preference for serious romantic relationships was partially mediated by both narcissism and psychopathy. It appears the Dark Triad traits facilitate the adoption of specific mating environments providing fit with people’s personality traits. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7ce79a08969af50c1712f0e291dd026c",
"text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.",
"title": ""
},
{
"docid": "9c1e518c80dfbf201291923c9c55f1fd",
"text": "Computation underlies the organization of cells into higher-order structures, for example during development or the spatial association of bacteria in a biofilm. Each cell performs a simple computational operation, but when combined with cell–cell communication, intricate patterns emerge. Here we study this process by combining a simple genetic circuit with quorum sensing to produce more complex computations in space. We construct a simple NOR logic gate in Escherichia coli by arranging two tandem promoters that function as inputs to drive the transcription of a repressor. The repressor inactivates a promoter that serves as the output. Individual colonies of E. coli carry the same NOR gate, but the inputs and outputs are wired to different orthogonal quorum-sensing ‘sender’ and ‘receiver’ devices. The quorum molecules form the wires between gates. By arranging the colonies in different spatial configurations, all possible two-input gates are produced, including the difficult XOR and EQUALS functions. The response is strong and robust, with 5- to >300-fold changes between the ‘on’ and ‘off’ states. This work helps elucidate the design rules by which simple logic can be harnessed to produce diverse and complex calculations by rewiring communication between cells.",
"title": ""
}
] |
scidocsrr
|
e9186d6222a2baf349f8ae3316689fdb
|
TWO What Does It Mean to be Biased : Motivated Reasoning and Rationality
|
[
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
}
] |
[
{
"docid": "5b01c2e7bba6ab1abdda9b1a23568d2a",
"text": "First, we theoretically analyze the MMD-based estimates. Our analysis establishes that, under some mild conditions, the estimate is statistically consistent. More importantly, it provides an upper bound on the error in the estimate in terms of intuitive geometric quantities like class separation and data spread. Next, we use the insights obtained from the theoretical analysis, to propose a novel convex formulation that automatically learns the kernel to be employed in the MMD-based estimation. We design an efficient cutting plane algorithm for solving this formulation. Finally, we empirically compare our estimator with several existing methods, and show significantly improved performance under varying datasets, class ratios, and training sizes.",
"title": ""
},
{
"docid": "e0c52b0fdf2d67bca4687b8060565288",
"text": "Large graph databases are commonly collected and analyzed in numerous domains. For reasons related to either space efficiency or for privacy protection (e.g., in the case of social network graphs), it sometimes makes sense to replace the original graph with a summary, which removes certain details about the original graph topology. However, this summarization process leaves the database owner with the challenge of processing queries that are expressed in terms of the original graph, but are answered using the summary. In this paper, we propose a formal semantics for answering queries on summaries of graph structures. At its core, our formulation is based on a random worlds model. We show that important graph-structure queries (e.g., adjacency, degree, and eigenvector centrality) can be answered efficiently and in closed form using these semantics. Further, based on this approach to query answering, we formulate three novel graph partitioning/compression problems. We develop algorithms for finding a graph summary that least affects the accuracy of query results, and we evaluate our proposed algorithms using both real and synthetic data.",
"title": ""
},
{
"docid": "dff09daea034a765b858bc6a457cb6a7",
"text": "We study the problem of automatically and efficiently generating itineraries for users who are on vacation. We focus on the common case, wherein the trip duration is more than a single day. Previous efficient algorithms based on greedy heuristics suffer from two problems. First, the itineraries are often unbalanced, with excellent days visiting top attractions followed by days of exclusively lower-quality alternatives. Second, the trips often re-visit neighborhoods repeatedly in order to cover increasingly low-tier points of interest. Our primary technical contribution is an algorithm that addresses both these problems by maximizing the quality of the worst day. We give theoretical results showing that this algorithm»s competitive factor is within a factor two of the guarantee of the best available algorithm for a single day, across many variations of the problem. We also give detailed empirical evaluations using two distinct datasets:(a) anonymized Google historical visit data and(b) Foursquare public check-in data. We show first that the overall utility of our itineraries is almost identical to that of algorithms specifically designed to maximize total utility, while the utility of the worst day of our itineraries is roughly twice that obtained from other approaches. We then turn to evaluation based on human raters who score our itineraries only slightly below the itineraries created by human travel experts with deep knowledge of the area.",
"title": ""
},
{
"docid": "911ca70346689d6ba5fd01b1bc964dbe",
"text": "We present a novel texture compression scheme, called iPACKMAN, targeted for hardware implementation. In terms of image quality, it outperforms the previous de facto standard texture compression algorithms in the majority of all cases that we have tested. Our new algorithm is an extension of the PACKMAN texture compression system, and while it is a bit more complex than PACKMAN, it is still very low in terms of hardware complexity.",
"title": ""
},
{
"docid": "f2daa3fd822be73e3663520cc6afe741",
"text": "Low health literacy (LHL) remains a formidable barrier to improving health care quality and outcomes. Given the lack of precision of single demographic characteristics to predict health literacy, and the administrative burden and inability of existing health literacy measures to estimate health literacy at a population level, LHL is largely unaddressed in public health and clinical practice. To help overcome these limitations, we developed two models to estimate health literacy. We analyzed data from the 2003 National Assessment of Adult Literacy (NAAL), using linear regression to predict mean health literacy scores and probit regression to predict the probability of an individual having ‘above basic’ proficiency. Predictors included gender, age, race/ethnicity, educational attainment, poverty status, marital status, language spoken in the home, metropolitan statistical area (MSA) and length of time in U.S. All variables except MSA were statistically significant, with lower educational attainment being the strongest predictor. Our linear regression model and the probit model accounted for about 30% and 21% of the variance in health literacy scores, respectively, nearly twice as much as the variance accounted for by either education or poverty alone. Multivariable models permit a more accurate estimation of health literacy than single predictors. Further, such models can be applied to readily available administrative or census data to produce estimates of average health literacy and identify communities that would benefit most from appropriate, targeted interventions in the clinical setting to address poor quality care and outcomes related to LHL.",
"title": ""
},
{
"docid": "cc9de768281e58749cd073d25a97d39c",
"text": "The Dynamic Adaptive Streaming over HTTP (referred as MPEG DASH) standard is designed to provide high quality of media content over the Internet delivered from conventional HTTP web servers. The visual content, divided into a sequence of segments, is made available at a number of different bitrates so that an MPEG DASH client can automatically select the next segment to download and play back based on current network conditions. The task of transcoding media content to different qualities and bitrates is computationally expensive, especially in the context of large-scale video hosting systems. Therefore, it is preferably executed in a powerful cloud environment, rather than on the source computer (which may be a mobile device with limited memory, CPU speed and battery life). In order to support the live distribution of media events and to provide a satisfactory user experience, the overall processing delay of videos should be kept to a minimum. In this paper, we propose a novel dynamic scheduling methodology on video transcoding for MPEG DASH in a cloud environment, which can be adapted to different applications. The designed scheduler monitors the workload on each processor in the cloud environment and selects the fastest processors to run high-priority jobs. It also adjusts the video transcoding mode (VTM) according to the system load. Experimental results show that the proposed scheduler performs well in terms of the video completion time, system load balance, and video playback smoothness.",
"title": ""
},
{
"docid": "7eba71bb191a31bd87cd9d2678a7b860",
"text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.",
"title": ""
},
{
"docid": "4cd1eeb516d602390703b66d3201a9dc",
"text": "A thorough understanding of the orbit, structures within it, and complex spatial relationships among these structures bears relevance in a variety of neurosurgical cases. We describe the 3-dimensional surgical anatomy of the orbit and fragile and complex network of neurovascular architectures, flanked by a series of muscular and glandular structures, found within the orbital dura.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "666137f1b598a25269357d6926c0b421",
"text": "representation techniques. T he World Wide Web is possible because a set of widely established standards guarantees interoperability at various levels. Until now, the Web has been designed for direct human processing, but the next-generation Web, which Tim Berners-Lee and others call the “Semantic Web,” aims at machine-processible information.1 The Semantic Web will enable intelligent services—such as information brokers, search agents, and information filters—which offer greater functionality and interoperability than current stand-alone services. The Semantic Web will only be possible once further levels of interoperability have been established. Standards must be defined not only for the syntactic form of documents, but also for their semantic content. Notable among recent W3C standardization efforts are XML/XML schema and RDF/RDF schema, which facilitate semantic interoperability. In this article, we explain the role of ontologies in the architecture of the Semantic Web. We then briefly summarize key elements of XML and RDF, showing why using XML as a tool for semantic interoperability will be ineffective in the long run. We argue that a further representation and inference layer is needed on top of the Web’s current layers, and to establish such a layer, we propose a general method for encoding ontology representation languages into RDF/RDF schema. We illustrate the extension method by applying it to Ontology Interchange Language (OIL), an ontology representation and inference language.2",
"title": ""
},
{
"docid": "bc28f28d21605990854ac9649d244413",
"text": "Mobile devices can provide people with contextual information. This information may benefit a primary activity, assuming it is easily accessible. In this paper, we present DisplaySkin, a pose-aware device with a flexible display circling the wrist. DisplaySkin creates a kinematic model of a user's arm and uses it to place information in view, independent of body pose. In doing so, DisplaySkin aims to minimize the cost of accessing information without being intrusive. We evaluated our pose-aware display with a rotational pointing task, which was interrupted by a notification on DisplaySkin. Results show that a pose-aware display reduces the time required to respond to notifications on the wrist.",
"title": ""
},
{
"docid": "6fcfbe651d6c4f3a47bf07ee7d38eee2",
"text": "\"People-nearby applications\" (PNAs) are a form of ubiquitous computing that connect users based on their physical location data. One example is Grindr, a popular PNA that facilitates connections among gay and bisexual men. Adopting a uses and gratifications approach, we conducted two studies. In study one, 63 users reported motivations for Grindr use through open-ended descriptions. In study two, those descriptions were coded into 26 items that were completed by 525 Grindr users. Factor analysis revealed six uses and gratifications: social inclusion, sex, friendship, entertainment, romantic relationships, and location-based search. Two additional analyses examine (1) the effects of geographic location (e.g., urban vs. suburban/rural) on men's use of Grindr and (2) how Grindr use is related to self-disclosure of information. Results highlight how the mixed-mode nature of PNA technology may change the boundaries of online and offline space, and how gay and bisexual men navigate physical environments.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
{
"docid": "96c14e4c9082920edb835e85ce99dc21",
"text": "When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting leaks personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices based on audio signal processing and machine learning. By presenting a proof-of-concept system, WritingHacker, we show the usage of mobile devices to collect the sound of victims' handwriting, and to extract handwriting-specific features for machine learning based analysis. WritingHacker focuses on the situation where the victim's handwriting follows certain print style. An attacker can keep a mobile device, such as a common smart-phone, touching the desk used by the victim to record the audio signals of handwriting. Then the system can provide a word-level estimate for the content of the handwriting. To reduce the impacts of various writing habits and writing locations, the system utilizes the methods of letter clustering and dictionary filtering. Our prototype system's experimental results show that the accuracy of word recognition reaches around 50% - 60% under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.",
"title": ""
},
{
"docid": "f93e72b45a185e06d03d15791d312021",
"text": "BACKGROUND\nAbnormal scar development following burn injury can cause substantial physical and psychological distress to children and their families. Common burn scar prevention and management techniques include silicone therapy, pressure garment therapy, or a combination of both. Currently, no definitive, high-quality evidence is available for the effectiveness of topical silicone gel or pressure garment therapy for the prevention and management of burn scars in the paediatric population. Thus, this study aims to determine the effectiveness of these treatments in children.\n\n\nMETHODS\nA randomised controlled trial will be conducted at a large tertiary metropolitan children's hospital in Australia. Participants will be randomised to one of three groups: Strataderm® topical silicone gel only, pressure garment therapy only, or combined Strataderm® topical silicone gel and pressure garment therapy. Participants will include 135 children (45 per group) up to 16 years of age who are referred for scar management for a new burn. Children up to 18 years of age will also be recruited following surgery for burn scar reconstruction. Primary outcomes are scar itch intensity and scar thickness. Secondary outcomes include scar characteristics (e.g. colour, pigmentation, pliability, pain), the patient's, caregiver's and therapist's overall opinion of the scar, health service costs, adherence, health-related quality of life, treatment satisfaction and adverse effects. Measures will be completed on up to two sites per person at baseline and 1 week post scar management commencement, 3 months and 6 months post burn, or post burn scar reconstruction. Data will be analysed using descriptive statistics and univariate and multivariate regression analyses.\n\n\nDISCUSSION\nResults of this study will determine the effectiveness of three noninvasive scar interventions in children at risk of, and with, scarring post burn or post reconstruction.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry, ACTRN12616001100482 . Registered on 5 August 2016.",
"title": ""
},
{
"docid": "4a2de9235a698a3b5e517446088d2ac6",
"text": "In recent years, there has been a growing interest in designing multi-robot systems (hereafter MRSs) to provide cost effective, fault-tolerant and reliable solutions to a variety of automated applications. Here, we review recent advancements in MRSs specifically designed for cooperative object transport, which requires the members of MRSs to coordinate their actions to transport objects from a starting position to a final destination. To achieve cooperative object transport, a wide range of transport, coordination and control strategies have been proposed. Our goal is to provide a comprehensive summary for this relatively heterogeneous and fast-growing body of scientific literature. While distilling the information, we purposefully avoid using hierarchical dichotomies, which have been traditionally used in the field of MRSs. Instead, we employ a coarse-grain approach by classifying each study based on the transport strategy used; pushing-only, grasping and caging. We identify key design constraints that may be shared among these studies despite considerable differences in their design methods. In the end, we discuss several open challenges and possible directions for future work to improve the performance of the current MRSs. Overall, we hope to increasethe visibility and accessibility of the excellent studies in the field and provide a framework that helps the reader to navigate through them more effectively.",
"title": ""
},
{
"docid": "e7c2134b446c4e0e7343ea8812673597",
"text": "Lexical embeddings can serve as useful representations for words for a variety of NLP tasks, but learning embeddings for phrases can be challenging. While separate embeddings are learned for each word, this is infeasible for every phrase. We construct phrase embeddings by learning how to compose word embeddings using features that capture phrase structure and context. We propose efficient unsupervised and task-specific learning objectives that scale our model to large datasets. We demonstrate improvements on both language modeling and several phrase semantic similarity tasks with various phrase lengths. We make the implementation of our model and the datasets available for general use.",
"title": ""
},
{
"docid": "0a2be958c7323d3421304d1613421251",
"text": "Stock price forecasting has aroused great concern in research of economy, machine learning and other fields. Time series analysis methods are usually utilized to deal with this task. In this paper, we propose to combine news mining and time series analysis to forecast inter-day stock prices. News reports are automatically analyzed with text mining techniques, and then the mining results are used to improve the accuracy of time series analysis algorithms. The experimental result on a half year Chinese stock market data indicates that the proposed algorithm can help to improve the performance of normal time series analysis in stock price forecasting significantly. Moreover, the proposed algorithm also performs well in stock price trend forecasting.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "269c1cb7fe42fd6403733fdbd9f109e3",
"text": "Myofibroblasts are the key players in extracellular matrix remodeling, a core phenomenon in numerous devastating fibrotic diseases. Not only in organ fibrosis, but also the pivotal role of myofibroblasts in tumor progression, invasion and metastasis has recently been highlighted. Myofibroblast targeting has gained tremendous attention in order to inhibit the progression of incurable fibrotic diseases, or to limit the myofibroblast-induced tumor progression and metastasis. In this review, we outline the origin of myofibroblasts, their general characteristics and functions during fibrosis progression in three major organs: liver, kidneys and lungs as well as in cancer. We will then discuss the state-of-the art drug targeting technologies to myofibroblasts in context of the above-mentioned organs and tumor microenvironment. The overall objective of this review is therefore to advance our understanding in drug targeting to myofibroblasts, and concurrently identify opportunities and challenges for designing new strategies to develop novel diagnostics and therapeutics against fibrosis and cancer.",
"title": ""
}
] |
scidocsrr
|
d2948c21194cbc2254fd8603d3702a81
|
RaptorX-Property: a web server for protein structure property prediction
|
[
{
"docid": "44bd234a8999260420bb2a07934887af",
"text": "T e purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature,’ no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (191 1) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was",
"title": ""
},
{
"docid": "5a1f4efc96538c1355a2742f323b7a0e",
"text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.",
"title": ""
}
] |
[
{
"docid": "f1e5e00fe3a0610c47918de526e87dc6",
"text": "The current paper reviews research that has explored the intergenerational effects of the Indian Residential School (IRS) system in Canada, in which Aboriginal children were forced to live at schools where various forms of neglect and abuse were common. Intergenerational IRS trauma continues to undermine the well-being of today's Aboriginal population, and having a familial history of IRS attendance has also been linked with more frequent contemporary stressor experiences and relatively greater effects of stressors on well-being. It is also suggested that familial IRS attendance across several generations within a family appears to have cumulative effects. Together, these findings provide empirical support for the concept of historical trauma, which takes the perspective that the consequences of numerous and sustained attacks against a group may accumulate over generations and interact with proximal stressors to undermine collective well-being. As much as historical trauma might be linked to pathology, it is not possible to go back in time to assess how previous traumas endured by Aboriginal peoples might be related to subsequent responses to IRS trauma. Nonetheless, the currently available research demonstrating the intergenerational effects of IRSs provides support for the enduring negative consequences of these experiences and the role of historical trauma in contributing to present day disparities in well-being.",
"title": ""
},
{
"docid": "c38dc288a59e39785dfa87f46d2371e5",
"text": "Silver molybdate (Ag2MoO4) and silver tungstate (Ag2WO4) nanomaterials were prepared using two complementary methods, microwave assisted hydrothermal synthesis (MAH) (pH 7, 140 °C) and coprecipitation (pH 4, 70 °C), and were then used to prepare two core/shell composites, namely α-Ag2WO4/β-Ag2MoO4 (MAH, pH 4, 140 °C) and β-Ag2MoO4/β-Ag2WO4 (coprecipitation, pH 4, 70 °C). The shape and size of the microcrystals were observed by field emission scanning electron microscopy (FE-SEM), different morphologies such as balls and nanorods. These powders were characterized by X-ray powder diffraction and UV-vis (diffuse reflectance and photoluminescence). X-ray diffraction patterns showed that the Ag2MoO4 samples obtained by the two methods were single-phased and belonged to the β-Ag2MoO4 structure (spinel type). In contrast, the Ag2WO4 obtained in the two syntheses were structurally different: MAH exhibited the well-known tetrameric stable structure α-Ag2WO4, while coprecipitation afforded the metastable β-Ag2WO4 allotrope, coexisting with a weak amount of the α-phase. The optical gap of β-Ag2WO4 (3.3 eV) was evaluated for the first time. In contrast to β-Ag2MoO4/β-Ag2WO4, the αAg2WO4/β-Ag2MoO4 exhibited strongly-enhanced photoluminescence in the low-energy band (650 nm), tentatively explained by the creation of a large density of local defects (distortions) at the core-shell interface, due to the presence of two different types of MOx polyhedra in the two structures.",
"title": ""
},
{
"docid": "d8938884a61e7c353d719dbbb65d00d0",
"text": "Image encryption plays an important role to ensure confidential transmission and storage of image over internet. However, a real–time image encryption faces a greater challenge due to large amount of data involved. This paper presents a review on image encryption techniques of both full encryption and partial encryption schemes in spatial, frequency and hybrid domains.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
{
"docid": "c00c6539b78ed195224063bcff16fb12",
"text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.",
"title": ""
},
{
"docid": "d6707c10e68dcbb5cde0920631bdaf8b",
"text": "Game playing has been an important testbed for artificial intelligence. Board games, first-person shooters, and real-time strategy games have well-defined win conditions and rely on strong feedback from a simulated environment. Text adventures require natural language understanding to progress through the game but still have an underlying simulated environment. In this paper, we propose tabletop roleplaying games as a challenge due to an infinite action space, multiple (collaborative) players and models of the world, and no explicit reward signal. We present an approach for reinforcement learning agents that can play tabletop roleplaying games.",
"title": ""
},
{
"docid": "5411326f95abd20a141ad9e9d3ff72bf",
"text": "media files and almost universal use of email, information sharing is almost instantaneous anywhere in the world. Because many of the procedures performed in dentistry represent established protocols that should be read, learned and then practiced, it becomes clear that photography aids us in teaching or explaining to our patients what we think are common, but to them are complex and mysterious procedures. Clinical digital photography. Part 1: Equipment and basic documentation",
"title": ""
},
{
"docid": "ce174b6dce6e2dee62abca03b4a95112",
"text": "This article proposes a novel framework for representing and measuring local coherence. Central to this approach is the entity-grid representation of discourse, which captures patterns of entity distribution in a text. The algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional, syntactic, and referential information about discourse entities. We re-conceptualize coherence assessment as a learning task and show that our entity-based representation is well-suited for ranking-based generation and text classification tasks. Using the proposed representation, we achieve good performance on text ordering, summary coherence evaluation, and readability assessment.",
"title": ""
},
{
"docid": "3f33882e4bece06e7a553eb9133f8aa9",
"text": "Research on the relationship between affect and cognition in Artificial Intelligence in Education (AIEd) brings an important dimension to our understanding of how learning occurs and how it can be facilitated. Emotions are crucial to learning, but their nature, the conditions under which they occur, and their exact impact on learning for different learners in diverse contexts still needs to be mapped out. The study of affect during learning can be challenging, because emotions are subjective, fleeting phenomena that are often difficult for learners to report accurately and for observers to perceive reliably. Context forms an integral part of learners’ affect and the study thereof. This review provides a synthesis of the current knowledge elicitation methods that are used to aid the study of learners’ affect and to inform the design of intelligent technologies for learning. Advantages and disadvantages of the specific methods are discussed along with their respective potential for enhancing research in this area, and issues related to the interpretation of data that emerges as the result of their use. References to related research are also provided together with illustrative examples of where the individual methods have been used in the past. Therefore, this review is intended as a resource for methodological decision making for those who want to study emotions and their antecedents in AIEd contexts, i.e. where the aim is to inform the design and implementation of an intelligent learning environment or to evaluate its use and educational efficacy.",
"title": ""
},
{
"docid": "cd877197b06304b379d5caf9b5b89d30",
"text": "Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral \"correlates.\" As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of \"natural experiments,\" such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.",
"title": ""
},
{
"docid": "0e521af53f9faf4fee38843a22ec2185",
"text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.",
"title": ""
},
{
"docid": "fb4630a6b558ac9b8d8444275e1978e3",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "12a8d007ca4dce21675ddead705c7b62",
"text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.",
"title": ""
},
{
"docid": "cb70ab2056242ca739adde4751fbca2c",
"text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "545509f9e3aa65921a7d6faa41247ae6",
"text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.",
"title": ""
},
{
"docid": "38f289b085f2c6e2d010005f096d8fd7",
"text": "We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.",
"title": ""
},
{
"docid": "7d14bd767964cba3cfc152ee20c7ffbc",
"text": "Most typical statistical and machine learning approaches to time series modeling optimize a singlestep prediction error. In multiple-step simulation, the learned model is iteratively applied, feeding through the previous output as its new input. Any such predictor however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning; the training data serves as a “demonstrator” by providing corrections for the errors made during multi-step prediction. By this reduction of multistep time series prediction to imitation learning, we establish theoretically a strong performance guarantee on the relation between training error and the multi-step prediction error. We present experimental results of our method, DAD, and show significant improvement over the traditional approach in two notably different domains, dynamic system modeling and video texture prediction. Determining models for time series data is important in applications ranging from market prediction to the simulation of chemical processes and robotic systems. Many supervised learning approaches have been proposed for this task, such as neural networks (Narendra and Parthasarathy 1990), Expectation-Maximization (Ghahramani and Roweis 1999; Coates, Abbeel, and Ng 2008), Support Vector Regression (Müller, Smola, and Rätsch 1997), Gaussian process regression (Wang, Hertzmann, and Blei 2005; Ko et al. 2007), Nadaraya-Watson kernel regression (Basharat and Shah 2009), Gaussian mixture models (Khansari-Zadeh and Billard 2011), and Kernel PCA (Ralaivola and D’Alche-Buc 2004). Common to most of these methods is that the objective being optimized is the single-step prediction loss. However, this criterion does not guarantee accurate multiple-step simulation accuracy in which the output of a prediction step is used as input for the next inference. The prevalence of single-step modeling approaches is a result of the difficulty in directly optimizing the multipleCopyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. step prediction error. As an example, consider fitting a simple linear dynamical system model for the multi-step error over the time horizon T from an initial condition x0,",
"title": ""
},
{
"docid": "dd3781fe97c7dd935948c55584313931",
"text": "The radiation of RFID antitheft gate system has been simulated in FEKO. The obtained numerical results for the electric field and magnetic field have been compared to the exposure limits proposed by the ICNIRP Guidelines. No significant violation of limits, regarding both occupational and public exposure, has been shown.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] |
scidocsrr
|
e1340c9d28265bce016b4422fc1d0ecc
|
Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto
|
[
{
"docid": "931e6f034abd1a3004d021492382a47a",
"text": "SARSA (Sutton, 1996) is applied to a simulated, traac-light control problem (Thorpe, 1997) and its performance is compared with several, xed control strategies. The performance of SARSA with four diierent representations of the current state of traac is analyzed using two reinforcement schemes. Training on one intersection is compared to, and is as eeective as training on all intersections in the environment. SARSA is shown to be better than xed-duration light timing and four-way stops for minimizing total traac travel time, individual vehicle travel times, and vehicle wait times. Comparisons of performance using a constant reinforcement function versus a variable reinforcement function dependent on the number of vehicles at an intersection showed that the variable reinforcement resulted in slightly improved performance for some cases.",
"title": ""
}
] |
[
{
"docid": "7933e531385d90a6b485abe155f06e3a",
"text": "We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure. For which we obtain generalization error guarantees and derive an optimization algorithm based on the Fenchel dual representation. Experiments on real-world datasets from the application domains of computational biology and computer vision show that convex localized multiple kernel learning can achieve higher prediction accuracies than its global and non-convex local counterparts.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "7210c2e82441b142f722bcc01bfe9aca",
"text": "In the beginning of the last decade, agile methodologies emerged as a response to software development processes that were based on rigid approaches. In fact, the flexible characteristics of agile methods are expected to be suitable to the less-defined and uncertain nature of software development. However, many studies in this area lack empirical evaluation in order to provide more confident evidences about which contexts the claims are true. This paper reports an empirical study performed to analyze the impact of Scrum adoption on customer satisfaction as an external success perspective for software development projects in a software intensive organization. The study uses data from real-life projects executed in a major software intensive organization located in a nation wide software ecosystem. The empirical method applied was a cross-sectional survey using a sample of 19 real-life software development projects involving 156 developers. The survey aimed to determine whether there is any impact on customer satisfaction caused by the Scrum adoption. However, considering that sample, our results indicate that it was not possible to establish any evidence that using Scrum may help to achieve customer satisfaction and, consequently, increase the success rates in software projects, in contrary to general claims made by Scrum's advocates.",
"title": ""
},
{
"docid": "7c5d0139d729ad6f90332a9d1cd28f70",
"text": "Cloud based ERP system architecture provides solutions to all the difficulties encountered by conventional ERP systems. It provides flexibility to the existing ERP systems and improves overall efficiency. This paper aimed at comparing the performance traditional ERP systems with cloud base ERP architectures. The challenges before the conventional ERP implementations are analyzed. All the main aspects of an ERP systems are compared with cloud based approach. The distinct advantages of cloud ERP are explained. The difficulties in cloud architecture are also mentioned.",
"title": ""
},
{
"docid": "cec6b4d1e547575a91bdd7e852ecbc3c",
"text": "The apps installed on a smartphone can reveal much information about a user, such as their medical conditions, sexual orientation, or religious beliefs. In addition, the presence or absence of particular apps on a smartphone can inform an adversary, who is intent on attacking the device. In this paper, we show that a passive eavesdropper can feasibly identify smartphone apps by fingerprinting the network traffic that they send. Although SSL/TLS hides the payload of packets, side-channel data, such as packet size and direction is still leaked from encrypted connections. We use machine learning techniques to identify smartphone apps from this side-channel data. In addition to merely fingerprinting and identifying smartphone apps, we investigate how app fingerprints change over time, across devices, and across different versions of apps. In addition, we introduce strategies that enable our app classification system to identify and mitigate the effect of ambiguous traffic, i.e., traffic in common among apps, such as advertisement traffic. We fully implemented a framework to fingerprint apps and ran a thorough set of experiments to assess its performance. We fingerprinted 110 of the most popular apps in the Google Play Store and were able to identify them six months later with up to 96% accuracy. Additionally, we show that app fingerprints persist to varying extents across devices and app versions.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "b6fdde5d6baeb546fd55c749af14eec1",
"text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.",
"title": ""
},
{
"docid": "4e23da50d4f1f0c4ecdbbf5952290c98",
"text": "[Context and motivation] User stories are an increasingly popular textual notation to capture requirements in agile software development. [Question/Problem] To date there is no scientific evidence on the effectiveness of user stories. The goal of this paper is to explore how practicioners perceive this artifact in the context of requirements engineering. [Principal ideas/results] We explore perceived effectiveness of user stories by reporting on a survey with 182 responses from practitioners and 21 follow-up semi-structured interviews. The data shows that practitioners agree that using user stories, a user story template and quality guidelines such as the INVEST mnemonic improve their productivity and the quality of their work deliverables. [Contribution] By combining the survey data with 21 semi-structured follow-up interviews, we present 12 findings on the usage and perception of user stories by practitioners that employ user stories in their everyday work environment.",
"title": ""
},
{
"docid": "d9eed063ea6399a8f33c6cbda3a55a62",
"text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "35f74f11a60ad58171b74e755cd0476b",
"text": "Recent studies show that the performances of face recognition systems degrade in presence of makeup on face. In this paper, a facial makeup detector is proposed to further reduce the impact of makeup in face recognition. The performance of the proposed technique is tested using three publicly available facial makeup databases. The proposed technique extracts a feature vector that captures the shape and texture characteristics of the input face. After feature extraction, two types of classifiers (i.e. SVM and Alligator) are applied for comparison purposes. In this study, we observed that both classifiers provide significant makeup detection accuracy. There are only few studies regarding facial makeup detection in the state-of-the art. The proposed technique is novel and outperforms the state-of-the art significantly.",
"title": ""
},
{
"docid": "1301030c091eeb23d43dd3bfa6763e77",
"text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "41e3ec35f9ca27eef6e70c963628281e",
"text": "An emerging problem in computer vision is the reconstruction of 3D shape and pose of an object from a single image. Hitherto, the problem has been addressed through the application of canonical deep learning methods to regress from the image directly to the 3D shape and pose labels. These approaches, however, are problematic from two perspectives. First, they are minimizing the error between 3D shapes and pose labels - with little thought about the nature of this “label error” when reprojecting the shape back onto the image. Second, they rely on the onerous and ill-posed task of hand labeling natural images with respect to 3D shape and pose. In this paper we define the new task of pose-aware shape reconstruction from a single image, and we advocate that cheaper 2D annotations of objects silhouettes in natural images can be utilized. We design architectures of pose-aware shape reconstruction which reproject the predicted shape back on to the image using the predicted pose. Our evaluation on several object categories demonstrates the superiority of our method for predicting pose-aware 3D shapes from natural images.",
"title": ""
},
{
"docid": "464f7d25cb2a845293a3eb8c427f872f",
"text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.",
"title": ""
},
{
"docid": "139adbef378fa0b195477e75d4d71e12",
"text": "Alu elements are primate-specific repeats and comprise 11% of the human genome. They have wide-ranging influences on gene expression. Their contribution to genome evolution, gene regulation and disease is reviewed.",
"title": ""
},
{
"docid": "9573c50b4cd5dfdcabd09676a757d06f",
"text": "Fall detection is a major challenge in the public healthcare domain, especially for the elderly as the decline of their physical fitness, and timely and reliable surveillance is necessary to mitigate the negative effects of falls. This paper develops a novel fall detection system based on a wearable device. The system monitors the movements of human body, recognizes a fall from normal daily activities by an effective quaternion algorithm, and automatically sends request for help to the caregivers with the patient's location.",
"title": ""
},
{
"docid": "4075eb657e87ad13e0f47ab36d33df54",
"text": "MOTIVATION\nControlled vocabularies such as the Medical Subject Headings (MeSH) thesaurus and the Gene Ontology (GO) provide an efficient way of accessing and organizing biomedical information by reducing the ambiguity inherent to free-text data. Different methods of automating the assignment of MeSH concepts have been proposed to replace manual annotation, but they are either limited to a small subset of MeSH or have only been compared with a limited number of other systems.\n\n\nRESULTS\nWe compare the performance of six MeSH classification systems [MetaMap, EAGL, a language and a vector space model-based approach, a K-Nearest Neighbor (KNN) approach and MTI] in terms of reproducing and complementing manual MeSH annotations. A KNN system clearly outperforms the other published approaches and scales well with large amounts of text using the full MeSH thesaurus. Our measurements demonstrate to what extent manual MeSH annotations can be reproduced and how they can be complemented by automatic annotations. We also show that a statistically significant improvement can be obtained in information retrieval (IR) when the text of a user's query is automatically annotated with MeSH concepts, compared to using the original textual query alone.\n\n\nCONCLUSIONS\nThe annotation of biomedical texts using controlled vocabularies such as MeSH can be automated to improve text-only IR. Furthermore, the automatic MeSH annotation system we propose is highly scalable and it generates improvements in IR comparable with those observed for manual annotations.",
"title": ""
},
{
"docid": "6e4dfb4c6974543246003350b5e3e07f",
"text": "Zero-shot object detection is an emerging research topic that aims to recognize and localize previously ‘unseen’ objects. This setting gives rise to several unique challenges, e.g., highly imbalanced positive vs. negative instance ratio, ambiguity between background and unseen classes and the proper alignment between visual and semantic concepts. Here, we propose an end-to-end deep learning framework underpinned by a novel loss function that puts more emphasis on difficult examples to avoid class imbalance. We call our objective the ‘Polarity loss’ because it explicitly maximizes the gap between positive and negative predictions. Such a margin maximizing formulation is important as it improves the visual-semantic alignment while resolving the ambiguity between background and unseen. Our approach is inspired by the embodiment theories in cognitive science, that claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word dictionary) and the perception of the physical world (visual imagery). To this end, we learn to attend to a dictionary of related semantic concepts that eventually refines the noisy semantic embeddings and helps establish a better synergy between visual and semantic domains. Our extensive results on MS-COCO and Pascal VOC datasets show as high as 14× mAP improvement over state of the art.1",
"title": ""
},
{
"docid": "e33e3e46a4bcaaae32a5743672476cd9",
"text": "This paper is based on the notion of data quality. It includes correctness, completeness and minimality for which a notational framework is shown. In long living databases the maintenance of data quality is a rst order issue. This paper shows that even well designed and implemented information systems cannot guarantee correct data in any circumstances. It is shown that in any such system data quality tends to decrease and therefore some data correction procedure should be applied from time to time. One aspect of increasing data quality is the correction of data values. Characteristics of a software tool which supports this data value correction process are presented and discussed.",
"title": ""
}
] |
scidocsrr
|
b35e238b5c76fec76d33eb3e0dae3c06
|
Using trust for collaborative filtering in eCommerce
|
[
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
}
] |
[
{
"docid": "c077231164a8a58f339f80b83e5b4025",
"text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.",
"title": ""
},
{
"docid": "6a5abcabca3d4bb0696a9f19dd5e358f",
"text": "Distributional models of meaning (see Turney and Pantel (2010) for an overview) are based on the pragmatic hypothesis that meanings of words are deducible from the contexts in which they are often used. This hypothesis is formalized using vector spaces, wherein a word is represented as a vector of cooccurrence statistics with a set of context dimensions. With the increasing availability of large corpora of text, these models constitute a well-established NLP technique for evaluating semantic similarities. Their methods however do not scale up to larger text constituents (i.e. phrases and sentences), since the uniqueness of multi-word expressions would inevitably lead to data sparsity problems, hence to unreliable vectorial representations. The problem is usually addressed by the provision of a compositional function, the purpose of which is to prepare a vector for a phrase or sentence by combining the vectors of the words therein. This line of research has led to the field of compositional distributional models of meaning (CDMs), where reliable semantic representations are provided for phrases, sentences, and discourse units such as dialogue utterances and even paragraphs or documents. As a result, these models have found applications in various NLP tasks, for example paraphrase detection; sentiment analysis; dialogue act tagging; machine translation; textual entailment; and so on, in many cases presenting stateof-the-art performance. Being the natural evolution of the traditional and well-studied distributional models at the word level, CDMs are steadily evolving to a popular and active area of NLP. The topic has inspired a number of workshops and tutorials in top CL conferences such as ACL and EMNLP, special issues at high-profile journals, and it attracts a substantial amount of submissions in annual NLP conferences. The approaches employed by CDMs are as much as diverse as statistical machine leaning (Baroni and Zamparelli, 2010), linear algebra (Mitchell and Lapata, 2010), simple category theory (Coecke et al., 2010), or complex deep learning architectures based on neural networks and borrowing ideas from image processing (Socher et al., 2012; Kalchbrenner et al., 2014; Cheng and Kartsaklis, 2015). Furthermore, they create opportunities for interesting novel research, related for example to efficient methods for creating tensors for relational words such as verbs and adjectives (Grefenstette and Sadrzadeh, 2011), the treatment of logical and functional words in a distributional setting (Sadrzadeh et al., 2013; Sadrzadeh et al., 2014), or the role of polysemy and the way it affects composition (Kartsaklis and Sadrzadeh, 2013; Cheng and Kartsaklis, 2015). The purpose of this tutorial is to provide a concise introduction to this emerging field, presenting the different classes of CDMs and the various issues related to them in sufficient detail. The goal is to allow the student to understand the general philosophy of each approach, as well as its advantages and limitations with regard to the other alternatives.",
"title": ""
},
{
"docid": "6ae4be7a85f7702ae76649d052d7c37d",
"text": "information technologies as “the ability to reformulate knowledge, to express oneself creatively and appropriately, and to produce and generate information (rather than simply to comprehend it).” Fluency, according to the report, “goes beyond traditional notions of computer literacy...[It] requires a deeper, more essential understanding and mastery of information technology for information processing, communication, and problem solving than does computer literacy as traditionally defined.” Scratch is a networked, media-rich programming environment designed to enhance the development of technological fluency at after-school centers in economically-disadvantaged communities. Just as the LEGO MindStorms robotics kit added programmability to an activity deeply rooted in youth culture (building with LEGO bricks), Scratch adds programmability to the media-rich and network-based activities that are most popular among youth at afterschool computer centers. Taking advantage of the extraordinary processing power of current computers, Scratch supports new programming paradigms and activities that were previously infeasible, making it better positioned to succeed than previous attempts to introduce programming to youth. In the past, most initiatives to improve technological fluency have focused on school classrooms. But there is a growing recognition that after-school centers and other informal learning settings can play an important role, especially in economicallydisadvantaged communities, where schools typically have few technological resources and many young people are alienated from the formal education system. Our working hypothesis is that, as kids work on personally meaningful Scratch projects such as animated stories, games, and interactive art, they will develop technological fluency, mathematical and problem solving skills, and a justifiable selfconfidence that will serve them well in the wider spheres of their lives. During the past decade, more than 2000 community technology centers (CTCs) opened in the United States, specifically to provide better access to technology in economically-disadvantaged communities. But most CTCs support only the most basic computer activities such as word processing, email, and Web browsing, so participants do not gain the type of fluency described in the NRC report. Similarly, many after-school centers (which, unlike CTCs, focus exclusively on youth) have begun to introduce computers, but they too tend to offer only introductory computer activities, sometimes augmented by educational games.",
"title": ""
},
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
},
{
"docid": "1ebb333d5a72c649cd7d7986f5bf6975",
"text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under stand. The notion of plans is introduced to ac count for general knowledge about novel situa tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys tem or any working computational system the res triction of world knowledge need not critically concern him. Our feeling is that an effective characteri zation of knowledge can result in a real under standing system in the not too distant future. We expect that programs based on the theory we out …",
"title": ""
},
{
"docid": "8a5bbfcb8084c0b331e18dcf64cdf915",
"text": "This paper describes wildcards, a new language construct designed to increase the flexibility of object-oriented type systems with parameterized classes. Based on the notion of use-site variance, wildcards provide a type safe abstraction over different instantiations of parameterized classes, by using '?' to denote unspecified type arguments. Thus they essentially unify the distinct families of classes often introduced by parametric polymorphism. Wildcards are implemented as part of the upcoming addition of generics to the Java™ programming language, and will thus be deployed world-wide as part of the reference implementation of the Java compiler javac available from Sun Microsystems, Inc. By providing a richer type system, wildcards allow for an improved type inference scheme for polymorphic method calls. Moreover, by means of a novel notion of wildcard capture, polymorphic methods can be used to give symbolic names to unspecified types, in a manner similar to the \"open\" construct known from existential types. Wildcards show up in numerous places in the Java Platform APIs of the upcoming release, and some of the examples in this paper are taken from these APIs.",
"title": ""
},
{
"docid": "1912f9ad509e446d3e34e3c6dccd4c78",
"text": "Lumbar disc herniation is a common male disease. In the past, More academic attention was directed to its relationship with lumbago and leg pain than to its association with andrological diseases. Studies show that central lumber intervertebral disc herniation may cause cauda equina injury and result in premature ejaculation, erectile dysfunction, chronic pelvic pain syndrome, priapism, and emission. This article presents an overview on the correlation between central lumbar intervertebral disc herniation and andrological diseases, focusing on the aspects of etiology, pathology, and clinical progress, hoping to invite more attention from andrological and osteological clinicians.",
"title": ""
},
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f82a57baca9a0381c9b2af0368a5531e",
"text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.",
"title": ""
},
{
"docid": "e4a74019c34413f8ace000512ab26da0",
"text": "Scaling the transaction throughput of decentralized blockchain ledgers such as Bitcoin and Ethereum has been an ongoing challenge. Two-party duplex payment channels have been designed and used as building blocks to construct linked payment networks, which allow atomic and trust-free payments between parties without exhausting the resources of the blockchain.\n Once a payment channel, however, is depleted (e.g., because transactions were mostly unidirectional) the channel would need to be closed and re-funded to allow for new transactions. Users are envisioned to entertain multiple payment channels with different entities, and as such, instead of refunding a channel (which incurs costly on-chain transactions), a user should be able to leverage his existing channels to rebalance a poorly funded channel.\n To the best of our knowledge, we present the first solution that allows an arbitrary set of users in a payment channel network to securely rebalance their channels, according to the preferences of the channel owners. Except in the case of disputes (similar to conventional payment channels), our solution does not require on-chain transactions and therefore increases the scalability of existing blockchains. In our security analysis, we show that an honest participant cannot lose any of its funds while rebalancing. We finally provide a proof of concept implementation and evaluation for the Ethereum network.",
"title": ""
},
{
"docid": "fc3283b1d81de45772ec730c1f5185f1",
"text": "In this paper, three different techniques which can be used for control of three phase PWM Rectifier are discussed. Those three control techniques are Direct Power Control, Indirect Power Control or Voltage Oriented Control and Hysteresis Control. The main aim of this paper is to compare and establish the merits and demerits of each technique in various aspects mainly regarding switching frequency hence switching loss, computation and transient state behavior. Each control method is studied in detail and simulated using Matlab/Simulink in order to make the comparison.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
},
{
"docid": "f9c938a98621f901c404d69a402647c7",
"text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.",
"title": ""
},
{
"docid": "16d2e0605d45c69302c71b8434b7a23a",
"text": "Emotions play an important role in human cognition, perception, decision making, and interaction. This paper presents a six-layer biologically inspired feedforward neural network to discriminate human emotions from EEG. The neural network comprises a shift register memory after spectral filtering for the input layer, and the estimation of coherence between each pair of input signals for the hidden layer. EEG data are collected from 57 healthy participants from eight locations while subjected to audio-visual stimuli. Discrimination of emotions from EEG is investigated based on valence and arousal levels. The accuracy of the proposed neural network is compared with various feature extraction methods and feedforward learning algorithms. The results showed that the highest accuracy is achieved when using the proposed neural network with a type of radial basis function.",
"title": ""
},
{
"docid": "a18da0c7d655fee44eebdf61c7371022",
"text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also in a preliminary study of quality assessment in the presence of transmission losses.",
"title": ""
},
{
"docid": "550e19033cb00938aed89eb3cce50a76",
"text": "This paper presents a high gain wide band 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on inexpensive FR4 substrate and placed 1mm above ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of FR4 substrate. RIS reduces the coupling between the ground plane and MSA array and therefore increases the efficiency of antenna. It enhances the bandwidth and gain of the antenna. RIS also helps in reduction of SLL and cross polarization. This MSA array with RIS is place in a Fabry Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 array of square parasitic patches are fed by MSA array fabricated on a FR4 superstrate which forms the partially reflecting surface of FPC. The FR4 superstrate layer is supported with help of dielectric rods at the edges with air at about λ0/2 from ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. The VSWR is <; 2 is obtained over 5.725-6.4 GHz, which covers 5.725-5.875 GHz ISM WLAN frequency band and 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -2ο dB, front to back lobe ratio higher than 20 dB and more than 70 % antenna efficiency. A prototype structure is realized and tested. The measured results satisfy with the simulation results. The antenna can be a suitable candidate for access point, satellite communication, mobile base station antenna and terrestrial communication system.",
"title": ""
},
{
"docid": "1615e93f027c6f6f400ce1cc7a1bb8aa",
"text": "In the recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platform are more popular. For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other",
"title": ""
},
{
"docid": "82fdd14f7766e8afe9b11a255073b3ce",
"text": "We develop a stochastic model of a simple protocol for the self-configuration of IP network interfaces. We describe the mean cost that incurs during a selfconfiguration phase and describe a trade-off between reliability and speed. We derive a cost function which we use to derive optimal parameters. We show that optimal cost and optimal reliability are qualities that cannot be achieved at the same time. Keywords—Embedded control software; IP; zeroconf protocol; cost optimisation",
"title": ""
},
{
"docid": "7a62e5a78eabbcbc567d5538a2f35434",
"text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt",
"title": ""
}
] |
scidocsrr
|
ebba225894ba7ed1352745abc47dd099
|
A SLIM WIDEBAND AND CONFORMAL UHF RFID TAG ANTENNA BASED ON U-SHAPED SLOTS FOR METALLIC OBJECTS
|
[
{
"docid": "48ea1d793f0ae2b79f406c87fe5980b5",
"text": "In this paper, we describe a UHF radio-frequency-identification tag test and measurement system based on National Instruments LabVIEW-controlled PXI RF hardware. The system operates in 800-1000-MHz frequency band with a variable output power up to 30 dBm and is capable of testing tags using Gen2 and other protocols. We explain testing methods and metrics, describe in detail the construction of our system, show its operation with real tag measurement examples, and draw general conclusions.",
"title": ""
}
] |
[
{
"docid": "44bb8c5202edadc2f14fa27c0fbb9705",
"text": "In this paper, a new Near Field Communication (NFC) antenna solution that can be used for portable devices with metal back cover is proposed. In particular, there are two holes on metal back cover, a slit between the two holes, and antenna coil located behind the metal cover. With such an arrangement, the shielding effect of the metal cover can be totally eliminated. Simulated and measured results of the proposed antenna are presented.",
"title": ""
},
{
"docid": "abc1be23f803390c2aadd58059eb177e",
"text": "In the atomic force microscope (AFM) scanning system, the piezoscanner is significant in realizing high-performance tasks. To cater to this demand, a novel compliant two-degrees-of-freedom (2-DOF) micro-/nanopositioning stage with modified lever displacement amplifiers is proposed in this paper, which can be selected to work in dual modes. Moreover, the modified double four-bar P (P denotes prismatic) joints are adopted in designing the flexible limbs. The established models for the mechanical performance evaluation in terms of kinetostatics, dynamics, and workspace are validated by finite-element analysis. After a series of dimension optimizations carried out via particle swarm optimization algorithm, a novel active disturbance rejection controller, including the components of nonlinearity tracking differentiator, extended state observer, and nonlinear state error feedback, is designed for automatically estimating and suppressing the plant uncertainties arising from the hysteresis nonlinearity, creep effect, sensor noises, and other unknown disturbances. The closed-loop control results based on simulation and prototype indicate that the two working natural frequencies of the proposed stage are approximated to be 805.19 and 811.31 Hz, the amplification ratio in two axes is about 4.2, and the workspace is around 120 ×120 μm2, while the cross-coupling between the two axes is kept within 2%. All of the results indicate that the developed micro-/nanopositioning system has a good property for high-performance AFM scanning.",
"title": ""
},
{
"docid": "d1041afcb50a490034740add2cce3f0d",
"text": "Inverse synthetic aperture radar imaging of moving targets with a stepped frequency waveform presents unique challenges. Intra-step target motion introduces phase discontinuities between frequency bands, which in turn produce degraded range side lobes. Frequency stitching of the stepped-frequency waveform to emulate a contiguous bandwidth can dramatically reduce the effective pulse repetition frequency, which then may impact the maximize target size that can be unambiguously measured and imaged via ISAR. This paper analyzes these effects and validates results via simulated data.",
"title": ""
},
{
"docid": "be7d32aeffecc53c5d844a8f90cd5ce0",
"text": "Wordnets play a central role in many natural language processing tasks. This paper introduces a multilingual editing system for the Open Multilingual Wordnet (OMW: Bond and Foster, 2013). Wordnet development, like most lexicographic tasks, is slow and expensive. Moving away from the original Princeton Wordnet (Fellbaum, 1998) development workflow, wordnet creation and expansion has increasingly been shifting towards an automated and/or interactive system facilitated task. In the particular case of human edition/expansion of wordnets, a few systems have been developed to aid the lexicographers’ work. Unfortunately, most of these tools have either restricted licenses, or have been designed with a particular language in mind. We present a webbased system that is capable of multilingual browsing and editing for any of the hundreds of languages made available by the OMW. All tools and guidelines are freely available under an open license.",
"title": ""
},
{
"docid": "0e002aae88332f8143e6f3a19c4c578b",
"text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings of conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.",
"title": ""
},
{
"docid": "37adbe33e4d83794fa85e7155a3e51d4",
"text": "Information technology matters to business success because it directly affects the mechanisms through which they create and capture value to earn a profit: IT is thus integral to a firm’s business-level strategy. Much of the extant research on the IT/strategy relationship, however, inaccurately frames IT as only a functionallevel strategy. This widespread under-appreciation of the business-level role of IT indicates a need for substantial retheorizing of its role in strategy and its complex and interdependent relationship with the mechanisms through which firms generate profit. Using a comprehensive framework of potential profit mechanisms, we argue that while IT activities remain integral to the functional-level strategies of the firm, they also play several significant roles in business strategy, with substantial performance implications. IT affects industry structure and the set of business-level strategic alternatives and value-creation opportunities that a firm may pursue. Along with complementary organizational changes, IT both enhances the firm’s current (ordinary) capabilities and enables new (dynamic) capabilities, including the flexibility to focus on rapidly changing opportunities or to abandon losing initiatives while salvaging substantial asset value. Such digitally attributable capabilities also determine how much of this value, once created, can be captured by the firm—and how much will be dissipated through competition or through the power of value chain partners, the governance of which itself depends on IT. We explore these business-level strategic roles of IT and discuss several provocative implications and future research directions in the converging information systems and strategy domains.",
"title": ""
},
{
"docid": "14fac04f802367a56a03fcdce88044f8",
"text": "Humidity measurement is one of the most significant issues in various areas of applications such as instrumentation, automated systems, agriculture, climatology and GIS. Numerous sorts of humidity sensors fabricated and developed for industrial and laboratory applications are reviewed and presented in this article. The survey frequently concentrates on the RH sensors based upon their organic and inorganic functional materials, e.g., porous ceramics (semiconductors), polymers, ceramic/polymer and electrolytes, as well as conduction mechanism and fabrication technologies. A significant aim of this review is to provide a distinct categorization pursuant to state of the art humidity sensor types, principles of work, sensing substances, transduction mechanisms, and production technologies. Furthermore, performance characteristics of the different humidity sensors such as electrical and statistical data will be detailed and gives an added value to the report. By comparison of overall prospects of the sensors it was revealed that there are still drawbacks as to efficiency of sensing elements and conduction values. The flexibility offered by thick film and thin film processes either in the preparation of materials or in the choice of shape and size of the sensor structure provides advantages over other technologies. These ceramic sensors show faster response than other types.",
"title": ""
},
{
"docid": "f4271386b02994f33a5eae3c6c67a879",
"text": "Joint FAO/WHO expert's consultation report defines probiotics as: Live microorganisms which when administered in adequate amounts confer a health benefit on the host. Most commonly used probiotics are Lactic acid bacteria (LAB) and bifidobacteria. There are other examples of species used as probiotics (certain yeasts and bacilli). Probiotic supplements are popular now a days. From the beginning of 2000, research on probiotics has increased remarkably. Probiotics are now day's widely studied for their beneficial effects in treatment of many prevailing diseases. Here we reviewed the beneficiary effects of probiotics in some diseases.",
"title": ""
},
{
"docid": "d03a86459dd461dcfac842ae55ae4ebb",
"text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.",
"title": ""
},
{
"docid": "e65c5458a27fc5367be4fd6024e8eb43",
"text": "The aims of this article are to review low-voltage vs high-voltage electrical burn complications in adults and to identify novel areas that are not recognized to improve outcomes. An extensive literature search on electrical burn injuries was performed using OVID MEDLINE, PubMed, and EMBASE databases from 1946 to 2015. Studies relating to outcomes of electrical injury in the adult population (≥18 years of age) were included in the study. Forty-one single-institution publications with a total of 5485 electrical injury patients were identified and included in the present study. Fourty-four percent of these patients were low-voltage injuries (LVIs), 38.3% high-voltage injuries (HVIs), and 43.7% with voltage not otherwise specified. Forty-four percentage of studies did not characterize outcomes according to LHIs vs HVIs. Reported outcomes include surgical, medical, posttraumatic, and others (long-term/psychological/rehabilitative), all of which report greater incidence rates in HVI than in LVI. Only two studies report on psychological outcomes such as posttraumatic stress disorder. Mortality rates from electrical injuries are 2.6% in LVI, 5.2% in HVI, and 3.7% in not otherwise specified. Coroner's reports revealed a ratio of 2.4:1 for deaths caused by LVI compared with HVI. HVIs lead to greater morbidity and mortality than LVIs. However, the results of the coroner's reports suggest that immediate mortality from LVI may be underestimated. Furthermore, on the basis of this analysis, we conclude that the majority of studies report electrical injury outcomes; however, the majority of them do not analyze complications by low vs high voltage and often lack long-term psychological and rehabilitation outcomes after electrical injury indicating that a variety of central aspects are not being evaluated or assessed.",
"title": ""
},
{
"docid": "5ee490a307a0b6108701225170690386",
"text": "An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenances. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in \"normal\" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity.",
"title": ""
},
{
"docid": "e325351fd8eda7ebebd46df0d0a80c19",
"text": "This paper proposes a CLL resonant dc-dc converter as an option for offline applications. This topology can achieve zero-voltage switching from zero load to a full load and zero-current switching for output rectifiers and makes the implementation of a secondary rectifier easy. This paper also presents a novel methodology for designing CLL resonant converters based on efficiency and holdup time requirements. An optimal transformer structure is proposed, which uses a current-type synchronous rectifier (SR) drive scheme. An 800-kHz 250-W CLL resonant converter prototype is built to verify the proposed circuit, design method, transformer structure, and SR drive scheme.",
"title": ""
},
{
"docid": "1d0dbfe15768703f7d5a1a56bbee3cac",
"text": "This paper investigates the effect of non-audit services on audit quality. Following the announcement of the requirement to disclose non-audit fees, approximately one-third of UK quoted companies disclosed before the requirement became effective. Whilst distressed companies were more likely to disclose early, auditor size, directors’ shareholdings and non-audit fees were not signi cantly correlated with early disclosure. These results cast doubt on the view that voluntary disclosure of non-audit fees was used to signal audit quality. The evidence also indicates a positive weakly signi cant relationship between disclosed non-audit fees and audit quali cations. This suggests that when non-audit fees are disclosed, the provision of non-audit services does not reduce audit quality.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "63339fb80c01c38911994cd326e483a3",
"text": "Older adults are becoming a significant percentage of the world's population. A multitude of factors, from the normal aging process to the progression of chronic disease, influence the nutrition needs of this very diverse group of people. Appropriate micronutrient intake is of particular importance but is often suboptimal. Here we review the available data regarding micronutrient needs and the consequences of deficiencies in the ever growing aged population.",
"title": ""
},
{
"docid": "9794653cc79a0835851fdc890e908823",
"text": "In 1988, Hickerson proved the celebrated “mock theta conjectures”, a collection of ten identities from Ramanujan’s “lost notebook” which express certain modular forms as linear combinations of mock theta functions. In the context of Maass forms, these identities arise from the peculiar phenomenon that two different harmonic Maass forms may have the same non-holomorphic parts. Using this perspective, we construct several infinite families of modular forms which are differences of mock theta functions.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "87ac799402c785e68db14636b0725523",
"text": "One of the challenges of creating applications from confederations of Internet-enabled things is the complexity of having to deal with spontaneously interacting and partially available heterogeneous devices. In this paper we describe the features of the MAGIC Broker 2 (MB2) a platform designed to offer a simple and consistent programming interface for collections of things. We report on the key abstractions offered by the platform and report on its use for developing two IoT applications involving spontaneous device interaction: 1) mobile phones and public displays, and 2) a web-based sensor actuator network portal called Sense Tecnic (STS). We discuss how the MB2 abstractions and implementation have evolved over time to the current design. Finally we present a preliminary performance evaluation and report qualitatively on the developers' experience of using our platform.",
"title": ""
},
{
"docid": "33cab0ec47af5e40d64e34f8ffc7dd6f",
"text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.",
"title": ""
},
{
"docid": "377e9bfebd979c25728fdede2af74335",
"text": "Youth Gangs: An Overview, the initial Bulletin in this series, brings together available knowledge on youth gangs by reviewing data and research. The author begins with a look at the history of youth gangs and their demographic characteristics. He then assesses the scope of the youth gang problem, including gang problems in juvenile detention and correctional facilities. A review of gang studies provides a clearer understanding of several issues. An extensive list of references is also included for further review.",
"title": ""
}
] |
scidocsrr
|
d44ed5c436ff5cec861c3e49d122fab2
|
Design space exploration of FPGA accelerators for convolutional neural networks
|
[
{
"docid": "5c8c391a10f32069849d743abc5e8210",
"text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.",
"title": ""
}
] |
[
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "8f29a231b801a018a6d18befc0d06d0b",
"text": "The paper introduces a deep learningbased Twitter hate-speech text classification system. The classifier assigns each tweet to one of four predefined categories: racism, sexism, both (racism and sexism) and non-hate-speech. Four Convolutional Neural Network models were trained on resp. character 4-grams, word vectors based on semantic information built using word2vec, randomly generated word vectors, and word vectors combined with character n-grams. The feature set was down-sized in the networks by maxpooling, and a softmax function used to classify tweets. Tested by 10-fold crossvalidation, the model based on word2vec embeddings performed best, with higher precision than recall, and a 78.3% F-score.",
"title": ""
},
{
"docid": "9b60816097ccdff7b1eec177aac0b9b8",
"text": "We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "5ea42460dc2bdd2ebc2037e35e01dca9",
"text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.",
"title": ""
},
{
"docid": "a9052b10f9750d58eb33b9e5d564ee6e",
"text": "Cyber Physical Systems (CPS) play significant role in shaping smart manufacturing systems. CPS integrate computation with physical processes where behaviors are represented in both cyber and physical parts of the system. In order to understand CPS in the context of smart manufacturing, an overview of CPS technologies, components, and relevant standards is presented. A detailed technical review of the existing engineering tools and practices from major control vendors has been conducted. Furthermore, potential research areas have been identified in order to enhance the tools functionalities and capabilities in supporting CPS development process.",
"title": ""
},
{
"docid": "a8f27679e13572d00d5eae3496cec014",
"text": "Today, we are forward to meeting an older people society in the world. The elderly people have become a high risk of dementia or depression. In recent years, with the rapid development of internet of things (IoT) techniques, it has become a feasible solution to build a system that combines IoT and cloud techniques for detecting and preventing the elderly dementia or depression. This paper proposes an IoT-based elderly behavioral difference warning system for early depression and dementia warning. The proposed system is composed of wearable smart glasses, a BLE-based indoor trilateration position, and a cloud-based service platform. As a result, the proposed system can not only reduce human and medical costs, but also improve the cure rate of depression or delay the deterioration of dementia.",
"title": ""
},
{
"docid": "2e4ac47cdc063d76089c17f30a379765",
"text": "Determination of the type and origin of the body fluids found at a crime scene can give important insights into crime scene reconstruction by supporting a link between sample donors and actual criminal acts. For more than a century, numerous types of body fluid identification methods have been developed, such as chemical tests, immunological tests, protein catalytic activity tests, spectroscopic methods and microscopy. However, these conventional body fluid identification methods are mostly presumptive, and are carried out for only one body fluid at a time. Therefore, the use of a molecular genetics-based approach using RNA profiling or DNA methylation detection has been recently proposed to supplant conventional body fluid identification methods. Several RNA markers and tDMRs (tissue-specific differentially methylated regions) which are specific to forensically relevant body fluids have been identified, and their specificities and sensitivities have been tested using various samples. In this review, we provide an overview of the present knowledge and the most recent developments in forensic body fluid identification and discuss its possible practical application to forensic casework.",
"title": ""
},
{
"docid": "05b4df16c35a89ee2a5b9ac482e0a297",
"text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.",
"title": ""
},
{
"docid": "e2c9c7c26436f0f7ef0067660b5f10b8",
"text": "The naive Bayesian classifier (NBC) is a simple yet very efficient classification technique in machine learning. But the unpractical condition independence assumption of NBC greatly degrades its performance. There are two primary ways to improve NBC's performance. One is to relax the condition independence assumption in NBC. This method improves NBC's accuracy by searching additional condition dependencies among attributes of the samples in a scope. It usually involves in very complex search algorithms. Another is to change the representation of the samples by creating new attributes from the original attributes, and construct NBC from these new attributes while keeping the condition independence assumption. Key problem of this method is to guarantee strong condition independencies among the new attributes. In the paper, a new means of making attribute set, which maps the original attributes to new attributes according to the information geometry and Fisher score, is presented, and then the FS-NBC on the new attributes is constructed. The condition dependence relation among the new attributes theoretically is discussed. We prove that these new attributes are condition independent of each other under certain conditions. The experimental results show that our method improves performance of NBC excellently",
"title": ""
},
{
"docid": "4816f221d67922009a308058139aa56b",
"text": "In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T ) bits of precision suffice to support a T step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus not in the class BPP. The class BQP of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.",
"title": ""
},
{
"docid": "a0d34b1c003b7e88c2871deaaba761ed",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
},
{
"docid": "df1ea45a4b20042abd99418ff6d1f44e",
"text": "This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates, or the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that falsely detected spikes corresponding to our method resemble actual spikes more than the false positives of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution.",
"title": ""
},
{
"docid": "da816b4a0aea96feceefe22a67c45be4",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
},
{
"docid": "3e727d70f141f52fb9c432afa3747ceb",
"text": "In this paper, we propose an improvement of Adversarial Transformation Networks(ATN) [1]to generate adversarial examples, which can fool white-box models and blackbox models with a state of the art performance and won the SECOND place in the non-target task in CAAD 2018. In this section, we first introduce the whole architecture about our method, then we present our improvement on loss functions to generate adversarial examples satisfying the L∞ norm restriction in the non-targeted attack problem. Then we illustrate how to use a robust-enhance module to make our adversarial examples more robust and have better transfer-ability. At last we will show our method on how to attack an ensemble of models.",
"title": ""
},
{
"docid": "a0d1d59fc987d90e500b3963ac11b2ad",
"text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fd171b73ea88d9b862149e1c1d72aea8",
"text": "Localization of people and devices is one of the main building blocks of context aware systems since the user position represents the core information for detecting user's activities, devices activations, proximity to points of interest, etc. While for outdoor scenarios Global Positioning System (GPS) constitutes a reliable and easily available technology, for indoor scenarios GPS is largely unavailable. In this paper we present a range-based indoor localization system that exploits the Received Signal Strength (RSS) of Bluetooth Low Energy (BLE) beacon packets broadcast by anchor nodes and received by a BLE-enabled device. The method used to infer the user's position is based on stigmergy. We exploit the stigmergic marking process to create an on-line probability map identifying the user's position in the indoor environment.",
"title": ""
},
{
"docid": "b959bce5ea9db71d677586eb1b6f023e",
"text": "We consider autonomous racing of two cars and present an approach to formulate the decision making as a non-cooperative non-zero-sum game. The game is formulated by restricting both players to fulfill static track constraints as well as collision constraints which depend on the combined actions of the two players. At the same time the players try to maximize their own progress. In the case where the action space of the players is finite, the racing game can be reformulated as a bimatrix game. For this bimatrix game, we show that the actions obtained by a sequential maximization approach where only the follower considers the action of the leader are identical to a Stackelberg and a Nash equilibrium in pure strategies. Furthermore, we propose a game promoting blocking, by additionally rewarding the leading car for staying ahead at the end of the horizon. We show that this changes the Stackelberg equilibrium, but has a minor influence on the Nash equilibria. For an online implementation, we propose to play the games in a moving horizon fashion, and we present two methods for guaranteeing feasibility of the resulting coupled repeated games. Finally, we study the performance of the proposed approaches in simulation for a set-up that replicates the miniature race car tested at the Automatic Control Laboratory of ETH Zürich. The simulation study shows that the presented games can successfully model different racing behaviors and generate interesting racing situations.",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1",
"title": ""
},
{
"docid": "bed29a89354c1dfcebbdde38d1addd1d",
"text": "Eosinophilic skin diseases, commonly termed as eosinophilic dermatoses, refer to a broad spectrum of skin diseases characterized by eosinophil infiltration and/or degranulation in skin lesions, with or without blood eosinophilia. The majority of eosinophilic dermatoses lie in the allergy-related group, including allergic drug eruption, urticaria, allergic contact dermatitis, atopic dermatitis, and eczema. Parasitic infestations, arthropod bites, and autoimmune blistering skin diseases such as bullous pemphigoid, are also common. Besides these, there are several rare types of eosinophilic dermatoses with unknown origin, in which eosinophil infiltration is a central component and affects specific tissue layers or adnexal structures of the skin, such as the dermis, subcutaneous fat, fascia, follicles, and cutaneous vessels. Some typical examples are eosinophilic cellulitis, granuloma faciale, eosinophilic pustular folliculitis, recurrent cutaneous eosinophilic vasculitis, and eosinophilic fasciitis. Although tissue eosinophilia is a common feature shared by these disorders, their clinical and pathological properties differ dramatically. Among these rare entities, eosinophilic pustular folliculitis may be associated with human immunodeficiency virus (HIV) infection or malignancies, and some other diseases, like eosinophilic fasciitis and eosinophilic cellulitis, may be associated with an underlying hematological disorder, while others are considered idiopathic. However, for most of these rare eosinophilic dermatoses, the causes and the pathogenic mechanisms remain largely unknown, and systemic, high-quality clinical investigations are needed for advances in better strategies for clinical diagnosis and treatment. Here, we present a comprehensive review on the etiology, pathogenesis, clinical features, and management of these rare entities, with an emphasis on recent advances and current consensus.",
"title": ""
}
] |
scidocsrr
|
aefff8b42a9a99977c326fb52e70fbaf
|
A Novel Association Rule Mining Method of Big Data for Power Transformers State Parameters Based on Probabilistic Graph Model
|
[
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
}
] |
[
{
"docid": "49f4fd5bcb184e64a9874b864979eb79",
"text": "A major research goal for compilers and environments is the automatic derivation of tools from formal specifications. However, the formal model of the language is often inadequate; in particular, LR(k) grammars are unable to describe the natural syntax of many languages, such as C++ and Fortran, which are inherently non-deterministic. Designers of batch compilers work around such limitations by combining generated components with ad hoc techniques (for instance, performing partial type and scope analysis in tandem with parsing). Unfortunately, the complexity of incremental systems precludes the use of batch solutions. The inability to generate incremental tools for important languages inhibits the widespread use of language-rich interactive environments.We address this problem by extending the language model itself, introducing a program representation based on parse dags that is suitable for both batch and incremental analysis. Ambiguities unresolved by one stage are retained in this representation until further stages can complete the analysis, even if the reaolution depends on further actions by the user. Representing ambiguity explicitly increases the number and variety of languages that can be analyzed incrementally using existing methods.To create this representation, we have developed an efficient incremental parser for general context-free grammars. Our algorithm combines Tomita's generalized LR parser with reuse of entire subtrees via state-matching. Disambiguation can occur statically, during or after parsing, or during semantic analysis (using existing incremental techniques); program errors that preclude disambiguation retsin multiple interpretations indefinitely. Our representation and analyses gain efficiency by exploiting the local nature of ambiguities: for the SPEC95 C programs, the explicit representation of ambiguity requires only 0.5% additional space and less than 1% additional time during reconstruction.",
"title": ""
},
{
"docid": "ad59ca3f7c945142baf9353eeb68e504",
"text": "This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm’s assets. Informational problems lead to inevitable inefficiencies imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design on firm operations and asset prices.",
"title": ""
},
{
"docid": "63e58ac7e6f3b4a463e8f8182fee9be5",
"text": "In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-toend speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable “labels” they generate can be used to control synthesis in novel ways, such as varying speed and speaking style – independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.",
"title": ""
},
{
"docid": "3ea5607d04419aae36592b6dcce25304",
"text": "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.",
"title": ""
},
{
"docid": "298df39e9b415bc1eed95ed56d3f32df",
"text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.",
"title": ""
},
{
"docid": "2e1cb87045b5356a965aa52e9e745392",
"text": "Community detection is a common problem in graph data analytics that consists of finding groups of densely connected nodes with few connections to nodes outside of the group. In particular, identifying communities in large-scale networks is an important task in many scientific domains. In this review, we evaluated eight state-of-the-art and five traditional algorithms for overlapping and disjoint community detection on large-scale real-world networks with known ground-truth communities. These 13 algorithms were empirically compared using goodness metrics that measure the structural properties of the identified communities, as well as performance metrics that evaluate these communities against the ground-truth. Our results show that these two types of metrics are not equivalent. That is, an algorithm may perform well in terms of goodness metrics, but poorly in terms of performance metrics, or vice versa. © 2014 The Authors. WIREs Computational Statistics published by Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "4ef36c602963036f928b9dcb75592f78",
"text": "Health care-associated infections constitute one of the greatest challenges of modern medicine. Despite compelling evidence that proper hand washing can reduce the transmission of pathogens to patients and the spread of antimicrobial resistance, the adherence of health care workers to recommended hand-hygiene practices has remained unacceptably low. One of the key elements in improving hand-hygiene practice is the use of an alcohol-based hand rub instead of washing with soap and water. An alcohol-based hand rub requires less time, is microbiologically more effective, and is less irritating to skin than traditional hand washing with soap and water. Therefore, alcohol-based hand rubs should replace hand washing as the standard for hand hygiene in health care settings in all situations in which the hands are not visibly soiled. It is also important to change gloves between each patient contact and to use hand-hygiene procedures after glove removal. Reducing health care-associated infections requires that health care workers take responsibility for ensuring that hand hygiene becomes an everyday part of patient care.",
"title": ""
},
{
"docid": "ef3b9dd6b463940bc57cdf7605c24b1e",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "3a757d129c52b5c07c514d613795afce",
"text": "Camera motion estimation is useful for a range of applications. Usually, feature tracking is performed through the sequence of images to determine correspondences. Furthermore, robust statistical techniques are normally used to handle large number of outliers in correspondences. This paper proposes a new method that avoids both. Motion is calculated between two consecutive stereo images without any pre-knowledge or prediction about feature location or the possibly large camera movement. This permits a lower frame rate and almost arbitrary movements. Euclidean constraints are used to incrementally select inliers from a set of initial correspondences, instead of using robust statistics that has to handle all inliers and outliers together. These constraints are so strong that the set of initial correspondences can contain several times more outliers than inliers. Experiments on a worst-case stereo sequence show that the method is robust, accurate and can be used in real-time.",
"title": ""
},
{
"docid": "d026ebfc24e3e48d0ddb373f71d63162",
"text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beatfrequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation. The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.",
"title": ""
},
{
"docid": "e0a8035f9e61c78a482f2e237f7422c6",
"text": "Aims: This paper introduces how substantial decision-making and leadership styles relates with each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: Qualitative research approach was adopted in this study. A semi structure interview was use to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University",
"title": ""
},
{
"docid": "4872da79e7d01e8bb2a70ab17c523118",
"text": "In recent years, social media has become a customer touch-point for the business functions of marketing, sales and customer service. We aim to show that intention analysis might be useful to these business functions and that it can be performed effectively on short texts (at the granularity level of a single sentence). We demonstrate a scheme of categorization of intentions that is amenable to automation using simple machine learning techniques that are language-independent. We discuss the grounding that this scheme of categorization has in speech act theory. In the demonstration we go over a number of usage scenarios in an attempt to show that the use of automatic intention detection tools would benefit the business functions of sales, marketing and service. We also show that social media can be used not just to convey pleasure or displeasure (that is, to express sentiment) but also to discuss personal needs and to report problems (to express intentions). We evaluate methods for automatically discovering intentions in text, and establish that it is possible to perform intention analysis on social media with an accuracy of 66.97%± 0.10%.",
"title": ""
},
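As a rough sketch of the kind of simple, largely language-independent machinery the passage alludes to, the snippet below classifies short sentences into intention categories using character n-gram features and logistic regression. The categories, the toy training sentences, and the feature choice are assumptions for illustration only, not the authors' scheme or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set of (sentence, intention label) pairs.
train = [
    ("I want to buy a new phone next week", "purchase_intent"),
    ("Looking to upgrade my laptop soon", "purchase_intent"),
    ("My order arrived broken, please help", "problem_report"),
    ("The app keeps crashing on startup", "problem_report"),
    ("Can anyone recommend a good camera?", "seek_recommendation"),
    ("Which headphones should I get?", "seek_recommendation"),
]
texts, labels = zip(*train)

# Character n-grams keep the pipeline largely language-independent.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

print(clf.predict(["thinking about getting a new tablet",
                   "the screen went black and won't turn on"]))
```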
{
"docid": "ce12e1d38a2757c621a50209db5ce008",
"text": "Schloss Reisensburg. Physica-Verlag, 1994. Summary Traditional tests of the accuracy of statistical software have been based on a few limited paradigms for ordinary least squares regression. Test suites based on these criteria served the statistical computing community well when software was limited to a few simple procedures. Recent developments in statistical computing require both more and less sophisticated measures, however. We need tests for a broader variety of procedures and ones which are more likely to reveal incompetent programming. This paper summarizes these issues.",
"title": ""
},
{
"docid": "04b7d1197e9e5d78e948e0c30cbdfcfe",
"text": "Context: Software development depends significantly on team performance, as does any process that involves human interaction. Objective: Most current development methods argue that teams should self-manage. Our objective is thus to provide a better understanding of the nature of self-managing agile teams, and the teamwork challenges that arise when introducing such teams. Method: We conducted extensive fieldwork for 9 months in a software development company that introduced Scrum. We focused on the human sensemaking, on how mechanisms of teamwork were understood by the people involved. Results: We describe a project through Dickinson and McIntyre’s teamwork model, focusing on the interrelations between essential teamwork components. Problems with team orientation, team leadership and coordination in addition to highly specialized skills and corresponding division of work were important barriers for achieving team effectiveness. Conclusion: Transitioning from individual work to self-managing teams requires a reorientation not only by developers but also by management. This transition takes time and resources, but should not be neglected. In addition to Dickinson and McIntyre’s teamwork components, we found trust and shared mental models to be of fundamental importance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "29cbdeb95a221820a6425e1249763078",
"text": "The concept of “Industry 4.0” that covers the topics of Internet of Things, cyber-physical system, and smart manufacturing, is a result of increasing demand of mass customized manufacturing. In this paper, a smart manufacturing framework of Industry 4.0 is presented. In the proposed framework, the shop-floor entities (machines, conveyers, etc.), the smart products and the cloud can communicate and negotiate interactively through networks. The shop-floor entities can be considered as agents based on the theory of multi-agent system. These agents implement dynamic reconfiguration in a collaborative manner to achieve agility and flexibility. However, without global coordination, problems such as load-unbalance and inefficiency may occur due to different abilities and performances of agents. Therefore, the intelligent evaluation and control algorithms are proposed to reduce the load-unbalance with the assistance of big data feedback. The experimental results indicate that the presented algorithms can easily be deployed in smart manufacturing system and can improve both load-balance and efficiency.",
"title": ""
},
{
"docid": "ff5d2e3b2c2e5200f70f2644bbc521d6",
"text": "The idea that the conceptual system draws on sensory and motor systems has received considerable experimental support in recent years. Whether the tight coupling between sensory-motor and conceptual systems is modulated by factors such as context or task demands is a matter of controversy. Here, we tested the context sensitivity of this coupling by using action verbs in three different types of sentences in an fMRI study: literal action, apt but non-idiomatic action metaphors, and action idioms. Abstract sentences served as a baseline. The result showed involvement of sensory-motor areas for literal and metaphoric action sentences, but not for idiomatic ones. A trend of increasing sensory-motor activation from abstract to idiomatic to metaphoric to literal sentences was seen. These results support a gradual abstraction process whereby the reliance on sensory-motor systems is reduced as the abstractness of meaning as well as conventionalization is increased, highlighting the context sensitive nature of semantic processing.",
"title": ""
},
{
"docid": "1dcc48994fada1b46f7b294e08f2ed5d",
"text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. This allows flexibility, less computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specified integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.",
"title": ""
},
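To make the two-stage correction idea concrete, here is a heavily simplified sketch: gyroscope integration of Euler angles, a first correction from the accelerometer (gravity fixes roll and pitch) and a second from the magnetic compass (heading fixes yaw). It is a complementary-filter-style approximation, not the quaternion extended Kalman filter of the ASIP described above; the gains, sample rate, and sensor values are made up.

```python
import numpy as np

def two_stage_update(angles, gyro, acc, mag_heading, dt, k_acc=0.05, k_mag=0.05):
    """angles = (roll, pitch, yaw) in radians; gyro in rad/s (body rates approximated as Euler rates)."""
    roll, pitch, yaw = angles + gyro * dt            # prediction from gyroscope integration

    # Stage 1: accelerometer gives an absolute reference for roll and pitch (gravity direction).
    ax, ay, az = acc
    roll_acc = np.arctan2(ay, az)
    pitch_acc = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    roll += k_acc * (roll_acc - roll)
    pitch += k_acc * (pitch_acc - pitch)

    # Stage 2: magnetic compass gives an absolute reference for yaw (wrapped heading error).
    yaw += k_mag * ((mag_heading - yaw + np.pi) % (2 * np.pi) - np.pi)
    return np.array([roll, pitch, yaw])

# Made-up sensor stream: stationary device, slightly rolled, heading 30 degrees.
angles = np.zeros(3)
for _ in range(200):
    gyro = np.random.normal(0, 0.01, 3)              # rad/s, noise only
    acc = np.array([0.0, 0.17, 0.98])                # roughly 10 degrees of roll
    angles = two_stage_update(angles, gyro, acc, np.deg2rad(30.0), dt=0.01)

print("roll/pitch/yaw (deg):", np.round(np.degrees(angles), 1))
```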
{
"docid": "cf5205e3b27867324ef86f18083653de",
"text": "Sometimes, in order to properly restore teeth, surgical intervention in the form of a crown-lengthening procedure is required. Crown lengthening is a periodontal resective procedure, aimed at removing supporting periodontal structures to gain sound tooth structure above the alveolar crest level. Periodontal health is of paramount importance for all teeth, both sound and restored. For the restorative dentist to utilize crown lengthening, it is important to understand the concept of biologic width, indications, techniques and other principles. This article reviews these basic concepts of clinical crown lengthening and presents four clinical cases utilizing crown lengthening as an integral part of treatments, to restore teeth and their surrounding tissues to health.",
"title": ""
},
{
"docid": "fb87648c3bb77b1d9b162a8e9dbc5e86",
"text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"title": ""
},
{
"docid": "be0f836ec6431b74342b670921ac41f7",
"text": "This paper addresses the issue of expert finding in a social network. The task of expert finding, as one of the most important research issues in social networks, is aimed at identifying persons with relevant expertise or experience for a given topic. In this paper, we propose a propagation-based approach that takes into consideration of both person local information and network information (e.g. relationships between persons). Experimental results show that our approach can outperform the baseline approach.",
"title": ""
}
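A minimal sketch of the propagation idea described in the last passage above: start from each person's local, content-based relevance to the topic and let scores flow along the social links, so that people connected to relevant persons get boosted. The toy graph, the local scores, the damping factor, and the iteration count are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

# Toy social network: symmetric, unweighted adjacency matrix over 5 persons.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Local expertise scores for the query topic (e.g., from matching personal documents).
local = np.array([0.9, 0.1, 0.4, 0.0, 0.0])

# Column-normalize so each person distributes its score to its neighbours.
W = A / np.maximum(A.sum(axis=0), 1.0)

alpha = 0.5                      # balance between local evidence and propagated evidence
scores = local.copy()
for _ in range(50):              # iterate until (approximately) stable
    scores = (1 - alpha) * local + alpha * W @ scores

print("scores:", np.round(scores, 3))
print("experts ranked:", np.argsort(-scores).tolist())
```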
] |
scidocsrr
|
01c3e01d851d2eea8a3d24dcf1cc9afa
|
New prototype of hybrid 3D-biometric facial recognition system
|
[
{
"docid": "573f12acd3193045104c7d95bbc89f78",
"text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.",
"title": ""
}
] |
[
{
"docid": "ac29d60761976a263629a93167516fde",
"text": "Abstruct1-V power supply high-speed low-power digital circuit technology with 0.5-pm multithreshold-voltage CMOS (MTCMOS) is proposed. This technology features both lowthreshold voltage and high-threshold voltage MOSFET’s in a single LSI. The low-threshold voltage MOSFET’s enhance speed Performance at a low supply voltage of 1 V or less, while the high-threshold voltage MOSFET’s suppress the stand-by leakage current during the sleep period. This technology has brought about logic gate characteristics of a 1.7-11s propagation delay time and 0.3-pW/MHz/gate power dissipation with a standard load. In addition, an MTCMOS standard cell library has been developed so that conventional CAD tools can be used to lay out low-voltage LSI’s. To demonstrate MTCMOS’s effectiveness, a PLL LSI based on standard cells was designed as a carrying vehicle. 18-MHz operation at 1 V was achieved using a 0.5-pm CMOS process.",
"title": ""
},
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
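As a concrete illustration of the application mentioned at the end of that abstract, the sketch below removes 50 Hz mains interference from a synthetic ECG-like signal with a digital notch filter. It uses SciPy's standard IIR notch design rather than the paper's integer-multiplier linear-phase recursive filters, so the filter structure, sampling rate, and signal are assumptions for illustration, not the authors' design.

```python
import numpy as np
from scipy import signal

fs = 500.0                                    # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "ECG": slow components plus 50 Hz mains interference.
ecg = 1.0 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
noisy = ecg + 0.5 * np.sin(2 * np.pi * 50.0 * t)

# Second-order IIR notch centred at 50 Hz with quality factor 30.
b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
cleaned = signal.filtfilt(b, a, noisy)        # zero-phase filtering

print("RMS error before filtering:", round(float(np.sqrt(np.mean((noisy - ecg) ** 2))), 3))
print("RMS error after filtering: ", round(float(np.sqrt(np.mean((cleaned - ecg) ** 2))), 3))
```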
{
"docid": "5d21df36697616719bcc3e0ee22a08bd",
"text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics",
"title": ""
},
{
"docid": "4c12d10fd9c2a12e56b56f62f99333f3",
"text": "The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders.",
"title": ""
},
{
"docid": "705b2a837b51ac5e354e1ec0df64a52a",
"text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).",
"title": ""
},
{
"docid": "2549177f9367d5641a7fc4dfcfaf5c0a",
"text": "Educational data mining is an emerging trend, concerned with developing methods for exploring the huge data that come from the educational system. This data is used to derive the knowledge which is useful in decision making. EDM methods are useful to measure the performance of students, assessment of students and study students’ behavior etc. In recent years, Educational data mining has proven to be more successful at many of the educational statistics problems due to enormous computing power and data mining algorithms. This paper surveys the history and applications of data mining techniques in the educational field. The objective is to introduce data mining to traditional educational system, web-based educational system, intelligent tutoring system, and e-learning. This paper describes how to apply the main data mining methods such as prediction, classification, relationship mining, clustering, and",
"title": ""
},
{
"docid": "9b7ca6e8b7bf87ef61e70ab4c720ec40",
"text": "The support vector machine (SVM) is a widely used tool in classification problems. The SVM trains a classifier by solving an optimization problem to decide which instances of the training data set are support vectors, which are the necessarily informative instances to form the SVM classifier. Since support vectors are intact tuples taken from the training data set, releasing the SVM classifier for public use or shipping the SVM classifier to clients will disclose the private content of support vectors. This violates the privacy-preserving requirements for some legal or commercial reasons. The problem is that the classifier learned by the SVM inherently violates the privacy. This privacy violation problem will restrict the applicability of the SVM. To the best of our knowledge, there has not been work extending the notion of privacy preservation to tackle this inherent privacy violation problem of the SVM classifier. In this paper, we exploit this privacy violation problem, and propose an approach to postprocess the SVM classifier to transform it to a privacy-preserving classifier which does not disclose the private content of support vectors. The postprocessed SVM classifier without exposing the private content of training data is called Privacy-Preserving SVM Classifier (abbreviated as PPSVC). The PPSVC is designed for the commonly used Gaussian kernel function. It precisely approximates the decision function of the Gaussian kernel SVM classifier without exposing the sensitive attribute values possessed by support vectors. By applying the PPSVC, the SVM classifier is able to be publicly released while preserving privacy. We prove that the PPSVC is robust against adversarial attacks. The experiments on real data sets show that the classification accuracy of the PPSVC is comparable to the original SVM classifier.",
"title": ""
},
{
"docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97",
"text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.",
"title": ""
},
{
"docid": "641811eac0e8a078cf54130c35fd6511",
"text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-tosequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-toset framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.",
"title": ""
},
{
"docid": "23bf81699add38814461d5ac3e6e33db",
"text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.",
"title": ""
},
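A minimal sketch of the ensemble idea described above: extract a few simple time-domain statistics from steering-angle windows and average the probability outputs of several classifiers. The synthetic steering signals, the particular features, and the choice of classifiers are illustrative assumptions, not the study's 1251-feature, three-domain setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_window(fatigued):
    """Fake steering-angle window: fatigue = slow drifting plus sharp corrective jerks."""
    t = np.linspace(0, 10, 500)
    angle = 0.1 * np.sin(0.5 * t) + rng.normal(0, 0.02, t.size)
    if fatigued:
        angle += 0.3 * np.sin(0.1 * t)                     # slow drifting
        jerk_idx = rng.choice(t.size, 5, replace=False)
        angle[jerk_idx] += rng.normal(0, 0.5, 5)           # fast corrective counter-steering
    return angle

def features(window):
    d = np.diff(window)
    return [window.std(), np.abs(d).mean(), np.abs(d).max(), (np.abs(d) > 0.1).mean()]

y = rng.integers(0, 2, 400)
X = np.array([features(synthetic_window(bool(label))) for label in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Simple ensemble: average the class-1 probabilities of three different classifiers.
members = [RandomForestClassifier(random_state=0),
           KNeighborsClassifier(),
           SVC(probability=True, random_state=0)]
probs = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in members], axis=0)
print("ensemble accuracy:", round(accuracy_score(y_te, probs > 0.5), 3))
```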
{
"docid": "f6dd10d4b400234a28b221d0527e71c0",
"text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English– Romanian.",
"title": ""
},
{
"docid": "6fad371eecbb734c1e54b8fb9ae218c4",
"text": "Quantitative Susceptibility Mapping (QSM) is a novel MRI based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. Beginning with the fundamental relation between MRI signal and tissue magnetic susceptibility this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state of the art QSM solution strategies.",
"title": ""
},
{
"docid": "13bd6515467934ba7855f981fd4f1efd",
"text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "c0a75bf3a2d594fb87deb7b9f58a8080",
"text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "9f9719336bf6497d7c71590ac61a433b",
"text": "College and universities are increasingly using part-time, adjunct instructors on their faculties to facilitate greater fiscal flexibility. However, critics argue that the use of adjuncts is causing the quality of higher education to deteriorate. This paper addresses questions about the impact of adjuncts on student outcomes. Using a unique dataset of public four-year colleges in Ohio, we quantify how having adjunct instructors affects student persistence after the first year. Because students taking courses from adjuncts differ systematically from other students, we use an instrumental variable strategy to address concerns about biases. The findings suggest that, in general, students taking an \"adjunct-heavy\" course schedule in their first semester are adversely affected. They are less likely to persist into their second year. We reconcile these findings with previous research that shows that adjuncts may encourage greater student interest in terms of major choice and subsequent enrollments in some disciplines, most notably fields tied closely to specific professions. The authors are grateful for helpful suggestions from Ronald Ehrenberg and seminar participants at the NBER Labor Studies Meetings. The authors also thank the Ohio Board of Regents for their support during this research project. Rod Chu, Darrell Glenn, Robert Sheehan, and Andy Lechler provided invaluable access and help with the data. Amanda Starc, James Carlson, Erin Riley, and Suzan Akin provided excellent research assistance. All opinions and mistakes are our own. The authors worked equally on the project and are listed alphabetically.",
"title": ""
},
{
"docid": "115fb4dcd7d5a1240691e430cd107dce",
"text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.",
"title": ""
}
] |
scidocsrr
|
d956c35ab4e217a8c4517f565197d4a9
|
Pressure ulcer prevention and healing using alternating pressure mattress at home: the PARESTRY project.
|
[
{
"docid": "511c90eadbbd4129fdf3ee9e9b2187d3",
"text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.",
"title": ""
},
{
"docid": "df5c384e9fb6ba57a5bbd7fef44ce5f0",
"text": "CONTEXT\nPressure ulcers are common in a variety of patient settings and are associated with adverse health outcomes and high treatment costs.\n\n\nOBJECTIVE\nTo systematically review the evidence examining interventions to prevent pressure ulcers.\n\n\nDATA SOURCES AND STUDY SELECTION\nMEDLINE, EMBASE, and CINAHL (from inception through June 2006) and Cochrane databases (through issue 1, 2006) were searched to identify relevant randomized controlled trials (RCTs). UMI Proquest Digital Dissertations, ISI Web of Science, and Cambridge Scientific Abstracts were also searched. All searches used the terms pressure ulcer, pressure sore, decubitus, bedsore, prevention, prophylactic, reduction, randomized, and clinical trials. Bibliographies of identified articles were further reviewed.\n\n\nDATA SYNTHESIS\nFifty-nine RCTs were selected. Interventions assessed in these studies were grouped into 3 categories, ie, those addressing impairments in mobility, nutrition, or skin health. Methodological quality for the RCTs was variable and generally suboptimal. Effective strategies that addressed impaired mobility included the use of support surfaces, mattress overlays on operating tables, and specialized foam and specialized sheepskin overlays. While repositioning is a mainstay in most pressure ulcer prevention protocols, there is insufficient evidence to recommend specific turning regimens for patients with impaired mobility. In patients with nutritional impairments, dietary supplements may be beneficial. The incremental benefit of specific topical agents over simple moisturizers for patients with impaired skin health is unclear.\n\n\nCONCLUSIONS\nGiven current evidence, using support surfaces, repositioning the patient, optimizing nutritional status, and moisturizing sacral skin are appropriate strategies to prevent pressure ulcers. Although a number of RCTs have evaluated preventive strategies for pressure ulcers, many of them had important methodological limitations. There is a need for well-designed RCTs that follow standard criteria for reporting nonpharmacological interventions and that provide data on cost-effectiveness for these interventions.",
"title": ""
}
] |
[
{
"docid": "0e60cb8f9147f5334c3cfca2880c2241",
"text": "The quest for automatic Programming is the holy grail of artificial intelligence. The dream of having computer programs write other useful computer programs has haunted researchers since the nineteen fifties. In Genetic Progvamming III Darwinian Invention and Problem Solving (GP?) by John R. Koza, Forest H. Bennet 111, David Andre, and Martin A. Keane, the authors claim that the first inscription on this trophy should be the name Genetic Programming (GP). GP is about applying evolutionary algorithms to search the space of computer programs. The authors paraphrase Arthur Samuel of 1959 and argue that with this method it is possible to tell the computer what to do without telling it explicitly how t o do it.",
"title": ""
},
{
"docid": "9001f640ae3340586f809ab801f78ec0",
"text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.",
"title": ""
},
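A minimal sketch of the core step described above: apply Otsu's method to the reflective-intensity channel of road-surface LIDAR points to split them into asphalt and road-marking classes. The synthetic intensity values and this histogram-based Otsu implementation are illustrative assumptions; the full detector (ground filtering, integration with curb maps and Monte Carlo localization) is omitted.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes the between-class variance of a 1-D sample."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                                  # class probability below each threshold
    w1 = 1.0 - w0
    cum_mean = np.cumsum(hist * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)                # mean of the lower class
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)  # mean of the upper class
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic road-surface intensities: dark asphalt plus a smaller, brighter marking population.
rng = np.random.default_rng(0)
intensity = np.concatenate([rng.normal(30, 8, 9000),      # asphalt returns
                            rng.normal(120, 15, 1000)])   # painted road-marking returns
thr = otsu_threshold(intensity)
marking = intensity > thr
print(f"threshold = {thr:.1f}, points classified as road marking: {marking.sum()}")
```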
{
"docid": "6a15a0a0b9b8abc0e66fa9702cc3a573",
"text": "Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability of effectively encoding a diversity of semantic relations and connectivity patterns. In this work, we propose entity2rec, a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations and we learn property-specific vector representations of users and items applying neural language models on the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming a number of state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores on the overall ranking quality.",
"title": ""
},
{
"docid": "dae877409dca88fc6fed5cf6536e65ad",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "a5f17126a90b45921f70439ff96a0091",
"text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"title": ""
},
{
"docid": "4cdef79370abcd380357c8be92253fa5",
"text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.",
"title": ""
},
{
"docid": "cc90d1ac6aa63532282568f66ecd25fd",
"text": "Melphalan has been used in the treatment of various hematologic malignancies for almost 60 years. Today it is part of standard therapy for multiple myeloma and also as part of myeloablative regimens in association with autologous allogenic stem cell transplantation. Melflufen (melphalan flufenamide ethyl ester, previously called J1) is an optimized derivative of melphalan providing targeted delivery of active metabolites to cells expressing aminopeptidases. The activity of melflufen has compared favorably with that of melphalan in a series of in vitro and in vivo experiments performed preferentially on different solid tumor models and multiple myeloma. Melflufen is currently being evaluated in a clinical phase I/II trial in relapsed or relapsed and refractory multiple myeloma. Cytotoxicity of melflufen was assayed in lymphoma cell lines and in primary tumor cells with the Fluorometric Microculture Cytotoxicity Assay and cell cycle analyses was performed in two of the cell lines. Melflufen was also investigated in a xenograft model with subcutaneous lymphoma cells inoculated in mice. Melflufen showed activity with cytotoxic IC50-values in the submicromolar range (0.011-0.92 μM) in the cell lines, corresponding to a mean of 49-fold superiority (p < 0.001) in potency vs. melphalan. In the primary cultures melflufen yielded slightly lower IC50-values (2.7 nM to 0.55 μM) and an increased ratio vs. melphalan (range 13–455, average 108, p < 0.001). Treated cell lines exhibited a clear accumulation in the G2/M-phase of the cell cycle. Melflufen also showed significant activity and no, or minimal side effects in the xenografted animals. This study confirms previous reports of a targeting related potency superiority of melflufen compared to that of melphalan. Melflufen was active in cell lines and primary cultures of lymphoma cells, as well as in a xenograft model in mice and appears to be a candidate for further evaluation in the treatment of this group of malignant diseases.",
"title": ""
},
{
"docid": "b3f5176f49b467413d172134b1734ed8",
"text": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.",
"title": ""
},
{
"docid": "1768ecf6a2d8a42ea701d7f242edb472",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
},
{
"docid": "be9971903bf3d754ed18cc89cf254bd1",
"text": "This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.",
"title": ""
},
{
"docid": "43233e45f07b80b8367ac1561356888d",
"text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.",
"title": ""
},
{
"docid": "65b2d6ea5e1089c52378b4fd6386224c",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "ffa5ae359807884c2218b92d2db2a584",
"text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.",
"title": ""
},
{
"docid": "9bce495ed14617fe05086f06be8279e0",
"text": "In previous chapters we reviewed Bayesian neural networks (BNNs) and historical techniques for approximate inference in these, as well as more recent approaches. We discussed the advantages and disadvantages of different techniques, examining their practicality. This, perhaps, is the most important aspect of modern techniques for approximate inference in BNNs. The field of deep learning is pushed forward by practitioners, working on real-world problems. Techniques which cannot scale to complex models with potentially millions of parameters, scale well with large amounts of data, need well studied models to be radically changed, or are not accessible to engineers, will simply perish. In this chapter we will develop on the strand of work of [Graves, 2011; Hinton and Van Camp, 1993], but will do so from the Bayesian perspective rather than the information theory one. Developing Bayesian approaches to deep learning, we will tie approximate BNN inference together with deep learning stochastic regularisation techniques (SRTs) such as dropout. These regularisation techniques are used in many modern deep learning tools, allowing us to offer a practical inference technique. We will start by reviewing in detail the tools used by [Graves, 2011]. We extend on these with recent research, commenting and analysing the variance of several stochastic estimators in variational inference (VI). Following that we will tie these derivations to SRTs, and propose practical techniques to obtain model uncertainty, even from existing models. We finish the chapter by developing specific examples for image based models (CNNs) and sequence based models (RNNs). These will be demonstrated in chapter 5, where we will survey recent research making use of the suggested tools in real-world problems.",
"title": ""
},
{
"docid": "87b67f9ed23c27a71b6597c94ccd6147",
"text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"title": ""
},
{
"docid": "56ff9c1be08569b6a881b070b0173797",
"text": "This paper examines a set of commercially representative embedded programs and compares them to an existing benchmark suite, SPEC2000. A new version of SimpleScalar that has been adapted to the ARM instruction set is used to characterize the performance of the benchmarks using configurations similar to current and next generation embedded processors. Several characteristics distinguish the representative embedded programs from the existing SPEC benchmarks including instruction distribution, memory behavior, and available parallelism. The embedded benchmarks, called MiBench, are freely available to all researchers.",
"title": ""
},
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
},
{
"docid": "1ff5526e4a18c1e59b63a3de17101b11",
"text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
},
{
"docid": "8cb5659bdbe9d376e2a3b0147264d664",
"text": "Group brainstorming is widely adopted as a design method in the domain of software development. However, existing brainstorming literature has consistently proven group brainstorming to be ineffective under the controlled laboratory settings. Yet, electronic brainstorming systems informed by the results of these prior laboratory studies have failed to gain adoption in the field because of the lack of support for group well-being and member support. Therefore, there is a need to better understand brainstorming in the field. In this work, we seek to understand why and how brainstorming is actually practiced, rather than how brainstorming practices deviate from formal brainstorming rules, by observing brainstorming meetings at Microsoft. The results of this work show that, contrary to the conventional brainstorming practices, software teams at Microsoft engage heavily in the constraint discovery process in their brainstorming meetings. We identified two types of constraints that occur in brainstorming meetings. Functional constraints are requirements and criteria that define the idea space, whereas practical constraints are limitations that prioritize the proposed solutions.",
"title": ""
}
] |
scidocsrr
|
e982cf99edeaf681206fcf5daaff79f7
|
Lip reading using a dynamic feature of lip images and convolutional neural networks
|
[
{
"docid": "d5c4e44514186fa1d82545a107e87c94",
"text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.",
"title": ""
}
] |
[
{
"docid": "adb02577e7fba530c2406fbf53571d14",
"text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.",
"title": ""
},
{
"docid": "720a3d65af4905cbffe74ab21d21dd3f",
"text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "e85b5115a489835bc58a48eaa727447a",
"text": "State-of-the art machine learning methods such as deep learning rely on large sets of hand-labeled training data. Collecting training data is prohibitively slow and expensive, especially when technical domain expertise is required; even the largest technology companies struggle with this challenge. We address this critical bottleneck with Snorkel, a new system for quickly creating, managing, and modeling training sets. Snorkel enables users to generate large volumes of training data by writing labeling functions, which are simple functions that express heuristics and other weak supervision strategies. These user-authored labeling functions may have low accuracies and may overlap and conflict, but Snorkel automatically learns their accuracies and synthesizes their output labels. Experiments and theory show that surprisingly, by modeling the labeling process in this way, we can train high-accuracy machine learning models even using potentially lower-accuracy inputs. Snorkel is currently used in production at top technology and consulting companies, and used by researchers to extract information from electronic health records, after-action combat reports, and the scientific literature. In this demonstration, we focus on the challenging task of information extraction, a common application of Snorkel in practice. Using the task of extracting corporate employment relationships from news articles, we will demonstrate and build intuition for a radically different way of developing machine learning systems which allows us to effectively bypass the bottleneck of hand-labeling training data.",
"title": ""
},
{
"docid": "4eec5be6b29425e025f9e1b23b742639",
"text": "There is increasing interest in sharing the experience of products and services on the web platform, and social media has opened a way for product and service providers to understand their consumers needs and expectations. This paper explores reviews by cloud consumers that reflect consumers experiences with cloud services. The reviews of around 6,000 cloud service users were analysed using sentiment analysis to identify the attitude of each review, and to determine whether the opinion expressed was positive, negative, or neutral. The analysis used two data mining tools, KNIME and RapidMiner, and the results were compared. We developed four prediction models in this study to predict the sentiment of users reviews. The proposed model is based on four supervised machine learning algorithms: K-Nearest Neighbour (k-NN), Nave Bayes, Random Tree, and Random Forest. The results show that the Random Forest predictions achieve 97.06% accuracy, which makes this model a better prediction model than the other three.",
"title": ""
},
{
"docid": "b988525d515588da8becc18c2aa21e82",
"text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.",
"title": ""
},
{
"docid": "73d3f51bdb913749665674ae8aea3a41",
"text": "Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state that can be robust to facial expression variations among different users is the topic of this paper. Facial animation parameters (FAPs) defined according to the ISO MPEG-4 standard are extracted by a robust facial analysis system, accompanied by appropriate confidence measures of the estimation accuracy. A novel neurofuzzy system is then created, based on rules that have been defined through analysis of FAP variations both at the discrete emotional space, as well as in the 2D continuous activation-evaluation one. The neurofuzzy system allows for further learning and adaptation to specific users' facial expression characteristics, measured though FAP estimation in real life application of the system, using analysis by clustering of the obtained FAP values. Experimental studies with emotionally expressive datasets, generated in the EC IST ERMIS project indicate the good performance and potential of the developed technologies.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "fc09e1c012016c75418ec33dfe5868d5",
"text": "Big data is the word used to describe structured and unstructured data. The term big data is originated from the web search companies who had to query loosely structured very large",
"title": ""
},
{
"docid": "36787667e41db8d9c164e39a89f0c533",
"text": "This paper presents an improvement of the well-known conventional three-phase diode bridge rectifier with dc output capacitor. The proposed circuit increases the power factor (PF) at the ac input and reduces the ripple current stress on the smoothing capacitor. The basic concept is the arrangement of an active voltage source between the output of the diode bridge and the smoothing capacitor which is controlled in a way that it emulates an ideal smoothing inductor. With this the input currents of the diode bridge which usually show high peak amplitudes are converted into a 120/spl deg/ rectangular shape which ideally results in a total PF of 0.955. The active voltage source mentioned before is realized by a low-voltage switch-mode converter stage of small power rating as compared to the output power of the rectifier. Starting with a brief discussion of basic three-phase rectifier techniques and of the drawbacks of three-phase diode bridge rectifiers with capacitive smoothing, the concept of the proposed active smoothing is described and the stationary operation is analyzed. Furthermore, control concepts as well as design considerations and analyses of the dynamic systems behavior are given. Finally, measurements taken from a laboratory model are presented.",
"title": ""
},
{
"docid": "1d1cec012f9f78b40a0931ae5dea53d0",
"text": "Recursive subdivision using interval arithmetic allows us to render CSG combinations of implicit function surfaces with or without anti -aliasing, Related algorithms will solve the collision detection problem for dynamic simulation, and allow us to compute mass. center of gravity, angular moments and other integral properties required for Newtonian dynamics. Our hidden surface algorithms run in ‘constant time.’ Their running times are nearly independent of the number of primitives in a scene, for scenes in which the visible details are not much smaller than the pixels. The collision detection and integration algorithms are utterly robust — collisions are never missed due 10 numerical error and we can provide guaranteed bounds on the values of integrals. CR",
"title": ""
},
{
"docid": "c24bd4156e65d57eda0add458304988c",
"text": "Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. Among them, graphene-based plasmonic miniaturized antennas (or shortly named, graphennas) are garnering growing interest in the field of communications. In light of their reduced size, in the micrometric range, and an expected radiation frequency of a few terahertz, graphennas offer means for the implementation of ultra-short-range wireless communications. Motivated by their high radiation frequency and potentially wideband nature, this paper presents a methodology for the time-domain characterization and evaluation of graphennas. The proposed framework is highly vertical, as it aims to build a bridge between technological aspects, antenna design, and communications. Using this approach, qualitative and quantitative analyses of a particular case of graphenna are carried out as a function of two critical design parameters, namely, chemical potential and carrier mobility. The results are then compared to the performance of equivalent metallic antennas. Finally, the suitability of graphennas for ultra-short-range communications is briefly discussed.",
"title": ""
},
{
"docid": "ed509de8786ee7b4ba0febf32d0c87f7",
"text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "895d5b01e984ef072b834976e0dfe378",
"text": "Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-theart methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the GromovWasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "ec26505d813ed98ac3f840ea54358873",
"text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.",
"title": ""
},
{
"docid": "06ba0cd00209a7f4f200395b1662003e",
"text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.",
"title": ""
},
{
"docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a",
"text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.",
"title": ""
},
{
"docid": "5daeccb1a01df4f68f23c775828be41d",
"text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.",
"title": ""
}
] |
scidocsrr
|
6dc8bd3bc0c04c92fc132f2697cdf226
|
Combining control-flow integrity and static analysis for efficient and validated data sandboxing
|
[
{
"docid": "83c81ecb870e84d4e8ab490da6caeae2",
"text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.",
"title": ""
}
] |
[
{
"docid": "d945ae2fe20af58c2ca4812c797d361d",
"text": "Triple-negative breast cancers (TNBC) are genetically characterized by aberrations in TP53 and a low rate of activating point mutations in common oncogenes, rendering it challenging in applying targeted therapies. We performed whole-exome sequencing (WES) and RNA sequencing (RNA-seq) to identify somatic genetic alterations in mouse models of TNBCs driven by loss of Trp53 alone or in combination with Brca1 Amplifications or translocations that resulted in elevated oncoprotein expression or oncoprotein-containing fusions, respectively, as well as frameshift mutations of tumor suppressors were identified in approximately 50% of the tumors evaluated. Although the spectrum of sporadic genetic alterations was diverse, the majority had in common the ability to activate the MAPK/PI3K pathways. Importantly, we demonstrated that approved or experimental drugs efficiently induce tumor regression specifically in tumors harboring somatic aberrations of the drug target. Our study suggests that the combination of WES and RNA-seq on human TNBC will lead to the identification of actionable therapeutic targets for precision medicine-guided TNBC treatment.Significance: Using combined WES and RNA-seq analyses, we identified sporadic oncogenic events in TNBC mouse models that share the capacity to activate the MAPK and/or PI3K pathways. Our data support a treatment tailored to the genetics of individual tumors that parallels the approaches being investigated in the ongoing NCI-MATCH, My Pathway Trial, and ESMART clinical trials. Cancer Discov; 8(3); 354-69. ©2017 AACR.See related commentary by Natrajan et al., p. 272See related article by Matissek et al., p. 336This article is highlighted in the In This Issue feature, p. 253.",
"title": ""
},
{
"docid": "e2ce393fade02f0dfd20b9aca25afd0f",
"text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.",
"title": ""
},
{
"docid": "42b810b7ecd48590661cc5a538bec427",
"text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available.",
"title": ""
},
{
"docid": "ca41837dd01a66259854c03b820a46ff",
"text": "We present a supervised sequence to sequence transduction model with a hard attention mechanism which combines the more traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it provides state of the art results in various setups compared to the previous neural and non-neural approaches. Eventually we present an analysis of the learned representations for both hard and soft attention models, shedding light on the features such models extract in order to solve the task.",
"title": ""
},
{
"docid": "05d8383eb6b1c6434f75849859c35fd0",
"text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.",
"title": ""
},
{
"docid": "f91ba4b37a2a9d80e5db5ace34e6e50a",
"text": "Bearing currents and shaft voltages of an induction motor are measured under hardand soft-switching inverter excitation. The objective is to investigate whether the soft-switching technologies can provide solutions for reducing the bearing currents and shaft voltages. Two of the prevailing soft-switching inverters, the resonant dc-link inverter and the quasi-resonant dc-link inverter, are tested. The results are compared with those obtained using the conventional hard-switching inverter. To ensure objective comparisons between the softand hard-switching inverters, all inverters were configured identically and drove the same induction motor under the same operating conditions when the test data were collected. An insightful explanation of the experimental results is also provided to help understand the mechanisms of bearing currents and shaft voltages produced in the inverter drives. Consistency between the bearing current theory and the experimental results has been demonstrated. Conclusions are then drawn regarding the effectiveness of the soft-switching technologies as a solution to the bearing current and shaft voltage problems.",
"title": ""
},
{
"docid": "3eaba817610278c4b1a82036ccfb6cc4",
"text": "We propose to use thought-provoking children's questions (TPCQs), namely Highlights BrainPlay questions, to drive artificial intelligence research. These questions are designed to stimulate thought and learning in children , and they can be used to do the same thing in AI systems. We introduce the TPCQ task, which consists of taking a TPCQ question as input and producing as output both (1) answers to the question and (2) learned generalizations. We discuss how BrainPlay questions stimulate learning. We analyze 244 BrainPlay questions, and we report statistics on question type, question class, answer cardinality, answer class, types of knowledge needed, and types of reasoning needed. We find that BrainPlay questions span many aspects of intelligence. We envision an AI system based on the society of mind (Minsky 1986; Minsky 2006) consisting of a multilevel architecture with diverse resources that run in parallel to jointly answer and learn from questions. Because the answers to BrainPlay questions and the generalizations learned from them are often highly open-ended, we suggest using human judges for evaluation.",
"title": ""
},
{
"docid": "b4b20c33b7f683cfead2fede8088f09b",
"text": "Bus protection is typically a station-wide protection function, as it uses the majority of the high voltage (HV) electrical signals available in a substation. All current measurements that define the bus zone of protection are needed. Voltages may be included in bus protection relays, as the number of voltages is relatively low, so little additional investment is not needed to integrate them into the protection system. This paper presents a new Distributed Bus Protection System that represents a step forward in the concept of a Smart Substation solution. This Distributed Bus Protection System has been conceived not only as a protection system, but as a platform that incorporates the data collection from the HV equipment in an IEC 61850 process bus scheme. This new bus protection system is still a distributed bus protection solution. As opposed to dedicated bay units, this system uses IEC 61850 process interface units (that combine both merging units and contact I/O) for data collection. The main advantage then, is that as the bus protection is deployed, it is also deploying the platform to do data collection for other protection, control, and monitoring functions needed in the substation, such as line, transformer, and feeder. By installing the data collection pieces, this provides for the simplification of engineering tasks, and substantial savings in wiring, number of components, cabinets, installation, and commissioning. In this way the new bus protection system is the gateway to process bus, as opposed to an addon to a process bus system. The paper analyzes and describes the new Bus Protection System as a new conceptual design for a Smart Substation, highlighting the advantages in a vision that comprises not only a single element, but the entire installation. Keyword: Current Transformer, Digital Fault Recorder, Fiber Optic Cable, International Electro Technical Commission, Process Interface Units",
"title": ""
},
{
"docid": "ca6001c3ed273b4f23565f4d40ddeb29",
"text": "Learning semantic representations and tree structures of bilingual phrases is beneficial for statistical machine translation. In this paper, we propose a new neural network model called Bilingual Correspondence Recursive Autoencoder (BCorrRAE) to model bilingual phrases in translation. We incorporate word alignments into BCorrRAE to allow it freely access bilingual constraints at different levels. BCorrRAE minimizes a joint objective on the combination of a recursive autoencoder reconstruction error, a structural alignment consistency error and a crosslingual reconstruction error so as to not only generate alignment-consistent phrase structures, but also capture different levels of semantic relations within bilingual phrases. In order to examine the effectiveness of BCorrRAE, we incorporate both semantic and structural similarity features built on bilingual phrase representations and tree structures learned by BCorrRAE into a state-of-the-art SMT system. Experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.55 BLEU points over the baseline.",
"title": ""
},
{
"docid": "f698b77df48a5fac4df7ba81b4444dd5",
"text": "Discontinuous-conduction mode (DCM) operation is usually employed in DC-DC converters for small inductor on printed circuit board (PCB) and high efficiency at light load. However, it is normally difficult for synchronous converter to realize the DCM operation, especially in high frequency applications, which requires a high speed and high precision comparator to detect the zero crossing point at cost of extra power losses. In this paper, a novel zero current detector (ZCD) circuit with an adaptive delay control loop for high frequency synchronous buck converter is presented. Compared to the conventional ZCD, proposed technique is proven to offer 8.5% efficiency enhancement when performed in a buck converter at the switching frequency of 4MHz and showed less sensitivity to the transistor mismatch of the sensor circuit.",
"title": ""
},
{
"docid": "5bebef3a6ca0d595b6b3232e18f8789f",
"text": "The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.",
"title": ""
},
{
"docid": "bac623d79d39991032fc46cc215b9fdd",
"text": "The convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive. For these applications, end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device. This dissertation designs and implements a new architectural element called a cloudlet, that arises from the convergence of mobile computing and cloud computing. Cloudlets represent the middle tier of a 3-tier hierarchy, mobile device — cloudlet — cloud, to achieve the right balance between cloud consolidation and network responsiveness. We first present quantitative evidence that shows cloud location can affect the performance of mobile applications and cloud consolidation. We then describe an architectural solution using cloudlets that are a seamless extension of todays cloud computing infrastructure. Finally, we define minimal functionalities that cloudlets must offer above/beyond standard cloud computing, and address corresponding technical challenges.",
"title": ""
},
{
"docid": "0b71458d700565bec9b91318023243df",
"text": "The Humor Styles Questionnaire (HSQ; Martin et al., 2003) is one of the most frequently used questionnaires in humor research and has been adapted to several languages. The HSQ measures four humor styles (affiliative, self-enhancing, aggressive, and self-defeating), which should be adaptive or potentially maladaptive to psychosocial well-being. The present study analyzes the internal consistency, factorial validity, and factorial invariance of the HSQ on the basis of several German-speaking samples combined (total N = 1,101). Separate analyses were conducted for gender (male/female), age groups (16-24, 25-35, >36 years old), and countries (Germany/Switzerland). Internal consistencies were good for the overall sample and the demographic subgroups (.80-.89), with lower values obtained for the aggressive scale (.66-.73). Principal components and confirmatory factor analyses mostly supported the four-factor structure of the HSQ. Weak factorial invariance was found across gender and age groups, while strong factorial invariance was supported across countries. Two subsamples also provided self-ratings on ten styles of humorous conduct (n = 344) and of eight comic styles (n = 285). The four HSQ scales showed small to large correlations to the styles of humorous conduct (-.54 to .65) and small to medium correlations to the comic styles (-.27 to .42). The HSQ shared on average 27.5-35.0% of the variance with the styles of humorous conduct and 13.0-15.0% of the variance with the comic styles. Thus-despite similar labels-these styles of humorous conduct and comic styles differed from the HSQ humor styles.",
"title": ""
},
{
"docid": "e677799d3bee1b25e74dc6c547c1b6c2",
"text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.",
"title": ""
},
{
"docid": "fdaf0a7bc6dfa30d0c3ed3a96950d8c8",
"text": "In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile highand band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.",
"title": ""
},
{
"docid": "2af0ef7c117ace38f44a52379c639e78",
"text": "Examination of a child with genital or anal disease may give rise to suspicion of sexual abuse. Dermatologic, traumatic, infectious, and congenital disorders may be confused with sexual abuse. Seven children referred to us are representative of such confusion.",
"title": ""
},
{
"docid": "52017fa7d6cf2e6a18304b121225fc6f",
"text": "In comparison to dense matrices multiplication, sparse matrices multiplication real performance for CPU is roughly 5–100 times lower when expressed in GFLOPs. For sparse matrices, microprocessors spend most of the time on comparing matrices indices rather than performing floating-point multiply and add operations. For 16-bit integer operations, like indices comparisons, computational power of the FPGA significantly surpasses that of CPU. Consequently, this paper presents a novel theoretical study how matrices sparsity factor influences the indices comparison to floating-point operation workload ratio. As a result, a novel FPGAs architecture for sparse matrix-matrix multiplication is presented for which indices comparison and floating-point operations are separated. We also verified our idea in practice, and the initial implementations results are very promising. To further decrease hardware resources required by the floating-point multiplier, a reduced width multiplication is proposed in the case when IEEE-754 standard compliance is not required.",
"title": ""
},
{
"docid": "6341eaeb32d0e25660de6be6d3943e81",
"text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.",
"title": ""
},
{
"docid": "47ef46ef69a23e393d8503154f110a81",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "028be19d9b8baab4f5982688e41bfec8",
"text": "The activation function for neurons is a prominent element in the deep learning architecture for obtaining high performance. Inspired by neuroscience findings, we introduce and define two types of neurons with different activation functions for artificial neural networks: excitatory and inhibitory neurons, which can be adaptively selected by selflearning. Based on the definition of neurons, in the paper we not only unify the mainstream activation functions, but also discuss the complementariness among these types of neurons. In addition, through the cooperation of excitatory and inhibitory neurons, we present a compositional activation function that leads to new state-of-the-art performance comparing to rectifier linear units. Finally, we hope that our framework not only gives a basic unified framework of the existing activation neurons to provide guidance for future design, but also contributes neurobiological explanations which can be treated as a window to bridge the gap between biology and computer science.",
"title": ""
}
] |
scidocsrr
|
ec3c9b3126a6eef574a0668a06629594
|
Comparison of Unigram, Bigram, HMM and Brill's POS tagging approaches for some South Asian languages
|
[
{
"docid": "89aa60cefe11758e539f45c5cba6f48a",
"text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html",
"title": ""
}
] |
[
{
"docid": "b428ee2a14b91fee7bb80058e782774d",
"text": "Recurrent connectionist networks are important because they can perform temporally extended tasks, giving them considerable power beyond the static mappings performed by the now-familiar multilayer feedforward networks. This ability to perform highly nonlinear dynamic mappings makes these networks particularly interesting to study and potentially quite useful in tasks which have an important temporal component not easily handled through the use of simple tapped delay lines. Some examples are tasks involving recognition or generation of sequential patterns and sensorimotor control. This report examines a number of learning procedures for adjusting the weights in recurrent networks in order to train such networks to produce desired temporal behaviors from input-output stream examples. The procedures are all based on the computation of the gradient of performance error with respect to network weights, and a number of strategies for computing the necessary gradient information are described. Included here are approaches which are familiar and have been rst described elsewhere, along with several novel approaches. One particular purpose of this report is to provide uniform and detailed descriptions and derivations of the various techniques in order to emphasize how they relate to one another. Another important contribution of this report is a detailed analysis of the computational requirements of the various approaches discussed.",
"title": ""
},
{
"docid": "4cb25adf48328e1e9d871940a97fdff2",
"text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.",
"title": ""
},
{
"docid": "3e570e415690daf143ea30a8554b0ac8",
"text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "997a0392359ae999dfca6a0d339ea27f",
"text": "Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined. One type of anomaly, for example, concerns the fact that, with certain reference strings and paging algorithms, an increase in mean memory allocation may result in an increase in fault rate. Two paging algorithms, the page fault frequency and working set algorithms, are examined in terms of their anomaly potential, and reference string examples of various anomalies are presented. Two paging algorithm properties, the inclusion property and the generalized inclusion property, are discussed and the anomaly implications of these properties presented.",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "13150a58d86b796213501d26e4b41e5b",
"text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).",
"title": ""
},
{
"docid": "27d8022f6545503c1145d46dfd30c1db",
"text": "Research has demonstrated support for objectification theory and has established that music affects listeners’ thoughts and behaviors, however, no research to date joins these two fields. The present study considers potential effects of objectifying hip hop songs on female listeners. Among African American participants, exposure to an objectifying song resulted in increased self-objectification. However, among White participants, exposure to an objectifying song produced no measurable difference in self-objectification. This finding along with interview data suggests that white women distance themselves from objectifying hip hop songs, preventing negative effects of such music. EFFECTS OF OBJECTIFYING HIP HOP 3 The Effects of Objectifying Hip-Hop Lyrics on Female Listeners Music is an important part of adolescents’ and young adults’ lives. It is a way to learn about our social world, express emotions, and relax (Agbo-Quaye, 2010). Music today is highly social, shared and listened to in social situations as a way to bolster the mood or experience. However, the effects of music are not always positive. Considering this, how does music affect young adults? Specifically, how does hip-hop music with objectifying lyrics affect female listeners? To begin to answer this question, I will first present previous research on music’s effects, specifically the effects of aggressive, sexualized, and misogynistic lyrics. Next, I will discuss theories regarding the processing of lyrics. Another important aspect of this question is objectification theory, thus I will explain this theory and the evidence to support it. I will then discuss further applications of this theory to various visual media forms. Finally, I will describe gaps in research, as well as the importance of this study. Multiple studies have looked at the effects of music’s lyrics on listeners. Various aspects and trends in popular music have been considered. Anderson, Carnagey, and Eubanks (2003) examined the effects of songs with violent lyrics on listeners. Participants who had been exposed to songs with violent lyrics reported feeling more hostile than those who listened to songs with non-violent lyrics. Those exposed to violent lyrics also had an increase in aggressive thoughts. Researchers also considered trait hostility and found that, although correlated with state hostility, it did not account for the differences in condition. Other studies have explored music’s effects on behavior. One such study considered the effects of exposure to sexualized lyrics (Carpentier, Knobloch-Westerwick, & Blumhoff, 2007). After exposure to overtly sexualized pop lyrics, participants rated potential romantic partners EFFECTS OF OBJECTIFYING HIP HOP 4 with a stronger emphasis on sexual appeal in comparison to the ratings of those participants who heard nonsexual pop songs. Another study exposed male participants to either sexually aggressive misogynistic lyrics or neutral lyrics (Fischer & Greitemeyer, 2006). Those participants who had been exposed to the sexually aggressive lyrics demonstrated more aggressive behaviors towards females. The study was replicated with female participants and aggressive man-hating lyrics and similar results were found. Similarly, another study found that exposure to misogynous rap music influenced sexually aggressive behaviors (Barongan & Hall, 1995). 
Participants were exposed to either misogynous or neutral rap songs and then presented with three vignettes and were informed they would have to select one to share with a female confederate. Those who listened to the misogynous song selected the assaultive vignette at a significantly higher rate. The selection of the assaultive vignette demonstrated sexually aggressive behavior. These studies demonstrate the real and disturbing effects that music can have on listener’s behaviors. There are multiple theories as to why these lyrical effects are found. Some researchers suggest that social learning and cultivation theories are responsible (Sprankle & End, 2009). Both theories argue that our thoughts and our actions are influenced by what we see. Social learning theory suggests that observing others’ behaviors and the responses they receive will influence the observer’s behavior. As most rap music depicts the positive outcomes of increased sexual activity and objectification of women and downplays or omits the negative outcomes, listeners will start to engage in these activities and consider them acceptable. Cultivation theory argues that the more a person observes the world of sex portrayed in objectifying music, the more likely they are to believe that that world is reality. That is, the more they see “evidence” of EFFECTS OF OBJECTIFYING HIP HOP 5 the attitudes and behaviors portrayed in hip hop, the more likely they are to believe that such behaviors are normal. Cobb and Boettcher (2007) suggest that theories of priming and social stereotyping support the findings that exposure to misogynistic music increases sexist views. They also suggest that some observed gender differences in these responses are the result of different kinds of information processing. Women, as the targets of these lyrics, will process misogynistic lyrics centrally and will attempt to understand the information they are receiving more thoroughly. Thus, they are more likely to reject the lyrics. This finding highlights the importance of attention and how the lyrics are received and the impact these factors can have on listeners’ reactions. These theories were supported in their study as participants exposed to misogynistic music demonstrated few differences from the control group, in which participants were not exposed to any music, in levels of hostile and benevolent sexism (Cobb & Boettcher, 2007). However, exposure to nonmisogynistic rap resulted in significantly increased levels of hostile and benevolent sexism. Researchers suggested that this may be because the processing of misogynistic lyrics meant that listeners were aware of the sexism present in the lyrics and thus the music was unable to prime their latent sexism. However, we live in a society in which rap music is associated with misogyny and violence (Fried, 1999). When participants listened to nonmisogynistic lyrics this association was primed. Because the lyrics weren’t explicit the processing involved was not critical and these assumptions went unchallenged and latent sexism was primed. Objectification theory provides another hypothesis for the processing and potential effects of media. Objectification theory posits that in a society in which women are frequently objectified, that is, seen as bodies that perform tasks rather than as people, women begin to selfEFFECTS OF OBJECTIFYING HIP HOP 6 objectify, or see themselves as objects for others’ viewing (Fredrickson & Roberts, 1997). They internalize an outsider’s perspective of their body. 
This self-objectification comes with anxiety and shame as well as frequent appearance monitoring (Fredrickson & Roberts, 1997). The authors suggest that the frequent objectification and self-objectification that occurs in our society could contribute to depression and eating disorders. They also suggest that frequent selfmonitoring, shame, and anxiety could make it difficult to reach and maintain peak motivational states (that is, an extended period of time in which we are voluntarily absorbed in a challenging physical or mental task with the goal of accomplishing something that’s considered worthwhile). These states are psychologically beneficial. Multiple studies support this theory. One such study looked at the effects of being in a self-objectifying state on the ability to reach and maintain a peak motivational state (Fredrickson, Roberts, Noll, Quinn, & Twenge, 1998). Participants were asked to try on either a swimsuit or a sweater and spend some time in that article of clothing. After this time they were asked questions about their self-objectifying behaviors and attitudes, such as depressed mood, self-surveillance, and body shame. They were then asked to complete a difficult math task, an activity meant to produce a peak motivational state. A similar study was completed with members of different ethnic groups (Hebl, King, & Lin, 2004). In this study a nearly identical procedure was followed. In addition, researchers aimed to create a more objectifying state for men, having them wear Speedos rather than swim trunks. In both of these studies female participants wearing swimsuits performed significantly worse on the math test than female participants wearing sweaters. There were no significant differences between the swim trunks and sweater conditions for male participants. However, when male participants wore Speedos they performed significantly worse on the math test. Further, the results of measures of self-objectifying EFFECTS OF OBJECTIFYING HIP HOP 7 behaviors, like body shame and surveillance, were significantly higher for those in the swimsuit condition. These findings demonstrate support for objectification theory and suggest that it crosses ethnic boundaries. The decreased math scores for men in Speedos suggest that it is possible to put anyone in a self-objectifying state. However, it is women who most often find themselves in this situation in our society. With empirical support for the central premises of objectification theory, research has turned to effects of popular media on self-objectification of women. One such study looked at the links between music video consumption, self-surveillance, body esteem, dieting status, depressive symptoms, and math confidence (Grabe & Hyde, 2009). Researchers found a positive relationship between music video consumption, self-objectification, and the host of psychological factors proposed by Fredrickson and Roberts, such that as music video consumption increased, so did self-objectifying behaviors. Another study looked at the effects of portrayals of the thin ideal in m",
"title": ""
},
{
"docid": "41a54cd203b0964a6c3d9c2b3addff46",
"text": "Increasing occupancy rates and revenue by improving customer experience is the aim of modern hospitality organizations. To achieve these results, hotel managers need to have a deep knowledge of customers’ needs, behavior, and preferences and be aware of the ways in which the services delivered create value for the customers and then stimulate their retention and loyalty. In this article a methodological framework to analyze the guest–hotel relationship and to profile hotel guests is discussed, focusing on the process of designing a customer information system and particularly the guest information matrix on which the system database will be built.",
"title": ""
},
{
"docid": "b333be40febd422eae4ae0b84b8b9491",
"text": "BACKGROUND\nRarely, basal cell carcinomas (BCCs) have the potential to become extensively invasive and destructive, a phenomenon that has led to the term \"locally advanced BCC\" (laBCC). We identified and described the diverse settings that could be considered \"locally advanced\".\n\n\nMETHODS\nThe panel of experts included oncodermatologists, dermatological and maxillofacial surgeons, pathologists, radiotherapists and geriatricians. During a 1-day workshop session, an interactive flow/sequence of questions and inputs was debated.\n\n\nRESULTS\nDiscussion of nine cases permitted us to approach consensus concerning what constitutes laBCC. The expert panel retained three major components for the complete assessment of laBCC cases: factors of complexity related to the tumour itself, factors related to the operability and the technical procedure, and factors related to the patient. Competing risks of death should be precisely identified. To ensure homogeneous multidisciplinary team (MDT) decisions in different clinical settings, the panel aimed to develop a practical tool based on the three components.\n\n\nCONCLUSION\nThe grid presented is not a definitive tool, but rather, it is a method for analysing the complexity of laBCC.",
"title": ""
},
{
"docid": "b0d11ab83aa6ae18d1a2be7c8e8803b5",
"text": "Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response-as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic--strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.",
"title": ""
},
{
"docid": "508ce0c5126540ad7f46b8f375c50df8",
"text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
},
{
"docid": "913777c94a55329ddf42955900a51096",
"text": "In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal.",
"title": ""
},
{
"docid": "659deeead04953483a3ed6c5cc78cd76",
"text": "We describe ParsCit, a freely available, open-source imple entation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label th token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference string s from a plain text file, and to retrieve the citation contexts . The package comes with utilities to run it as a web service or as a standalone uti lity. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",
"title": ""
},
{
"docid": "6f410e93fa7ab9e9c4a7a5710fea88e2",
"text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.",
"title": ""
},
{
"docid": "fe77a632bae11d9333cd867960e47375",
"text": "Here we present a projection augmented reality (AR) based assistive robot, which we call the Pervasive Assistive Robot System (PARS). The PARS aims to improve the quality of life by of the elderly and less able-bodied. In particular, the proposed system will support dynamic display and monitoring systems, which will be helpful for older adults who have difficulty moving their limbs and who have a weak memory.We attempted to verify the usefulness of the PARS using various scenarios. We expected that PARSs will be used as assistive robots for people who experience physical discomfort in their daily lives.",
"title": ""
},
{
"docid": "97af4f8e35a7d773bb85969dd027800b",
"text": "For an intelligent transportation system (ITS), traffic incident detection is one of the most important issues, especially for urban area which is full of signaled intersections. In this paper, we propose a novel traffic incident detection method based on the image signal processing and hidden Markov model (HMM) classifier. First, a traffic surveillance system was set up at a typical intersection of china, traffic videos were recorded and image sequences were extracted for image database forming. Second, compressed features were generated through several image processing steps, image difference with FFT was used to improve the recognition rate. Finally, HMM was used for classification of traffic signal logics (East-West, West-East, South-North, North-South) and accident of crash, the total correct rate is 74% and incident recognition rate is 84%. We believe, with more types of incident adding to the database, our detection algorithm could serve well for the traffic surveillance system.",
"title": ""
}
] |
scidocsrr
|
9751bcc37c86fa0f0834e3c7a3ce1381
|
Robust Capped Norm Nonnegative Matrix Factorization: Capped Norm NMF
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
}
] |
[
{
"docid": "0dd78cb46f6d2ddc475fd887a0dc687c",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
{
"docid": "299deaffdd1a494fc754b9e940ad7f81",
"text": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model. Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches’ accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.",
"title": ""
},
{
"docid": "f58d69de4b5bcc4100a3bfe3426fa81f",
"text": "This study evaluated the factor structure of the Rosenberg Self-Esteem Scale (RSES) with a diverse sample of 1,248 European American, Latino, Armenian, and Iranian adolescents. Adolescents completed the 10-item RSES during school as part of a larger study on parental influences and academic outcomes. Findings suggested that method effects in the RSES are more strongly associated with negatively worded items across three diverse groups but also more pronounced among ethnic minority adolescents. Findings also suggested that accounting for method effects is necessary to avoid biased conclusions regarding cultural differences in selfesteem and how predictors are related to the RSES. Moreover, the two RSES factors (positive self-esteem and negative self-esteem) were differentially predicted by parenting behaviors and academic motivation. Substantive and methodological implications of these findings for crosscultural research on adolescent self-esteem are discussed.",
"title": ""
},
{
"docid": "f2a9d15d9b38738d563f9d9f9fa5d245",
"text": "Mobile cellular networks have become both the generators and carriers of massive data. Big data analytics can improve the performance of mobile cellular networks and maximize the revenue of operators. In this paper, we introduce a unified data model based on the random matrix theory and machine learning. Then, we present an architectural framework for applying the big data analytics in the mobile cellular networks. Moreover, we describe several illustrative examples, including big signaling data, big traffic data, big location data, big radio waveforms data, and big heterogeneous data, in mobile cellular networks. Finally, we discuss a number of open research challenges of the big data analytics in the mobile cellular networks.",
"title": ""
},
{
"docid": "232eabfb63f0b908ef3a64d0731ba358",
"text": "This paper reviews the potential of spin-transfer torque devices as an alternative to complementary metal-oxide-semiconductor for non-von Neumann and non-Boolean computing. Recent experiments on spin-transfer torque devices have demonstrated high-speed magnetization switching of nanoscale magnets with small current densities. Coupled with other properties, such as nonvolatility, zero leakage current, high integration density, we discuss that the spin-transfer torque devices can be inherently suitable for some unconventional computing models for information processing. We review several spintronic devices in which magnetization can be manipulated by current induced spin transfer torque and explore their applications in neuromorphic computing and reconfigurable memory-based computing.",
"title": ""
},
{
"docid": "dc6aafe2325dfdea5e758a30c90d8940",
"text": "When a query is submitted to a search engine, the search engine returns a dynamically generated result page containing the result records, each of which usually consists of a link to and/or snippet of a retrieved Web page. In addition, such a result page often also contains information irrelevant to the query, such as information related to the hosting site of the search engine and advertisements. In this paper, we present a technique for automatically producing wrappers that can be used to extract search result records from dynamically generated result pages returned by search engines. Automatic search result record extraction is very important for many applications that need to interact with search engines such as automatic construction and maintenance of metasearch engines and deep Web crawling. The novel aspect of the proposed technique is that it utilizes both the visual content features on the result page as displayed on a browser and the HTML tag structures of the HTML source file of the result page. Experimental results indicate that this technique can achieve very high extraction accuracy.",
"title": ""
},
{
"docid": "7b1dad9f2e8a2a454fe01bab4cca47a3",
"text": "We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.",
"title": ""
},
{
"docid": "ecd541de66690a9f2aa5341646a63742",
"text": "The purpose is to determine whether use of perioperative antibiotics for more than 24 h decreases the incidence of SSI in neonates and infants. We studied neonates and infants who had clean–contaminated or contaminated gastrointestinal operations from 1996 to 2006. Patient- and operation-related variables, duration of perioperative antibiotics, and SSI within 30 days were ascertained by retrospective chart review. In assessing the effects of antibiotic duration, we controlled for confounding by indication using standard covariate adjustment and propensity score matching. Among 732 operations, the incidence of SSI was 13 %. Using propensity score matching, the odds of SSI were similar (OR 1.1, 95 % CI 0.6–1.9) in patients who received ≤24 h of postoperative antibiotics compared to >24 h. No difference was also found in standard covariate adjustment. This multivariate model identified three independent predictors of SSI: preoperative infection (OR 3.9, 95 % CI 1.4–10.9) and re-operation through the same incision, both within 30 days (OR 3.5, 95 % CI 1.7–7.4) and later (OR 2.3, 95 % CI 1.4–3.8). In clean–contaminated and contaminated gastrointestinal operations, giving >24 h of postoperative antibiotics offered no protection against SSI. An adequately powered randomized clinical trial is needed to conclusively evaluate longer duration antibiotic prophylaxis.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
{
"docid": "bf8f46e4c85f7e45879cee4282444f78",
"text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.",
"title": ""
},
{
"docid": "1b5a28c875cf49eadac7032d3dd6398f",
"text": "This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our £nding that very different principles from those of common sense blending are needed for some creative works.",
"title": ""
},
{
"docid": "77796f30d8d1604c459fb3f3fe841515",
"text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices. The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved. European Journal of Operational Research 143 (2002) 1–18 www.elsevier.com/locate/dsw * Corresponding author. Tel.: +1-404-894-2317; fax: +1-404-894-2301. E-mail address: marc.goetschalckx@isye.gatech.edu (M. Goetschalckx). 0377-2217/02/$ see front matter 2002 Elsevier Science B.V. All rights reserved. PII: S0377-2217 (02 )00142-X",
"title": ""
},
{
"docid": "294d29b68d67d5be0d9fb88dd6329e34",
"text": "A semi-recurrent hybrid VAE-GAN model for generating sequential data is introduced. In order to consider the spatial correlation of the data in each frame of the generated sequence, CNNs are utilized in the encoder, generator, and discriminator. The subsequent frames are sampled from the latent distributions obtained by encoding the previous frames. As a result, the dependencies between the frames are maintained. Two testing frameworks for synthesizing a sequence with any number of frames are also proposed. The promising experimental results on piano music generation indicates the potential of the proposed framework in modelling other sequential data such as video.",
"title": ""
},
{
"docid": "b12049aac966497b17e075c2467151dd",
"text": "IV HLA-G and HLA-E alleles and RPL HLA-G and HLA-E gene polymorphism in patients with Idiopathic Recurrent Pregnancy Loss in Gaza strip",
"title": ""
},
{
"docid": "70a534183750abab91aa74710027a092",
"text": "We consider whether sentiment affects the profitability of momentum strategies. We hypothesize that news that contradicts investors’ sentiment causes cognitive dissonance, slowing the diffusion of such news. Thus, losers (winners) become underpriced under optimism (pessimism). Shortselling constraints may impede arbitraging of losers and thus strengthen momentum during optimistic periods. Supporting this notion, we empirically show that momentum profits arise only under optimism. An analysis of net order flows from small and large trades indicates that small investors are slow to sell losers during optimistic periods. Momentum-based hedge portfolios formed during optimistic periods experience long-run reversals. JFQ_481_2013Feb_Antoniou-Doukas-Subrahmanyam_ms11219_SH_FB_0122_DraftToAuthors.pdf",
"title": ""
},
{
"docid": "fb1c4605eb6663fdd04e9ac1579e7ff0",
"text": "We present an enhanced autonomous indoor navigation system for a stock quadcopter drone where all navigation commands are derived off-board on a base station. The base station processes the video stream transmitted from a forward-facing camera on the drone to determine the drone's physical disposition and trajectory in building hallways to derive steering commands that are sent to the drone. Off-board processing and the lack of on-board sensors for localizing the drone permits standard mid-range quadcopters to be used and conserves the limited power source on the quadcopter. We introduce improved and new techniques, compared to our prototype system [1], to maintain stable flights, estimate distance to hallway intersections and describe algorithms to stop the drone ahead of time and turn correctly at intersections.",
"title": ""
},
{
"docid": "a18da0c7d655fee44eebdf61c7371022",
"text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also in a preliminary study of quality assessment in the presence of transmission losses.",
"title": ""
},
{
"docid": "8734436dbd821d7a1bb0d2de97ba44d3",
"text": "What makes a face attractive and why do we have the preferences we do? Emergence of preferences early in development and cross-cultural agreement on attractiveness challenge a long-held view that our preferences reflect arbitrary standards of beauty set by cultures. Averageness, symmetry, and sexual dimorphism are good candidates for biologically based standards of beauty. A critical review and meta-analyses indicate that all three are attractive in both male and female faces and across cultures. Theorists have proposed that face preferences may be adaptations for mate choice because attractive traits signal important aspects of mate quality, such as health. Others have argued that they may simply be by-products of the way brains process information. Although often presented as alternatives, I argue that both kinds of selection pressures may have shaped our perceptions of facial beauty.",
"title": ""
},
{
"docid": "b02ebfa85f0948295b401152c0190d74",
"text": "SAGE has had a remarkable impact at Microsoft.",
"title": ""
}
] |
scidocsrr
|
20c49ce8a94be9f93d4a86ed7e1f84b6
|
Context-Aware Correlation Filter Tracking
|
[
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "aee250663a05106c4c0fad9d0f72828c",
"text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.",
"title": ""
}
] |
[
{
"docid": "49736d49ee7b777523064efcd99c5cbb",
"text": "Immune checkpoint antagonists (CTLA-4 and PD-1/PD-L1) and CAR T-cell therapies generate unparalleled durable responses in several cancers and have firmly established immunotherapy as a new pillar of cancer therapy. To extend the impact of immunotherapy to more patients and a broader range of cancers, targeting additional mechanisms of tumor immune evasion will be critical. Adenosine signaling has emerged as a key metabolic pathway that regulates tumor immunity. Adenosine is an immunosuppressive metabolite produced at high levels within the tumor microenvironment. Hypoxia, high cell turnover, and expression of CD39 and CD73 are important factors in adenosine production. Adenosine signaling through the A2a receptor expressed on immune cells potently dampens immune responses in inflamed tissues. In this article, we will describe the role of adenosine signaling in regulating tumor immunity, highlighting potential therapeutic targets in the pathway. We will also review preclinical data for each target and provide an update of current clinical activity within the field. Together, current data suggest that rational combination immunotherapy strategies that incorporate inhibitors of the hypoxia-CD39-CD73-A2aR pathway have great promise for further improving clinical outcomes in cancer patients.",
"title": ""
},
{
"docid": "721ff703dfafad6b1b330226c36ed641",
"text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.",
"title": ""
},
{
"docid": "6420f394cb02e9415b574720a9c64e7f",
"text": "Interleaved power converter topologies have received increasing attention in recent years for high power and high performance applications. The advantages of interleaved boost converters include increased efficiency, reduced size, reduced electromagnetic emission, faster transient response, and improved reliability. The front end inductors in an interleaved boost converter are magnetically coupled to improve electrical performance and reduce size and weight. Compared to a direct coupled configuration, inverse coupling provides the advantages of lower inductor ripple current and negligible dc flux levels in the core. In this paper, we explore the possible advantages of core geometry on core losses and converter efficiency. Analysis of FEA simulation and empirical characterization data indicates a potential superiority of a square core, with symmetric 45deg energy storage corner gaps, for providing both ac flux balance and maximum dc flux cancellation when wound in an inverse coupled configuration.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "8c0588538b1b04193e80ef5ce5ad55a7",
"text": "Unlike traditional bipolar constrained liners, the Osteonics Omnifit constrained acetabular insert is a tripolar device, consisting of an inner bipolar bearing articulating within an outer, true liner. Every reported failure of the Omnifit tripolar implant has been by failure at the shell-bone interface (Type I failure), failure at the shell-liner interface (Type II failure), or failure of the locking mechanism resulting in dislocation of the bipolar-liner interface (Type III failure). In this report we present two cases of failure of the Omnifit tripolar at the bipolar-femoral head interface. To our knowledge, these are the first reported cases of failure at the bipolar-femoral head interface (Type IV failure). In addition, we described the first successful closed reduction of a Type IV failure.",
"title": ""
},
{
"docid": "536c739e6f0690580568a242e1d65ef3",
"text": "Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. However, the efficiency of an IDS depends primarily on both its configuration and its precision. The large amount of network traffic that needs to be analyzed, in addition to the increase in attacks’ sophistication, renders the optimization of intrusion detection an important requirement for infrastructure security, and a very active research subject. In the state of the art, a number of approaches have been proposed to improve the efficiency of intrusion detection and response systems. In this article, we review the works relying on decision-making techniques focused on game theory and Markov decision processes to analyze the interactions between the attacker and the defender, and classify them according to the type of the optimization problem they address. While these works provide valuable insights for decision-making, we discuss the limitations of these solutions as a whole, in particular regarding the hypotheses in the models and the validation methods. We also propose future research directions to improve the integration of game-theoretic approaches into IDS optimization techniques.",
"title": ""
},
{
"docid": "048cc782baeec3a7f46ef5ee7abf0219",
"text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.",
"title": ""
},
{
"docid": "a2f36e0f8abaa07124d446f6aa870491",
"text": "We explore the capabilities of Auto-Encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular we consider three input modalities: RGB images; depth images; and semantic label information. We seek to generate complete scene segmentations and depth maps, given images and partial and/or noisy depth and semantic data. We formulate this objective of reconstructing one or more types of scene data using a Multi-modal stacked Auto-Encoder. We show that suitably designed Multi-modal Auto-Encoders can solve the depth estimation and the semantic segmentation problems simultaneously, in the partial or even complete absence of some of the input modalities. We demonstrate our method using the outdoor dataset KITTI that includes LIDAR and stereo cameras. Our results show that as a means to estimate depth from a single image, our method is comparable to the state-of-the-art, and can run in real time (i.e., less than 40ms per frame). But we also show that our method has a significant advantage over other methods in that it can seamlessly use additional data that may be available, such as a sparse point-cloud and/or incomplete coarse semantic labels.",
"title": ""
},
{
"docid": "aa30fc0f921509b1f978aeda1140ffc0",
"text": "Arithmetic coding provides an e ective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an e cient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible e ect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.",
"title": ""
},
{
"docid": "d7eb92756c8c3fb0ab49d7b101d96343",
"text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.",
"title": ""
},
{
"docid": "74ff09a1d3ca87a0934a1b9095c282c4",
"text": "The cancer metastasis suppressor protein KAI1/CD82 is a member of the tetraspanin superfamily. Recent studies have demonstrated that tetraspanins are palmitoylated and that palmitoylation contributes to the organization of tetraspanin webs or tetraspanin-enriched microdomains. However, the effect of palmitoylation on tetraspanin-mediated cellular functions remains obscure. In this study, we found that tetraspanin KAI1/CD82 was palmitoylated when expressed in PC3 metastatic prostate cancer cells and that palmitoylation involved all of the cytoplasmic cysteine residues proximal to the plasma membrane. Notably, the palmitoylation-deficient KAI1/CD82 mutant largely reversed the wild-type KAI1/CD82's inhibitory effects on migration and invasion of PC3 cells. Also, palmitoylation regulates the subcellular distribution of KAI1/CD82 and its association with other tetraspanins, suggesting that the localized interaction of KAI1/CD82 with tetraspanin webs or tetraspanin-enriched microdomains is important for KAI1/CD82's motility-inhibitory activity. Moreover, we found that KAI1/CD82 palmitoylation affected motility-related subcellular events such as lamellipodia formation and actin cytoskeleton organization and that the alteration of these processes likely contributes to KAI1/CD82's inhibition of motility. Finally, the reversal of cell motility seen in the palmitoylation-deficient KAI1/CD82 mutant correlates with regaining of p130(CAS)-CrkII coupling, a signaling step important for KAI1/CD82's activity. Taken together, our results indicate that palmitoylation is crucial for the functional integrity of tetraspanin KAI1/CD82 during the suppression of cancer cell migration and invasion.",
"title": ""
},
{
"docid": "136a2f401b3af00f0f79b991ab65658f",
"text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.",
"title": ""
},
{
"docid": "10423f367850761fd17cf1b146361f34",
"text": "OBJECTIVE\nDetection and characterization of microcalcification clusters in mammograms is vital in daily clinical practice. The scope of this work is to present a novel computer-based automated method for the characterization of microcalcification clusters in digitized mammograms.\n\n\nMETHODS AND MATERIAL\nThe proposed method has been implemented in three stages: (a) the cluster detection stage to identify clusters of microcalcifications, (b) the feature extraction stage to compute the important features of each cluster and (c) the classification stage, which provides with the final characterization. In the classification stage, a rule-based system, an artificial neural network (ANN) and a support vector machine (SVM) have been implemented and evaluated using receiver operating characteristic (ROC) analysis. The proposed method was evaluated using the Nijmegen and Mammographic Image Analysis Society (MIAS) mammographic databases. The original feature set was enhanced by the addition of four rule-based features.\n\n\nRESULTS AND CONCLUSIONS\nIn the case of Nijmegen dataset, the performance of the SVM was Az=0.79 and 0.77 for the original and enhanced feature set, respectively, while for the MIAS dataset the corresponding characterization scores were Az=0.81 and 0.80. Utilizing neural network classification methodology, the corresponding performance for the Nijmegen dataset was Az=0.70 and 0.76 while for the MIAS dataset it was Az=0.73 and 0.78. Although the obtained high classification performance can be successfully applied to microcalcification clusters characterization, further studies must be carried out for the clinical evaluation of the system using larger datasets. The use of additional features originating either from the image itself (such as cluster location and orientation) or from the patient data may further improve the diagnostic value of the system.",
"title": ""
},
{
"docid": "813a0d47405d133263deba0da6da27a8",
"text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.",
"title": ""
},
{
"docid": "e59b203f3b104553a84603240ea467eb",
"text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.",
"title": ""
},
{
"docid": "06c3f32f07418575c700e2f0925f4398",
"text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.",
"title": ""
},
{
"docid": "a636f977eb29b870cefe040f3089de44",
"text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "be079999e630df22254e7aa8a9ecdcae",
"text": "Strokes are one of the leading causes of death and disability in the UK. There are two main types of stroke: ischemic and hemorrhagic, with the majority of stroke patients suffering from the former. During an ischemic stroke, parts of the brain lose blood supply, and if not treated immediately, can lead to irreversible tissue damage and even death. Ischemic lesions can be detected by diffusion weighted magnetic resonance imaging (DWI), but localising and quantifying these lesions can be a time consuming task for clinicians. Work has already been done in training neural networks to segment these lesions, but these frameworks require a large amount of manually segmented 3D images, which are very time consuming to create. We instead propose to use past examinations of stroke patients which consist of DWIs, corresponding radiological reports and diagnoses in order to develop a learning framework capable of localising lesions. This is motivated by the fact that the reports summarise the presence, type and location of the ischemic lesion for each patient, and thereby provide more context than a single diagnostic label. Acute lesions prediction is aided by an attention mechanism which implicitly learns which regions within the DWI are most relevant to the classification.",
"title": ""
}
] |
scidocsrr
|
4e97169528430631823341734e2375ec
|
Rich Image Captioning in the Wild
|
[
{
"docid": "6a1e614288a7977b72c8037d9d7725fb",
"text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"title": ""
},
{
"docid": "30260d1a4a936c79e6911e1e91c3a84a",
"text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.",
"title": ""
}
] |
[
{
"docid": "3a7a7fa5e41a6195ca16f172b72f89a1",
"text": "To integrate unpredictable human behavior in the assessment of active and passive pedestrian safety systems, we introduce a virtual reality (VR)-based pedestrian simulation system. The device uses the Xsens Motion Capture platform and can be used without additional infrastructure. To show the systems applicability for pedestrian behavior studies, we conducted a pilot study evaluating the degree of realism such a system can achieve in a typical unregulated pedestrian crossing scenario. Six participants had to estimate vehicle speeds and distances in four scenarios with varying gaps between vehicles. First results indicate an acceptable level of realism so that the device can be used for further user studies addressing pedestrian behavior, pedestrian interaction with (automated) vehicles, risk assessment and investigation of the pre-crash phase without the risk of injuries.",
"title": ""
},
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "b82c7c8f36ea16c29dfc5fa00a58b229",
"text": "Green cloud computing has become a major concern in both industry and academia, and efficient scheduling approaches show promising ways to reduce the energy consumption of cloud computing platforms while guaranteeing QoS requirements of tasks. Existing scheduling approaches are inadequate for realtime tasks running in uncertain cloud environments, because those approaches assume that cloud computing environments are deterministic and pre-computed schedule decisions will be statically followed during schedule execution. In this paper, we address this issue. We introduce an interval number theory to describe the uncertainty of the computing environment and a scheduling architecture to mitigate the impact of uncertainty on the task scheduling quality for a cloud data center. Based on this architecture, we present a novel scheduling algorithm (PRS) that dynamically exploits proactive and reactive scheduling methods, for scheduling real-time, aperiodic, independent tasks. To improve energy efficiency, we propose three strategies to scale up and down the system’s computing resources according to workload to improve resource utilization and to reduce energy consumption for the cloud data center. We conduct extensive experiments to compare PRS with four typical baseline scheduling algorithms. The experimental results show that PRS performs better than those algorithms, and can effectively improve the performance of a cloud data center.",
"title": ""
},
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "e2606242fcc89bfcf5c9c4cd71dd2c18",
"text": "This letter introduces the class of generalized punctured convolutional codes (GPCCs), which is broader than and encompasses the class of the standard punctured convolutional codes (PCCs). A code in this class can be represented by a trellis module, the GPCC trellis module, whose topology resembles that of the minimal trellis module. he GPCC trellis module for a PCC is isomorphic to the minimal trellis module. A list containing GPCCs with better distance spectrum than the best known PCCs with same code rate and trellis complexity is presented.",
"title": ""
},
{
"docid": "316e4fa32d0b000e6f833d146a9e0d80",
"text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.",
"title": ""
},
{
"docid": "b058bbc1485f99f37c0d72b960dd668b",
"text": "In two experiments short-term forgetting was investigated in a short-term cued recall task designed to examine proactive interference effects. Mixed modality study lists were tested at varying retention intervals using verbal and non-verbal distractor activities. When an interfering foil was read aloud and a target item read silently, strong PI effects were observed for both types of distractor activity. When the target was read aloud and followed by a verbal distractor activity, weak PI effects emerged. However, when a target item was read aloud and non-verbal distractor activity filled the retention interval, performance was immune to the effects of PI for at least eight seconds. The results indicate that phonological representations of items read aloud still influence performance after 15 seconds of distractor activity. Short-term Forgetting 3 Determinants of Short-term Forgetting: Decay, Retroactive Interference or Proactive Interference? Most current models of short-term memory assert that to-be-remembered items are represented in terms of easily degraded phonological representations. However, there is disagreement on how the traces become degraded. Some propose that trace degradation is due to decay brought about by the prevention of rehearsal (Baddeley, 1986; Burgess & Hitch, 1992; 1996), or a switch in attention (Cowan, 1993); others attribute degradation to retroactive interference (RI) from other list items (Nairne, 1990; Tehan & Fallon; in press; Tehan & Humphreys, 1998). We want to add proactive interference (PI) to the possible causes of short-term forgetting, and by showing how PI effects change as a function of the type of distractor task employed during a filled retention interval, we hope to evaluate the causes of trace degradation. By manipulating the type of distractor activity in a brief retention interval it is possible to test some of the assumptions about decay versus interference explanations of short-term forgetting. The decay position is quite straightforward. If rehearsal is prevented, then the trace should decay; the type of distractor activity should be immaterial as long as rehearsal is prevented. From the interference perspective both the Feature Model (Nairne, 1990) and the Tehan and Humphreys (1995,1998) connectionist model predict that there should be occasions where very little forgetting occurs. In the Feature Model items are represented as sets of modality dependent and modality independent features. Forgetting occurs when adjacent list items have common features. Some of the shared features of the first item are overwritten by the latter item, thereby producing a trace that bears only partial resemblance to the Short-term Forgetting 4 original item. One occasion in which interference would be minimized is when an auditory list is followed by a non-auditory distractor task. The modality dependent features of the list items would not be overwritten or degraded by the distractor activity because the modality dependent features of the list and distractor items are different to each other. By the same logic, a visually presented list should not be affected by an auditory distractor task, since modality specific features are again different in each case. In the Tehan and Humphreys (1995) approach, presentation modality is related to the strength of phonological representations that support recall. They assume that auditory activity produces stronger representations than does visual activity. 
Thus this model also predicts that when a list is presented auditorially, it will not be much affected by subsequent non-auditory distractor activity. However, in the case of a visual list with auditory distraction, the assumption would be that interference would be maximised. The phonological codes for the list items would be relatively weak in the first instance and a strong source of auditory retroactive interference follows. This prediction is the opposite of that derived from the Feature Model. Since PI effects appear to be sensitive to retention interval effects (Tehan & Humphreys, 1995; Wickens, Moody & Dow, 1981), we have chosen to employ a PI task to explore these differential predictions. We have recently developed a short-term cued recall task in which PI can easily be manipulated (Tehan & Humphreys, 1995; 1996; 1998). In this task, participants study a series of trials in which items are presented in blocks of four items with each trial consisting of either one or two blocks. Each trial has a target item that is an instance of either a taxonomic or rhyme category, and the category label is presented at test as a retrieval cue. The two-block trials are the important trials Short-term Forgetting 5 because it is in these trials that PI is manipulated. In these trials the two blocks are presented under directed forgetting instructions. That is, once participants find out that it is a two-block trial they are to forget the first block and remember the second block because the second block contains the target item. On control trials, all nontarget items in both blocks are unrelated to the target. On interference trials, a foil that is related to the target is embedded among three other to-be-forgotten fillers in the first block and the target is embedded among three unrelated filler items in the second block. Following the presentation of the second block the category cue is presented and subjects are asked to recall the word from the second block that is an instance of that category. Using this task we have been able to show that when taxonomic categories are used on an immediate test (e.g., dog is the foil, cat is the target and ANIMAL is the cue), performance is immune to PI. However, when recall is tested after a 2-second filled retention interval, PI effects are observed; target recall is depressed and the foil is often recalled instead of the target. In explaining these results, Tehan and Humphreys (1995) assumed that items were represented in terms of sets of features. The representation of an item was seen to involve both semantic and phonological features, with the phonological features playing a dominant role in item recall. They assumed that the cue would elicit the representations of the two items in the list, and that while the semantic features of both target and foil would be available, only the target would have active phonological features. Thus on an immediate test, knowing that the target ended in -at would make the task of discriminating between cat and dog relatively easy. On a delayed test they assumed that all phonological features were inactive and the absence of phonological information would make discrimination more difficult. Short-term Forgetting 6 A corollary of the Tehan and Humphreys (1995) assumption is that if phonological codes could be provided for a non-rhyming foil, then discrimination should again be problematic. 
Presentation modality is one variable that appears to produce differences in strength of phonological codes with reading aloud producing stronger representations than reading silently. Tehan and Humphreys (Experiment 5) varied the modality of the two blocks such that participants either read the first block silently and then read the second block aloud or vice versa. In the silent aloud condition performance was immune to PI. The assumption was that the phonological representation of the target item in the second block was very strong with the result that there were no problems in discrimination. However, PI effects were present in the aloud-silent condition. The phonological representation of the read-aloud foil appeared to serve as a strong source of competition to the read-silently target item. All the above research has been based on the premise that phonological representations for visually presented items are weak and rapidly lose their ability to support recall. This assumption seems tenable given that phonological similarity effects and phonological intrusion effects in serial recall are attenuated rapidly with brief periods of distractor activity (Conrad, 1967; Estes, 1973; Tehan & Humphreys, 1995). The cued recall experiments that have used a filled retention interval have always employed silent visual presentation of the study list and required spoken shadowing of the distractor items. That is, the phonological representations of both target and foil are assumed to be quite weak and the shadowing task would provide a strong source of interference. These are likely to be the conditions that produce maximum levels of PI. The patterns of PI may change with mixed modality study lists and alternative forms of distractor activity. For example, given a strong phonological representation of the target, weak representations of the foil and a weak source of Short-term Forgetting 7 retroactive interference, it might be possible to observe immunity to PI on a delayed test. The following experiments explore the relationship between presentation modality, distractor modality and PI Experiment 1 The Tehan and Humphreys (1995) mixed modality experiment indicated that PI effects were sensitive to the modalities of the first and second block of items. In the current study we use mixed modality study lists but this time include a two-second retention interval, the same as that used by Tehan and Humphreys. However, the modality of the distractor activity was varied as well. Participants either had to respond aloud verbally or make a manual response that did not involve any verbal output. From the Tehan and Humphreys perspective the assumption made is that the verbal distractor activity will produce more disruption to the phonological representation of the target item than will a non-verbal distractor activity and the PI will be observed. However, it is quite possible that with silent-aloud presentation and a non-verbal distractor activity immunity to PI might be maintained across a twosecond retention interval. From the Nairne perspective, interfe",
"title": ""
},
{
"docid": "b1239f2e9bfec604ac2c9851c8785c09",
"text": "BACKGROUND\nDecoding neural activities associated with limb movements is the key of motor prosthesis control. So far, most of these studies have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of single hand in non-invasive ways such as magnetoencephalogram (MEG) and electroencephalogram (EEG). Regarding these EEG studies, center-out reaching tasks have been employed. Yet whether hand velocity can be decoded using EEG recorded during a self-routed drawing task is unclear.\n\n\nMETHODS\nHere we collected whole-scalp EEG data of five subjects during a sequential 4-directional drawing task, and employed spatial filtering algorithms to extract the amplitude and power features of EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity by Kalman filtering and a smoothing algorithm.\n\n\nRESULTS\nThe average Pearson correlation coefficients between the measured and the decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels on motor, posterior parietal and occipital areas are most involved for the decoding of hand velocity. By comparing the decoding performance of the features from different frequency bands, we found that not only slow potentials in 0.1-4 Hz band but also oscillatory rhythms in 24-28 Hz band may carry the information of hand velocity.\n\n\nCONCLUSIONS\nThese results provide another support to neural control of motor prosthesis based on EEG signals and proper decoding methods.",
"title": ""
},
{
"docid": "1fb87bc370023dc3fdfd9c9097288e71",
"text": "Protein is essential for living organisms, but digestibility of crude protein is poorly understood and difficult to predict. Nitrogen is used to estimate protein content because nitrogen is a component of the amino acids that comprise protein, but a substantial portion of the nitrogen in plants may be bound to fiber in an indigestible form. To estimate the amount of crude protein that is unavailable in the diets of mountain gorillas (Gorilla beringei) in Bwindi Impenetrable National Park, Uganda, foods routinely eaten were analyzed to determine the amount of nitrogen bound to the acid-detergent fiber residue. The amount of fiber-bound nitrogen varied among plant parts: herbaceous leaves 14.5+/-8.9% (reported as a percentage of crude protein on a dry matter (DM) basis), tree leaves (16.1+/-6.7% DM), pith/herbaceous peel (26.2+/-8.9% DM), fruit (34.7+/-17.8% DM), bark (43.8+/-15.6% DM), and decaying wood (85.2+/-14.6% DM). When crude protein and available protein intake of adult gorillas was estimated over a year, 15.1% of the dietary crude protein was indigestible. These results indicate that the proportion of fiber-bound protein in primate diets should be considered when estimating protein intake, food selection, and food/habitat quality.",
"title": ""
},
{
"docid": "60e56a59ecbdee87005407ed6a117240",
"text": "The visionary Steve Jobs said, “A lot of times, people don’t know what they want until you show it to them.” A powerful recommender system not only shows people similar items, but also helps them discover what they might like, and items that complement what they already purchased. In this paper, we attempt to instill a sense of “intention” and “style” into our recommender system, i.e., we aim to recommend items that are visually complementary with those already consumed. By identifying items that are visually coherent with a query item/image, our method facilitates exploration of the long tail items, whose existence users may be even unaware of. This task is formulated only recently by Julian et al. [1], with the input being millions of item pairs that are frequently viewed/bought together, entailing noisy style coherence. In the same work, the authors proposed a Mahalanobisbased transform to discriminate a given pair to be sharing a same style or not. Despite its success, we experimentally found that it’s only able to recommend items on the margin of different clusters, which leads to limited coverage of the items to be recommended. Another limitation is it totally ignores the existence of taxonomy information that is ubiquitous in many datasets like Amazon the authors experimented with. In this report, we propose two novel methods that make use of the hierarchical category metadata to overcome the limitations identified above. The main contributions are listed as following.",
"title": ""
},
{
"docid": "0c420c064519e15e071660c750c0b7e3",
"text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.",
"title": ""
},
{
"docid": "4ca7e1893c0ab71d46af4954f7daf58e",
"text": "Identifying coordinate transformations that make strongly nonlinear dynamics approximately linear has the potential to enable nonlinear prediction, estimation, and control using linear theory. The Koopman operator is a leading data-driven embedding, and its eigenfunctions provide intrinsic coordinates that globally linearize the dynamics. However, identifying and representing these eigenfunctions has proven challenging. This work leverages deep learning to discover representations of Koopman eigenfunctions from data. Our network is parsimonious and interpretable by construction, embedding the dynamics on a low-dimensional manifold. We identify nonlinear coordinates on which the dynamics are globally linear using a modified auto-encoder. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra. Our framework parametrizes the continuous frequency using an auxiliary network, enabling a compact and efficient embedding, while connecting our models to decades of asymptotics. Thus, we benefit from the power of deep learning, while retaining the physical interpretability of Koopman embeddings. It is often advantageous to transform a strongly nonlinear system into a linear one in order to simplify its analysis for prediction and control. Here the authors combine dynamical systems with deep learning to identify these hard-to-find transformations.",
"title": ""
},
{
"docid": "eeff1f2e12e5fc5403be8c2d7ca4d10c",
"text": "Optical Character Recognition (OCR) systems have been effectively developed for the recognition of printed script. The accuracy of OCR system mainly depends on the text preprocessing and segmentation algorithm being used. When the document is scanned it can be placed in any arbitrary angle which would appear on the computer monitor at the same angle. This paper addresses the algorithm for correction of skew angle generated in scanning of the text document and a novel profile based method for segmentation of printed text which separates the text in document image into lines, words and characters. Keywords—Skew correction, Segmentation, Text preprocessing, Horizontal Profile, Vertical Profile.",
"title": ""
},
{
"docid": "ce8914e02eeed8fb228b5b2950cf87de",
"text": "Different alternatives to detect and diagnose faults in induction machines have been proposed and implemented in the last years. The technology of artificial neural networks has been successfully used to solve the motor incipient fault detection problem. The characteristics, obtained by this technique, distinguish them from the traditional ones, which, in most cases, need that the machine which is being analyzed is not working to do the diagnosis. This paper reviews an artificial neural network (ANN) based technique to identify rotor faults in a three-phase induction motor. The main types of faults considered are broken bar and dynamic eccentricity. At light load, it is difficult to distinguish between healthy and faulty rotors because the characteristic broken rotor bar fault frequencies are very close to the fundamental component and their amplitudes are small in comparison. As a result, detection of the fault and classification of the fault severity under light load is almost impossible. In order to overcome this problem, the detection of rotor faults in induction machines is done by analysing the starting current using a newly developed quantification technique based on artificial neural networks.",
"title": ""
},
{
"docid": "33b4ba89053ed849d23758f6e3b06b09",
"text": "We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.",
"title": ""
},
{
"docid": "2aae53713324b297f0e145ef8d808ce9",
"text": "In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on Theory of Computation, 1993, pp. 11–20, SIAM J. Comput., 26 (1997), pp. 1411–1473]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in P#P and PSPACE. A potentially practical issue of designing “machine independent” quantum programs is also addressed. A single (“almost universal”) quantum algorithm based on Shor’s method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to the programmer and the device builder.",
"title": ""
},
{
"docid": "f617b8b5c2c5fc7829cbcd0b2e64ed2d",
"text": "This paper proposes a novel lifelong learning (LL) approach to sentiment classification. LL mimics the human continuous learning process, i.e., retaining the knowledge learned from past tasks and use it to help future learning. In this paper, we first discuss LL in general and then LL for sentiment classification in particular. The proposed LL approach adopts a Bayesian optimization framework based on stochastic gradient descent. Our experimental results show that the proposed method outperforms baseline methods significantly, which demonstrates that lifelong learning is a promising research direction.",
"title": ""
},
{
"docid": "925709dfe0d0946ca06d05b290f2b9bd",
"text": "Mentalization, operationalized as reflective functioning (RF), can play a crucial role in the psychological mechanisms underlying personality functioning. This study aimed to: (a) study the association between RF, personality disorders (cluster level) and functioning; (b) investigate whether RF and personality functioning are influenced by (secure vs. insecure) attachment; and (c) explore the potential mediating effect of RF on the relationship between attachment and personality functioning. The Shedler-Westen Assessment Procedure (SWAP-200) was used to assess personality disorders and levels of psychological functioning in a clinical sample (N = 88). Attachment and RF were evaluated with the Adult Attachment Interview (AAI) and Reflective Functioning Scale (RFS). Findings showed that RF had significant negative associations with cluster A and B personality disorders, and a significant positive association with psychological functioning. Moreover, levels of RF and personality functioning were influenced by attachment patterns. Finally, RF completely mediated the relationship between (secure/insecure) attachment and adaptive psychological features, and thus accounted for differences in overall personality functioning. Lack of mentalization seemed strongly associated with vulnerabilities in personality functioning, especially in patients with cluster A and B personality disorders. These findings provide support for the development of therapeutic interventions to improve patients' RF.",
"title": ""
},
{
"docid": "9a1d6be6fbce508e887ee4e06a932cd2",
"text": "For ranked search in encrypted cloud data, order preserving encryption (OPE) is an efficient tool to encrypt relevance scores of the inverted index. When using deterministic OPE, the ciphertexts will reveal the distribution of relevance scores. Therefore, Wang et al. proposed a probabilistic OPE, called one-to-many OPE, for applications of searchable encryption, which can flatten the distribution of the plaintexts. In this paper, we proposed a differential attack on one-to-many OPE by exploiting the differences of the ordered ciphertexts. The experimental results show that the cloud server can get a good estimate of the distribution of relevance scores by a differential attack. Furthermore, when having some background information on the outsourced documents, the cloud server can accurately infer the encrypted keywords using the estimated distributions.",
"title": ""
},
{
"docid": "460e8daf5dfc9e45c3ade5860aa9cc57",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
] |
scidocsrr
|
34105146cfbde5353c1ec63e2112fcfb
|
Multi-Label Learning with Posterior Regularization
|
[
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "f59a7b518f5941cd42086dc2fe58fcea",
"text": "This paper contributes a novel algorithm for effective and computationally efficient multilabel classification in domains with large label sets L. The HOMER algorithm constructs a Hierarchy Of Multilabel classifiERs, each one dealing with a much smaller set of labels compared to L and a more balanced example distribution. This leads to improved predictive performance along with linear training and logarithmic testing complexities with respect to |L|. Label distribution from parent to children nodes is achieved via a new balanced clustering algorithm, called balanced k means.",
"title": ""
}
] |
[
{
"docid": "4d7c0222317fbd866113e1a244a342f3",
"text": "A simple method of \"tuning up\" a multiple-resonant-circuit filter quickly and exactly is demonstrated. The method may be summarized as follows: Very loosely couple a detector to the first resonator of the filter; then, proceeding in consecutive order, tune all odd-numbered resonators for maximum detector output, and all even-numbered resonators for minimum detector output (always making sure that the resonator immediately following the one to be resonated is completely detuned). Also considered is the correct adjustment of the two other types of constants in a filter. Filter constants can always be reduced to only three fundamental types: f0, dr(1/Qr), and Kr(r+1). This is true whether a lumped-element 100-kc filter or a distributed-element 5,000-mc unit is being considered. dr is adjusted by considering the rth resonator as a single-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the 3-db-down-points to the required value. Kr(r+1) is adjusted by considering the rth and (r+1)th adjacent resonators as a double-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the resulting response peaks to the required value. Finally, all the required values for K and Q are given for an n-resonant-circuit filter that will produce the response (Vp/V)2=1 +(Δf/Δf3db)2n.",
"title": ""
},
{
"docid": "36ed684e39877873407efb809f3cd1dc",
"text": "A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process.",
"title": ""
},
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "d395193924613f6818511650d24cf9ae",
"text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.",
"title": ""
},
{
"docid": "2e5981a41d13ee2d588ee0e9fe04e1ec",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
{
"docid": "a5ee673c895bac1a616bb51439461f5f",
"text": "OBJECTIVES\nTo summarise logistical aspects of recently completed systematic reviews that were registered in the International Prospective Register of Systematic Reviews (PROSPERO) registry to quantify the time and resources required to complete such projects.\n\n\nDESIGN\nMeta-analysis.\n\n\nDATA SOURCES AND STUDY SELECTION\nAll of the 195 registered and completed reviews (status from the PROSPERO registry) with associated publications at the time of our search (1 July 2014).\n\n\nDATA EXTRACTION\nAll authors extracted data using registry entries and publication information related to the data sources used, the number of initially retrieved citations, the final number of included studies, the time between registration date to publication date and number of authors involved for completion of each publication. Information related to funding and geographical location was also recorded when reported.\n\n\nRESULTS\nThe mean estimated time to complete the project and publish the review was 67.3 weeks (IQR=42). The number of studies found in the literature searches ranged from 27 to 92 020; the mean yield rate of included studies was 2.94% (IQR=2.5); and the mean number of authors per review was 5, SD=3. Funded reviews took significantly longer to complete and publish (mean=42 vs 26 weeks) and involved more authors and team members (mean=6.8 vs 4.8 people) than those that did not report funding (both p<0.001).\n\n\nCONCLUSIONS\nSystematic reviews presently take much time and require large amounts of human resources. In the light of the ever-increasing volume of published studies, application of existing computing and informatics technology should be applied to decrease this time and resource burden. We discuss recently published guidelines that provide a framework to make finding and accessing relevant literature less burdensome.",
"title": ""
},
{
"docid": "3613ae9cfcadee0053a270fe73c6e069",
"text": "Depth-map merging approaches have become more and more popular in multi-view stereo (MVS) because of their flexibility and superior performance. The quality of depth map used for merging is vital for accurate 3D reconstruction. While traditional depth map estimation has been performed in a discrete manner, we suggest the use of a continuous counterpart. In this paper, we first integrate silhouette information and epipolar constraint into the variational method for continuous depth map estimation. Then, several depth candidates are generated based on a multiple starting scales (MSS) framework. From these candidates, refined depth maps for each view are synthesized according to path-based NCC (normalized cross correlation) metric. Finally, the multiview depth maps are merged to produce 3D models. Our algorithm excels at detail capture and produces one of the most accurate results among the current algorithms for sparse MVS datasets according to the Middlebury benchmark. Additionally, our approach shows its outstanding robustness and accuracy in free-viewpoint video scenario.",
"title": ""
},
{
"docid": "eb9459d0eb18f0e49b3843a6036289f9",
"text": "Experimental research has had a long tradition in psychology and education. When psychology emerged as an infant science during the 1900s, it modeled its research methods on the established paradigms of the physical sciences, which for centuries relied on experimentation to derive principals and laws. Subsequent reliance on experimental approaches was strengthened by behavioral approaches to psychology and education that predominated during the first half of this century. Thus, usage of experimentation in educational technology over the past 40 years has been influenced by developments in theory and research practices within its parent disciplines. In this chapter, we examine practices, issues, and trends related to the application of experimental research methods in educational technology. The purpose is to provide readers with sufficient background to understand and evaluate experimental designs encountered in the literature and to identify designs that will effectively address questions of interest in their own research. In an introductory section, we define experimental research, differentiate it from alternative approaches, and identify important concepts in its use (e.g., internal vs. external validity). We also suggest procedures for conducting experimental studies and publishing them in educational technology research journals. Next, we analyze uses of experimental methods by instructional researchers, extending the analyses of three decades ago by Clark and Snow (1975). In the concluding section, we turn to issues in using experimental research in educational technology, to include balancing internal and external validity, using multiple outcome measures to assess learning processes and products, using item responses vs. aggregate scores as dependent variables, reporting effect size as a complement to statistical significance, and media replications vs. media comparisons.",
"title": ""
},
{
"docid": "d2c4693856ae88c3c49b5fc7c4a7baf7",
"text": "In Jesuit universities, laypersons, who come from the same or different faith backgrounds or traditions, are considered as collaborators in mission. The Jesuits themselves support the contributions of the lay partners in realizing the mission of the Society of Jesus and recognize the important role that they play in education. This study aims to investigate and generate particular notions and understandings of lived experiences of being a lay partner in Jesuit universities in the Philippines, particularly those involved in higher education. Using the qualitative approach as introduced by grounded theorist Barney Glaser, the lay partners’ concept of being a partner, as lived in higher education, is generated systematically from the data collected in the field primarily through in-depth interviews, field notes and observations. Glaser’s constant comparative method of analysis of data is used going through the phases of open coding, theoretical coding, and selective coding from memoing to theoretical sampling to sorting and then writing. In this study, Glaser’s grounded theory as a methodology will provide a substantial insight into and articulation of the layperson’s actual experience of being a partner of the Jesuits in education. Such articulation provides a phenomenological approach or framework to an understanding of the meaning and core characteristics of JesuitLay partnership in Jesuit educational institution of higher learning in the country. This study is expected to provide a framework or model for lay partnership in academic institutions that have the same practice of having lay partners in mission. Keywords—Grounded theory, Jesuit mission in higher education, lay partner, lived experience. I. BACKGROUND AND INTRODUCTION HE Second Vatican Council document of the Roman Catholic Church establishes and defines the vocation and mission of lay members of the Church. It says that regardless of status, “all laypersons are called and obliged to engage in the apostolate of being laborers in the vineyard of the Lord, the world, to serve the Kingdom of God” [1, par.16]. Christifideles Laici, a post-synodal apostolic exhortation of Pope John Paul II, renews and reaffirms this same apostolic role of lay people in the Catholic Church saying that “[t]he call is a concern not only of Pastors, clergy, and men and women religious. The call is addressed to everyone: lay people as well are personally called by the Lord, from whom they receive a mission on behalf of the Church and the world” [2, par.2]. Catholic universities, “being born from the heart of the Church” [2, p.1] follow the same orientation and mission in affirming the apostolic roles that lay men and women could exercise in sharing with the works of the church on deepening faith and spirituality [3, par.25]. Janet Badong-Badilla is with the De La Salle University, Philippines (email: janet.badilla@yahoo.com). In Jesuit Catholic universities, the laypersons’ sense of mission and passion is recognized. The Jesuits say that “the call they have received is a call shared by them all together, Jesuits and lay” [4, par. 3]. Lay-Jesuit collaboration is in fact among the 28 distinctive characteristics of Jesuit education (CJE) and a positive goal that a Jesuit school tries to achieve in response to the Second Vatican Council and to recent General Congregations of the Society of Jesus [5]. In the Philippines, there are five Jesuit and Catholic universities that operate under the charism and educational principles of St. 
Ignatius of Loyola, the founder of the Society of Jesus. In a Jesuit university, the work in education is linked with Ignatian spirituality that inspires it [6, par. 13]. In managing human resources in a Jesuit school, the CJE document says that as much as the administration is able, “people chosen to join the educational community will be men and women capable of understanding its distinctive nature and of contributing to the implementation of characteristics that result from the Ignatian vision” [6, par. 122]. Laypersons in Jesuit universities, then, are expected to be able to share and carry on the kind of education that is based on the Ignatian tradition and spirituality. Fr. Pedro Arrupe, S.J., the former superior general of the Society of Jesus, in his closing session to the committee working on the document on the Characteristics of Jesuit Education, said that a Jesuit school, “if it is an authentic Jesuit school,” should manifest “Ignacianidad”: “...if our operation of the school flows out of the strengths drawn from our own specific charisma, if we emphasize our essential characteristics and our basic options then the education which our students receive should give them a certain \"Ignacianidad” [5, par. 3]. For Arrupe, Ignacianidad or the spirituality inspired by St. Ignatius is “a logical consequence of the fact that Jesuit schools live and operate out of its own charism” [5, par. 3]. Not only do the Jesuits support the contributions of lay partners in realizing the Society’s mission, but more importantly, they also recognize the powerful role that the lay partners in higher education play in the growth and revitalization of the congregation itself in the present time [7]. In an article in Conversations on Jesuit Higher Education, Fr. Howell writes: In a span of 50 years the Society of Jesus has been refounded. It is thriving. But it is thriving in a totally new and creative way. Its commitment to scholarship, for instance, is one of the strongest it has ever been, but carried out primarily through lay colleagues within the Jesuit university setting. Being a Lay Partner in Jesuit Higher Education in the Philippines: A Grounded Theory Application Janet B. Badong-Badilla T World Academy of Science, Engineering and Technology International Journal of Educational and Pedagogical Sciences",
"title": ""
},
{
"docid": "e04dda55d05d15e6a2fb3680a603bd43",
"text": "Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X . However, this predictive distribution is assumed to be unimodal (e.g. Gaussian). For tasks involving structured prediction, the conditional distribution should be multi-modal, resulting in one-to-many mappings. By using stochastic hidden variables rather than deterministic ones, Sigmoid Belief Nets (SBNs) can induce a rich multimodal distribution in the output space. However, previously proposed learning algorithms for SBNs are not efficient and unsuitable for modeling real-valued data. In this paper, we propose a stochastic feedforward network with hidden layers composed of both deterministic and stochastic variables. A new Generalized EM training procedure using importance sampling allows us to efficiently learn complicated conditional distributions. Our model achieves superior performance on synthetic and facial expressions datasets compared to conditional Restricted Boltzmann Machines and Mixture Density Networks. In addition, the latent features of our model improves classification and can learn to generate colorful textures of objects.",
"title": ""
},
{
"docid": "8452091115566adaad8a67154128dff8",
"text": "© The Ecological Society of America www.frontiersinecology.org T Millennium Ecosystem Assessment (MA) advanced a powerful vision for the future (MA 2005), and now it is time to deliver. The vision of the MA – and of the prescient ecologists and economists whose work formed its foundation – is a world in which people and institutions appreciate natural systems as vital assets, recognize the central roles these assets play in supporting human well-being, and routinely incorporate their material and intangible values into decision making. This vision is now beginning to take hold, fueled by innovations from around the world – from pioneering local leaders to government bureaucracies, and from traditional cultures to major corporations (eg a new experimental wing of Goldman Sachs; Daily and Ellison 2002; Bhagwat and Rutte 2006; Kareiva and Marvier 2007; Ostrom et al. 2007; Goldman et al. 2008). China, for instance, is investing over 700 billion yuan (about US$102.6 billion) in ecosystem service payments, in the current decade (Liu et al. 2008). The goal of the Natural Capital Project – a partnership between Stanford University, The Nature Conservancy, and World Wildlife Fund (www.naturalcapitalproject.org) – is to help integrate ecosystem services into everyday decision making around the world. This requires turning the valuation of ecosystem services into effective policy and finance mechanisms – a problem that, as yet, no one has solved on a large scale. A key challenge remains: relative to other forms of capital, assets embodied in ecosystems are often poorly understood, rarely monitored, and are undergoing rapid degradation (Heal 2000a; MA 2005; Mäler et al. 2008). The importance of ecosystem services is often recognized only after they have been lost, as was the case following Hurricane Katrina (Chambers et al. 2007). Natural capital, and the ecosystem services that flow from it, are usually undervalued – by governments, businesses, and the public – if indeed they are considered at all (Daily et al. 2000; Balmford et al. 2002; NRC 2005). Two fundamental changes need to occur in order to replicate, scale up, and sustain the pioneering efforts that are currently underway, to give ecosystem services weight in decision making. First, the science of ecosystem services needs to advance rapidly. In promising a return (of services) on investments in nature, the scientific community needs to deliver the knowledge and tools necessary to forecast and quantify this return. To help address this challenge, the Natural Capital Project has developed InVEST (a system for Integrated Valuation of Ecosystem ECOSYSTEM SERVICES ECOSYSTEM SERVICES ECOSYSTEM SERVICES",
"title": ""
},
{
"docid": "bcfc8566cf73ec7c002dcca671e3a0bd",
"text": "of the thoracic spine revealed a 1.1 cm intradural extramedullary mass at the level of the T2 vertebral body (Figure 1a). Spinal neurosurgery was planned due to exacerbation of her chronic back pain and progressive weakness of the lower limbs at 28 weeks ’ gestation. Emergent spinal decompression surgery was performed with gross total excision of the tumour. Doppler fl ow of the umbilical artery was used preoperatively and postoperatively to monitor fetal wellbeing. Th e histological examination revealed HPC, World Health Organization (WHO) grade 2 (Figure 1b). Complete recovery was seen within 1 week of surgery. Follow-up MRI demonstrated complete removal of the tumour. We recommended adjuvant external radiotherapy to the patient in the 3rd trimester of pregnancy due to HPC ’ s high risk of recurrence. However, the patient declined radiotherapy. Routine weekly obstetric assessments were performed following surgery. At the 37th gestational week, a 2,850 g, Apgar score 7 – 8, healthy infant was delivered by caesarean section, without need of admission to the neonatal intensive care unit. Adjuvant radiotherapy was administered to the patient in the postpartum period.",
"title": ""
},
{
"docid": "cd67a650969aa547cad8e825511c45c2",
"text": "We present DAPIP, a Programming-By-Example system that learns to program with APIs to perform data transformation tasks. We design a domainspecific language (DSL) that allows for arbitrary concatenations of API outputs and constant strings. The DSL consists of three family of APIs: regular expression-based APIs, lookup APIs, and transformation APIs. We then present a novel neural synthesis algorithm to search for programs in the DSL that are consistent with a given set of examples. The search algorithm uses recently introduced neural architectures to encode input-output examples and to model the program search in the DSL. We show that synthesis algorithm outperforms baseline methods for synthesizing programs on both synthetic and real-world benchmarks.",
"title": ""
},
{
"docid": "c0e4aa45a961aa69bc5c52e7cf7c889d",
"text": "CRM gains increasing importance due to intensive competition and saturated markets. With the purpose of retaining customers, academics as well as practitioners find it crucial to build a churn prediction model that is as accurate as possible. This study applies support vector machines in a newspaper subscription context in order to construct a churn model with a higher predictive performance. Moreover, a comparison is made between two parameter-selection techniques, needed to implement support vector machines. Both techniques are based on grid search and cross-validation. Afterwards, the predictive performance of both kinds of support vector machine models is benchmarked to logistic regression and random forests. Our study shows that support vector machines show good generalization performance when applied to noisy marketing data. Nevertheless, the parameter optimization procedure plays an important role in the predictive performance. We show that only when the optimal parameter selection procedure is applied, support vector machines outperform traditional logistic regression, whereas random forests outperform both kinds of support vector machines. As a substantive contribution, an overview of the most important churn drivers is given. Unlike ample research, monetary value and frequency do not play an important role in explaining churn in this subscription-services application. Even though most important churn predictors belong to the category of variables describing the subscription, the influence of several client/company-interaction variables can not be neglected.",
"title": ""
},
{
"docid": "864c2987092ca266b97ed11faec42aa3",
"text": "BACKGROUND\nAnxiety is the most common emotional response in women during delivery, which can be accompanied with adverse effects on fetus and mother.\n\n\nOBJECTIVES\nThis study was conducted to compare the effects of aromatherapy with rose oil and warm foot bath on anxiety in the active phase of labor in nulliparous women in Tehran, Iran.\n\n\nPATIENTS AND METHODS\nThis clinical trial study was performed after obtaining informed written consent on 120 primigravida women randomly assigned into three groups. The experimental group 1 received a 10-minute inhalation and footbath with oil rose. The experimental group 2 received a 10-minute warm water footbath. Both interventions were applied at the onset of active and transitional phases. Control group, received routine care in labor. Anxiety was assessed using visual analogous scale (VASA) at onset of active and transitional phases before and after the intervention. Statistical comparison was performed using SPSS software version 16 and P < 0.05 was considered significant.\n\n\nRESULTS\nAnxiety scores in the intervention groups in active phase after intervention were significantly lower than the control group (P < 0.001). Anxiety scores before and after intervention in intervention groups in transitional phase was significantly lower than the control group (P < 0.001).\n\n\nCONCLUSIONS\nUsing aromatherapy and footbath reduces anxiety in active phase in nulliparous women.",
"title": ""
},
{
"docid": "6a763e49cdfd41b28922eb536d9404ed",
"text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"title": ""
},
{
"docid": "785ca963ea1f9715cdea9baede4c6081",
"text": "In this paper, factor analysis is applied on a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors, each was associated with a list of practices. These factors with the associated practices can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated, and the significant correlation findings suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, that communication with the customer was not very popular as it had negative correlations with governance and iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly success rate related negatively with traditional analysis methods such as Gantt chart and detailed requirements specification.",
"title": ""
},
{
"docid": "555f06011d03cbe8dedb2fcd198540e9",
"text": "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve highquality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"title": ""
},
{
"docid": "ba89a62ac2d1b36738e521d4c5664de2",
"text": "Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems are experiencing an explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received a significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.",
"title": ""
},
{
"docid": "c460660e6ea1cc38f4864fe4696d3a07",
"text": "Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.",
"title": ""
}
] |
scidocsrr
|
0b590d5f3bc41286db3de0ab3bf48308
|
Neural Models for Key Phrase Extraction and Question Generation
|
[
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
}
] |
[
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
{
"docid": "5cd3abebf4d990bb9196b7019b29c568",
"text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.",
"title": ""
},
{
"docid": "3f96a3cd2e3f795072567a3f3c8ccc46",
"text": "Good corporate reputations are critical because of their potential for value creation, but also because their intangible character makes replication by competing firms considerably more difficult. Existing empirical research confirms that there is a positive relationship between reputation and financial performance. This paper complements these findings by showing that firms with relatively good reputations are better able to sustain superior profit outcomes over time. In particular, we undertake an analysis of the relationship between corporate reputation and the dynamics of financial performance using two complementary dynamic models. We also decompose overall reputation into a component that is predicted by previous financial performance, and that which is ‘left over’, and find that each (orthogonal) element supports the persistence of above-average profits over time. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "e9b5dc63f981cc101521d8bbda1847d5",
"text": "The unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings FAB : A → B and FBA : B → A is commonly used by the state-of-the-art methods, like CycleGAN (Zhu et al., 2017), to learn this translation by introducing cycle consistency requirement to the learning problem, i.e. FAB(FBA(B)) ≈ B and FBA(FAB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce FBA to be an inverse operation to FAB. We propose a new deep architecture that we call invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing an encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to the reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark data sets and demonstrate state-of-the art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-toend learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.",
"title": ""
},
{
"docid": "288845120cdf96a20850b3806be3d89a",
"text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.",
"title": ""
},
{
"docid": "46ac5e994ca0bf0c3ea5dd110810b682",
"text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and easy their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides a RDF serialization of Points Of Interest from Open Street Map [103]. Besides such Voluntary Geographic Information (VGI), governments 1570-0844/12/$27.50 c © 2012 – IOS Press and the authors. All rights reserved",
"title": ""
},
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
},
{
"docid": "37572963400c8a78cef3cd4a565b328e",
"text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.",
"title": ""
},
{
"docid": "9d37260c493c40523c268f6e54c8b4ea",
"text": "Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea.",
"title": ""
},
{
"docid": "6604a90f21796895300d37cefed5b6fa",
"text": "Distributed power system network is going to be complex, and it will require high-speed, reliable and secure communication systems for managing intermittent generation with coordination of centralised power generation, including load control. Cognitive Radio (CR) is highly favourable for providing communications in Smart Grid by using spectrum resources opportunistically. The IEEE 802.22 Wireless Regional Area Network (WRAN) having the capabilities of CR use vacant channels opportunistically in the frequency range of 54 MHz to 862 MHz occupied by TV band. A comprehensive review of using IEEE 802.22 for Field Area Network in power system network using spectrum sensing (CR based communication) is provided in this paper. The spectrum sensing technique(s) at Base Station (BS) and Customer Premises Equipment (CPE) for detecting the presence of incumbent in order to mitigate interferences is also studied. The availability of backup and candidate channels are updated during “Quite Period” for further use (spectrum switching and management) with geolocation capabilities. The use of IEEE 802.22 for (a) radio-scene analysis, (b) channel identification, and (c) dynamic spectrum management are examined for applications in power management.",
"title": ""
},
{
"docid": "e8403145a3d4a8a75348075410683e28",
"text": "This paper presents a current-reuse complementary-input (CRCI) telescopic-cascode chopper stabilized amplifier with low-noise low-power operation. The current-reuse complementary-input strategy doubles the amplifier's effective transconductance by full current-reuse between complementary inputs, which significantly improves the noise-power efficiency. A pseudo-resistor based integrator is used in the DC servo loop to generate a high-pass cutoff below 1 Hz. The proposed amplifier features a mid-band gain of 39.25 dB, bandwidth from 0.12 Hz to 7.6 kHz, and draws 2.57 μA from a 1.2-V supply and exhibits an input-referred noise of 3.57 μVrms integrated from 100 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 2.5. The amplifier is designed in 0.13 μm 8-metal CMOS process.",
"title": ""
},
{
"docid": "6c92652aa5bab1b25910d16cca697d48",
"text": "Intrusion detection has attracted a considerable interest from researchers and industries. The community, after many years of research, still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data, with changing patterns in real time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection which influences the effectiveness of machine learning (ML) IDS is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems. Keywords— Shallow network, Deep networks, Intrusion detection, False positive alarm rates and True positive alarm rates 1.0 INTRODUCTION Computer networks have developed rapidly over the years contributing significantly to social and economic development. International trade, healthcare systems and military capabilities are examples of human activity that increasingly rely on networks. This has led to an increasing interest in the security of networks by industry and researchers. The importance of Intrusion Detection Systems (IDS) is critical as networks can become vulnerable to attacks from both internal and external intruders [1], [2]. An IDS is a detection system put in place to monitor computer networks. These have been in use since the 1980’s [3]. By analysing patterns of captured data from a network, IDS help to detect threats [4]. These threats can be devastating, for example, Denial of service (DoS) denies or prevents legitimate users resource on a network by introducing unwanted traffic [5]. Malware is another example, where attackers use malicious software to disrupt systems [6].",
"title": ""
},
{
"docid": "27401a6fe6a1edb5ba116db4bbdc7bcc",
"text": "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multiview RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th-place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu/",
"title": ""
},
{
"docid": "8e64738b0d21db1ec5ef0220507f3130",
"text": "Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval(CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"title": ""
},
{
"docid": "e82e44e851486b557948a63366486fef",
"text": "v Combinatorial and algorithmic aspects of identifying codes in graphs Abstract: An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we rst study extremal questions by giving a complete characterization of all nite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize nite directed graphs, in nite undirected graphs and in nite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The rst one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Cherno bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in speci c graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong in uence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more di cult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs. 
"title": ""
},
{
"docid": "bef317c450503a7f2c2147168b3dd51e",
"text": "With the development of the Internet of Things (IoT) and the usage of low-powered devices (sensors and effectors), a large number of people are using IoT systems in their homes and businesses to have more control over their technology. However, a key challenge of IoT systems is data protection in case the IoT device is lost, stolen, or used by one of the owner's friends or family members. The problem studied here is how to protect the access to data of an IoT system. To solve the problem, an attribute-based access control (ABAC) mechanism is applied to give the system the ability to apply policies to detect any unauthorized entry. Finally, a prototype was built to test the proposed solution. The evaluation plan was applied on the proposed solution to test the performance of the system.",
"title": ""
},
{
"docid": "3d2e82a0353d0b2803a579c413403338",
"text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experimentbased approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 1 of 30 information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. 
They argue that when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information has resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al.
(2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Human Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of calorie-dense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi",
"title": ""
},
{
"docid": "c3e2ceebd3868dd9fff2a87fdd339dce",
"text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing ar on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through ar interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an ar setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how ar can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to authoring interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
},
{
"docid": "20e5855c2bab00b7f91cca5d7bd07245",
"text": "The increase in the number and complexity of biological databases has raised the need for modern and powerful data analysis tools and techniques. In order to fulfill these requirements, the machine learning discipline has become an everyday tool in bio-laboratories. The use of machine learning techniques has been extended to a wide spectrum of bioinformatics applications. It is broadly used to investigate the underlying mechanisms and interactions between biological molecules in many diseases, and it is an essential tool in any biomarker discovery process. In this chapter, we provide a basic taxonomy of machine learning algorithms, and the characteristics of main data preprocessing, supervised classification, and clustering techniques are shown. Feature selection, classifier evaluation, and two supervised classification topics that have a deep impact on current bioinformatics are presented. We make the interested reader aware of a set of popular web resources, open source software tools, and benchmarking data repositories that are frequently used by the machine",
"title": ""
}
] |
scidocsrr
|
c1c84ea618835e7592aedf1fdf0bb1c2
|
Improving the Reproducibility of PAN's Shared Tasks: - Plagiarism Detection, Author Identification, and Author Profiling
|
[
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "515e4ae8fabe93495d8072fe984d8bb6",
"text": "Most studies in statistical or machine learning based authorship attribution focus on two or a few authors. This leads to an overestimation of the importance of the features extracted from the training data and found to be discriminating for these small sets of authors. Most studies also use sizes of training data that are unrealistic for situations in which stylometry is applied (e.g., forensics), and thereby overestimate the accuracy of their approach in these situations. A more realistic interpretation of the task is as an authorship verification problem that we approximate by pooling data from many different authors as negative examples. In this paper, we show, on the basis of a new corpus with 145 authors, what the effect is of many authors on feature selection and learning, and show robustness of a memory-based learning approach in doing authorship attribution and verification with many authors and limited training data when compared to eager learning methods such as SVMs and maximum entropy learning.",
"title": ""
}
] |
[
{
"docid": "503277b20b3fd087df5c91c1a7c7a173",
"text": "Among vertebrates, only microchiropteran bats, cetaceans and some rodents are known to produce and detect ultrasounds (frequencies greater than 20 kHz) for the purpose of communication and/or echolocation, suggesting that this capacity might be restricted to mammals. Amphibians, reptiles and most birds generally have limited hearing capacity, with the ability to detect and produce sounds below ∼12 kHz. Here we report evidence of ultrasonic communication in an amphibian, the concave-eared torrent frog (Amolops tormotus) from Huangshan Hot Springs, China. Males of A. tormotus produce diverse bird-like melodic calls with pronounced frequency modulations that often contain spectral energy in the ultrasonic range. To determine whether A. tormotus communicates using ultrasound to avoid masking by the wideband background noise of local fast-flowing streams, or whether the ultrasound is simply a by-product of the sound-production mechanism, we conducted acoustic playback experiments in the frogs' natural habitat. We found that the audible as well as the ultrasonic components of an A. tormotus call can evoke male vocal responses. Electrophysiological recordings from the auditory midbrain confirmed the ultrasonic hearing capacity of these frogs and that of a sympatric species facing similar environmental constraints. This extraordinary upward extension into the ultrasonic range of both the harmonic content of the advertisement calls and the frog's hearing sensitivity is likely to have co-evolved in response to the intense, predominantly low-frequency ambient noise from local streams. Because amphibians are a distinct evolutionary lineage from microchiropterans and cetaceans (which have evolved ultrasonic hearing to minimize congestion in the frequency bands used for sound communication and to increase hunting efficacy in darkness), ultrasonic perception in these animals represents a new example of independent evolution.",
"title": ""
},
{
"docid": "7458ca6334cf5f02c6a30466cd8de2ce",
"text": "BACKGROUND\nFecal incontinence (FI) in children is frequently encountered in pediatric practice, and often occurs in combination with urinary incontinence. In most cases, FI is constipation-associated, but in 20% of children presenting with FI, no constipation or other underlying cause can be found - these children suffer from functional nonretentive fecal incontinence (FNRFI).\n\n\nOBJECTIVE\nTo summarize the evidence-based recommendations of the International Children's Continence Society for the evaluation and management of children with FNRFI.\n\n\nRECOMMENDATIONS\nFunctional nonretentive fecal incontinence is a clinical diagnosis based on medical history and physical examination. Except for determining colonic transit time, additional investigations are seldom indicated in the workup of FNRFI. Treatment should consist of education, a nonaccusatory approach, and a toileting program encompassing a daily bowel diary and a reward system. Special attention should be paid to psychosocial or behavioral problems, since these frequently occur in affected children. Functional nonretentive fecal incontinence is often difficult to treat, requiring prolonged therapies with incremental improvement on treatment and frequent relapses.",
"title": ""
},
{
"docid": "7087355045b28921ebc63296780415d9",
"text": "The Indian regional navigational satellite system (IRNSS) developed by the Indian Space Research Organization (ISRO) is an autonomous regional satellite navigation system which is under the complete control of Government of India. The requirement of indigenous regional navigational satellite system is driven by the fact that access to Global Navigation Satellite System, like GPS is not guaranteed in hostile situations. Design of IRNSS antenna at user segment is mandatory for Indian region. The IRNSS satellites will be placed at a higher geostationary orbit to have a larger signal footprint and minimum satellites for regional mapping. IRNSS signals will consist of a Special Positioning Service and a Precision Service. Both will be carried on L5 band (1176.45 MHz) and S band (2492.08 MHz). As it is be long range communication system needs high frequency signals and high gain receiving antennas. So, different antennas can be designed to enhance the gain and directivity. Based on this the rectangular Microstrip patch antenna, planar array of patch antennas and planar, wideband feed slot spiral antenna are designed by using various software simulations. Use of array of spiral antennas will increase the gain position. Spiral antennas are comparatively small size and these antennas with its windings making it an extremely small structure. The performance of the designed antennas was compared in terms of return loss, bandwidth, directivity, radiation pattern and gain. In this paper, Review results of all antennas designed for IRNSS have presented.",
"title": ""
},
{
"docid": "f6d87c501bae68fe1b788e5b01bd17cc",
"text": "The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical non-linear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this lowrank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorable with the state-of-the-art, while outperforming most existing solvers.",
"title": ""
},
{
"docid": "f5360ff8d8cc5d0a852cebeb09a29a98",
"text": "In this paper, we propose a collaborative deep reinforcement learning (C-DRL) method for multi-object tracking. Most existing multiobject tracking methods employ the tracking-by-detection strategy which first detects objects in each frame and then associates them across different frames. However, the performance of these methods rely heavily on the detection results, which are usually unsatisfied in many real applications, especially in crowded scenes. To address this, we develop a deep prediction-decision network in our C-DRL, which simultaneously detects and predicts objects under a unified network via deep reinforcement learning. Specifically, we consider each object as an agent and track it via the prediction network, and seek the optimal tracked results by exploiting the collaborative interactions of different agents and environments via the decision network.Experimental results on the challenging MOT15 and MOT16 benchmarks are presented to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "a7e3338d682278643fdd7eefa795f3f3",
"text": "State of the art models using deep neural networks have become very good in learning an accurate mapping from inputs to outputs. However, they still lack generalization capabilities in conditions that differ from the ones encountered during training. This is even more challenging in specialized, and knowledge intensive domains, where training data is limited. To address this gap, we introduce MedNLI1 – a dataset annotated by doctors, performing a natural language inference task (NLI), grounded in the medical history of patients. We present strategies to: 1) leverage transfer learning using datasets from the open domain, (e.g. SNLI) and 2) incorporate domain knowledge from external data and lexical sources (e.g. medical terminologies). Our results demonstrate performance gains using both strategies.",
"title": ""
},
{
"docid": "e584e7e0c96bc78bc2b2166d1af272a6",
"text": "In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \"projective generative adversarial networks\" (PrGANs) trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views. The addition of a projection module allows us to infer the underlying 3D shape distribution without using any 3D, viewpoint information, or annotation during the learning phase. We show that our approach produces 3D shapes of comparable quality to GANs trained on 3D data for a number of shape categories including chairs, airplanes, and cars. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. The key advantage is that our model allows us to predict 3D, viewpoint, and generate novel views from an input image in a completely unsupervised manner.",
"title": ""
},
{
"docid": "fff6c1ca2fde7f50c3654f1953eb97e6",
"text": "This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.",
"title": ""
},
{
"docid": "1bc285b8bd63e701a55cf956179abbac",
"text": "A new anode/cathode design and process concept for thin wafer based silicon devices is proposed to achieve the goal of providing improved control for activating the injecting layer and forming a good ohmic contact. The concept is based on laser annealing in a melting regime of a p-type anode layer covered with a thin titanium layer with high melting temperature and high laser light absorption. The improved activation control of a boron anode layer is demonstrated on the Soft Punch Through IGBT with a nominal breakdown voltage of 1700 V. Furthermore, the silicidation of the titanium absorbing layer, which is necessary for achieving a low VCE ON, is discussed in terms of optimization of the device electrical parameters.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "c182be9222690ffe1c94729b2b79d8ed",
"text": "A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.",
"title": ""
},
{
"docid": "a01abbced99f14ae198c6abef6454126",
"text": "Coreference Resolution September 2014 Present Kevin Clark, Christopher Manning Stanford University Developed coreference systems that build up coreference chains with agglomerative clustering. These models are more accurate than the mention-pair systems commonly used in prior work. Developed neural coreference systems that do not require the large number of complex hand-engineered features commonly found in statistical coreference systems. Applied imitation and reinforcement learning to directly optimize coreference systems for evaluation metrics instead of relying on hand-tuned heuristic loss functions. Made substantial advancements to the current state-of-the-art for English and Chinese coreference. Publicly released all models through Stanford’s CoreNLP.",
"title": ""
},
{
"docid": "4ec91fd15f10c1c8616a890447c2b063",
"text": "Texture is an important visual clue for various classification and segmentation tasks in the scene understanding challenge. Today, successful deployment of deep learning algorithms for texture recognition leads to tremendous precisions on standard datasets. In this paper, we propose a new learning framework to train deep neural networks in parallel and with variable depth for texture recognition. Our framework learns scales, orientations and resolutions of texture filter banks. Due to the learning of parameters not the filters themselves, computational costs are highly reduced. It is also capable of extracting very deep features through distributed computing architectures. Our experiments on publicly available texture datasets show significant improvements in the recognition performance over other deep local descriptors in recently published benchmarks.",
"title": ""
},
{
"docid": "a79f9ad24c4f047d8ace297b681ccf0a",
"text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.",
"title": ""
},
{
"docid": "e6610d23c69a140fdf07d1ee2e58c8a1",
"text": "Purpose – The purpose of this paper is to contribute to the body of knowledge about to what extent integrated information systems, such as ERP and SEM systems, affect the ability to solve different management accounting tasks. Design/methodology/approach – The relationship between IIS and management accounting practices was investigated quantitatively. A total of 349 responses were collected using a survey, and the data were analysed using linear regression models. Findings – Analyses indicate that ERP systems support the data collection and the organisational breadth of management accounting better than SEM systems. SEM systems, on the other hand, seem to be better at supporting reporting and analysis. In addition, modern management accounting techniques involving the use of non-financial data are better supported by an SEM system. This indicates that different management accounting tasks are supported by different parts of the IIS. Research limitations/implications – The study applies the methods of quantitative research. Thus, the internal validity is threatened. Conducting in-depth studies might be able to reduce this possible shortcoming. Practical implications – On the basis of the findings, there is a need to consider the potential of closer integration of ERP and SEM systems in order to solve management accounting tasks. Originality/value – This paper adds to the limited body of knowledge about the relationship between IIS and management accounting practices.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
},
{
"docid": "375ab5445e81c7982802bdb8b9cbd717",
"text": "Advances in healthcare have led to longer life expectancy and an aging population. The cost of caring for the elderly is rising progressively and threatens the economic well-being of many nations around the world. Instead of professional nursing facilities, many elderly people prefer living independently in their own homes. To enable the aging to remain active, this research explores the roles of technology in improving their quality of life while reducing the cost of healthcare to the elderly population. In particular, we propose a multi-agent service framework, called Context-Aware Service Integration System (CASIS), to integrate applications and services. This paper demonstrates several context-aware service scenarios these have been developed on the proposed framework to demonstrate how context technologies and mobile web services can help enhance the quality of care for an elder’s daily",
"title": ""
},
{
"docid": "e9e620742992a6b6aa50e6e0e5894b6f",
"text": "A significant amount of information in today’s world is stored in structured and semistructured knowledge bases. Efficient and simple methods to query these databases are essential and must not be restricted to only those who have expertise in formal query languages. The field of semantic parsing deals with converting natural language utterances to logical forms that can be easily executed on a knowledge base. In this survey, we examine the various components of a semantic parsing system and discuss prominent work ranging from the initial rule based methods to the current neural approaches to program synthesis. We also discuss methods that operate using varying levels of supervision and highlight the key challenges involved in the learning of such systems.",
"title": ""
},
{
"docid": "0b973f37e2d9c3d7f427b939db233f12",
"text": "Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods have even exceeded human performance in visual tasks, particularly on playing games such as Atari, or mastering the game of Go. Even in the medical domain there are remarkable results. However, the central problem of such models is that they are regarded as black-box models and even if we understand the underlying mathematical principles of such models they lack an explicit declarative knowledge representation, hence have difficulty in generating the underlying explanatory structures. This calls for systems enabling to make decisions transparent, understandable and explainable. A huge motivation for our approach are rising legal and privacy aspects. The new European General Data Protection Regulation (GDPR and ISO/IEC 27001) entering into force on May 25th 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time, however, there must be a possibility to make the results re-traceable on demand. This is beneficial, e.g. for general understanding, for teaching, for learning, for research, and it can be helpful in court. In this paper we outline some of our research topics in the context of the relatively new area of explainable-AI with a focus on the application in medicine, which is a very special domain. This is due to the fact that medical professionals are working mostly with distributed heterogeneous and complex sources of data. In this paper we concentrate on three sources: images, *omics data and text. We argue that research in explainable-AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to facilitate transparency and trust.",
"title": ""
}
] |
scidocsrr
|
d211f8d25ed48575a3f39ca00c42ea4c
|
Managing Non-Volatile Memory in Database Systems
|
[
{
"docid": "149b1f7861d55e90b1f423ff98e765ca",
"text": "The advent of Storage Class Memory (SCM) is driving a rethink of storage systems towards a single-level architecture where memory and storage are merged. In this context, several works have investigated how to design persistent trees in SCM as a fundamental building block for these novel systems. However, these trees are significantly slower than DRAM-based counterparts since trees are latency-sensitive and SCM exhibits higher latencies than DRAM. In this paper we propose a novel hybrid SCM-DRAM persistent and concurrent B-Tree, named Fingerprinting Persistent Tree (FPTree) that achieves similar performance to DRAM-based counterparts. In this novel design, leaf nodes are persisted in SCM while inner nodes are placed in DRAM and rebuilt upon recovery. The FPTree uses Fingerprinting, a technique that limits the expected number of in-leaf probed keys to one. In addition, we propose a hybrid concurrency scheme for the FPTree that is partially based on Hardware Transactional Memory. We conduct a thorough performance evaluation and show that the FPTree outperforms state-of-the-art persistent trees with different SCM latencies by up to a factor of 8.2. Moreover, we show that the FPTree scales very well on a machine with 88 logical cores. Finally, we integrate the evaluated trees in memcached and a prototype database. We show that the FPTree incurs an almost negligible performance overhead over using fully transient data structures, while significantly outperforming other persistent trees.",
"title": ""
}
] |
[
{
"docid": "20436a21b4105700d7e95a477a22d830",
"text": "We introduce a new type of Augmented Reality games: By using a simple webcam and Computer Vision techniques, we turn a standard real game board pawns into an AR game. We use these objects as a tangible interface, and augment them with visual effects. The game logic can be performed automatically by the computer. This results in a better immersion compared to the original board game alone and provides a different experience than a video game. We demonstrate our approach on Monopoly− [1], but it is very generic and could easily be adapted to any other board game.",
"title": ""
},
{
"docid": "467bb4ffb877b4e21ad4f7fc7adbd4a6",
"text": "In this paper, a 6 × 6 planar slot array based on a hollow substrate integrated waveguide (HSIW) is presented. To eliminate the tilting of the main beam, the slot array is fed from the centre at the back of the HSIW, which results in a blockage area. To reduce the impact on sidelobe levels, a slot extrusion technique is introduced. A simplified multiway power divider is demonstrated to feed the array elements and the optimisation procedure is described. To verify the antenna design, a 6 × 6 planar array is fabricated and measured in a low temperature co-fired ceramic (LTCC) technology. The HSIW has lower loss, comparable to standard WR28, and a high gain of 17.1 dBi at 35.5 GHz has been achieved in the HSIW slot array.",
"title": ""
},
{
"docid": "572453e5febc5d45be984d7adb5436c5",
"text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.",
"title": ""
},
{
"docid": "539fb99a52838d6ce6f980b9b9703a2b",
"text": "The Blinder-Oaxaca decomposition technique is widely used to identify and quantify the separate contributions of differences in measurable characteristics to group differences in an outcome of interest. The use of a linear probability model and the standard BlinderOaxaca decomposition, however, can provide misleading estimates when the dependent variable is binary, especially when group differences are very large for an influential explanatory variable. A simulation method of performing a nonlinear decomposition that uses estimates from a logit, probit or other nonlinear model was first developed in a Journal of Labor Economics article (Fairlie 1999). This nonlinear decomposition technique has been used in nearly a thousand subsequent studies published in a wide range of fields and disciplines. In this paper, I address concerns over path dependence in using the nonlinear decomposition technique. I also present a straightforward method of incorporating sample weights in the technique. I thank Eric Aldrich and Ben Jann for comments and suggestions, and Brandon Heck for research assistance.",
"title": ""
},
{
"docid": "590e0965ca61223d5fefb82e89f24fd0",
"text": "Large software projects contain significant code duplication, mainly due to copying and pasting code. Many techniques have been developed to identify duplicated code to enable applications such as refactoring, detecting bugs, and protecting intellectual property. Because source code is often unavailable, especially for third-party software, finding duplicated code in binaries becomes particularly important. However, existing techniques operate primarily on source code, and no effective tool exists for binaries.\n In this paper, we describe the first practical clone detection algorithm for binary executables. Our algorithm extends an existing tree similarity framework based on clustering of characteristic vectors of labeled trees with novel techniques to normalize assembly instructions and to accurately and compactly model their structural information. We have implemented our technique and evaluated it on Windows XP system binaries totaling over 50 million assembly instructions. Results show that it is both scalable and precise: it analyzed Windows XP system binaries in a few hours and produced few false positives. We believe our technique is a practical, enabling technology for many applications dealing with binary code.",
"title": ""
},
{
"docid": "a4a15096e116a6afc2730d1693b1c34f",
"text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.",
"title": ""
},
{
"docid": "82234158dc94216222efa5f80eee0360",
"text": "We investigate the possibility to prove security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our technique yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of the blind signatures (unforgeability and blindness).",
"title": ""
},
{
"docid": "d0985c38f3441ca0d69af8afaf67c998",
"text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.",
"title": ""
},
{
"docid": "96c1f90ff04e7fd37d8b8a16bc4b9c54",
"text": "Graph triangulation, which finds all triangles in a graph, has been actively studied due to its wide range of applications in the network analysis and data mining. With the rapid growth of graph data size, disk-based triangulation methods are in demand but little researched. To handle a large-scale graph which does not fit in memory, we must iteratively load small parts of the graph. In the existing literature, achieving the ideal cost has been considered to be impossible for billion-scale graphs due to the memory size constraint. In this paper, we propose an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O. In OPT, triangles in memory are called the internal triangles while triangles constituting vertices in memory and vertices in external memory are called the external triangles. At the macro level, OPT overlaps the internal triangulation and the external triangulation, while it overlaps the CPU and I/O operations at the micro level. Thereby, the cost of OPT is close to the ideal cost. Moreover, OPT instantiates both vertex-iterator and edge-iterator models and benefits from multi-thread parallelism on both types of triangulation. Extensive experiments conducted on large-scale datasets showed that (1) OPT achieved the elapsed time close to that of the ideal method with less than 7% of overhead under the limited memory budget, (2) OPT achieved linear speed-up with an increasing number of CPU cores, (3) OPT outperforms the state-of-the-art parallel method by up to an order of magnitude with 6 CPU cores, and (4) for the first time in the literature, the triangulation results are reported for a billion-vertex scale real-world graph.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
},
{
"docid": "1ee444fda98b312b0462786f5420f359",
"text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.",
"title": ""
},
{
"docid": "da9432171ceba5ae76fa76a8416b1a8f",
"text": "Social tagging on online portals has become a trend now. It has emerged as one of the best ways of associating metadata with web objects. With the increase in the kinds of web objects becoming available, collaborative tagging of such objects is also developing along new dimensions. This popularity has led to a vast literature on social tagging. In this survey paper, we would like to summarize different techniques employed to study various aspects of tagging. Broadly, we would discuss about properties of tag streams, tagging models, tag semantics, generating recommendations using tags, visualizations of tags, applications of tags and problems associated with tagging usage. We would discuss topics like why people tag, what influences the choice of tags, how to model the tagging process, kinds of tags, different power laws observed in tagging domain, how tags are created, how to choose the right tags for recommendation, etc. We conclude with thoughts on future work in the area.",
"title": ""
},
{
"docid": "318aa0dab44cca5919100033aa692cd9",
"text": "Text classification is one of the important research issues in the field of text mining, where the documents are classified with supervised knowledge. In literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents to the predefined categories. In this paper, we present various text representation schemes and compare different classifiers used to classify text documents to the predefined classes. The existing methods are compared and contrasted based on qualitative parameters viz., criteria used for classification, algorithms adopted and classification time complexities.",
"title": ""
},
{
"docid": "709853992cae8d5b5fa4c3cc86d898a7",
"text": "The rise of big data age in the Internet has led to the explosive growth of data size. However, trust issue has become the biggest problem of big data, leading to the difficulty in data safe circulation and industry development. The blockchain technology provides a new solution to this problem by combining non-tampering, traceable features with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contract to ensure the safe circulation of data resources.",
"title": ""
},
{
"docid": "c5f521d5e5e089261914f6784e2d77da",
"text": "Generating structured query language (SQL) from natural language is an emerging research topic. This paper presents a new learning paradigm from indirect supervision of the answers to natural language questions, instead of SQL queries. This paradigm facilitates the acquisition of training data due to the abundant resources of question-answer pairs for various domains in the Internet, and expels the difficult SQL annotation job. An endto-end neural model integrating with reinforcement learning is proposed to learn SQL generation policy within the answerdriven learning paradigm. The model is evaluated on datasets of different domains, including movie and academic publication. Experimental results show that our model outperforms the baseline models.",
"title": ""
},
{
"docid": "0ccfbd8f2b8979ec049d94fa6dddf614",
"text": "Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular projectbased instruction. No significant differences were found between the two groups with respect to motivation for History or the MiddleAges. The impact of location-based technology and gamebased learning on pupil knowledge and motivation are discussed along with suggestions for future research.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "50875a63d0f3e1796148d809b5673081",
"text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-ofthe-art and discuss how they relate to each other and how these advances will influence future work in this area.",
"title": ""
}
] |
scidocsrr
|
3387d0ddea6ff80834f71a31b8234ee0
|
The Scyther Tool: Verification, Falsification, and Analysis of Security Protocols
|
[
{
"docid": "7d634a9abe92990de8cb41a78c25d2cc",
"text": "We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of the state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols of the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 Mb of memory in our tests.",
"title": ""
}
] |
[
{
"docid": "8d0221daae5933760698b8f4f7943870",
"text": "We introduce a novel, online method to predict pedestrian trajectories using agent-based velocity-space reasoning for improved human-robot interaction and collision-free navigation. Our formulation uses velocity obstacles to model the trajectory of each moving pedestrian in a robot’s environment and improves the motion model by adaptively learning relevant parameters based on sensor data. The resulting motion model for each agent is computed using statistical inferencing techniques, including a combination of Ensemble Kalman filters and a maximum-likelihood estimation algorithm. This allows a robot to learn individual motion parameters for every agent in the scene at interactive rates. We highlight the performance of our motion prediction method in real-world crowded scenarios, compare its performance with prior techniques, and demonstrate the improved accuracy of the predicted trajectories. We also adapt our approach for collision-free robot navigation among pedestrians based on noisy data and highlight the results in our simulator.",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "605201e9b3401149da7e0e22fdbc908b",
"text": "Roadway traffic safety is a major concern for transportation governing agencies as well as ordinary citizens. In order to give safe driving suggestions, careful analysis of roadway traffic data is critical to find out variables that are closely related to fatal accidents. In this paper we apply statistics analysis and data mining algorithms on the FARS Fatal Accident dataset as an attempt to address this problem. The relationship between fatal rate and other attributes including collision manner, weather, surface condition, light condition, and drunk driver were investigated. Association rules were discovered by Apriori algorithm, classification model was built by Naive Bayes classifier, and clusters were formed by simple K-means clustering algorithm. Certain safety driving suggestions were made based on statistics, association rules, classification model, and clusters obtained.",
"title": ""
},
{
"docid": "9d82ce8e6630a9432054ed97752c7ec6",
"text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.",
"title": ""
},
{
"docid": "d639f6b922e24aca7229ce561e852b31",
"text": "As digital video becomes more pervasive, e cient ways of searching and annotating video according to content will be increasingly important. Such tasks arise, for example, in the management of digital video libraries for content-based retrieval and browsing. In this paper, we develop tools based on camera motion for analyzing and annotating a class of structured video using the low-level information available directly from MPEG compressed video. In particular, we show that in certain structured settings it is possible to obtain reliable estimates of camera motion by directly processing data easily obtained from the MPEG format. Working directly with the compressed video greatly reduces the processing time and enhances storage e ciency. As an illustration of this idea, we have developed a simple basketball annotation system which combines the low-level information extracted from an MPEG stream with the prior knowledge of basketball structure to provide high level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, probable shots at the basket, etc. The methods used in this example should also be useful in the analysis of high-level content of structured video in other domains.",
"title": ""
},
{
"docid": "60697a4e8dd7d13147482a0992ee1862",
"text": "Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We propose a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path. The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.\n To demonstrate the effectiveness of type refinement, we implement a static analysis tool for reporting potential type-errors in JavaScript programs. We provide an extensive empirical evaluation of type refinement using a benchmark suite containing a variety of JavaScript application domains, ranging from the standard performance benchmark suites (Sunspider and Octane), to open-source JavaScript applications, to machine-generated JavaScript via Emscripten. We show that type refinement can significantly improve analysis precision by up to 86% without affecting the performance of the analysis.",
"title": ""
},
{
"docid": "9489210bfc8884d8290f772996629095",
"text": "Semantic interaction techniques in visual data analytics allow users to indirectly adjust model parameters by directly manipulating the visual output of the models. Many existing tools that support semantic interaction do so with a number of similar features, including using an underlying bidirectional pipeline, using a series of statistical models, and performing inverse computations to transform user interactions into model updates. We propose a visual analytics pipeline that captures these necessary features of semantic interactions. Our flexible, multi-model, bidirectional pipeline has modular functionality to enable rapid prototyping. This enables quick alterations to the type of data being visualized, models for transforming the data, semantic interaction methods, and visual encodings. To demonstrate how this pipeline can be used, we developed a series of applications that employ semantic interactions. We also discuss how the pipeline can be used or extended for future research on semantic interactions in visual analytics.",
"title": ""
},
{
"docid": "ac86e950866646a0b86d76bb3c087d0a",
"text": "In this paper, an SVM-based approach is proposed for stock market trend prediction. The proposed approach consists of two parts: feature selection and prediction model. In the feature selection part, a correlation-based SVM filter is applied to rank and select a good subset of financial indexes. And the stock indicators are evaluated based on the ranking. In the prediction model part, a so called quasi-linear SVM is applied to predict stock market movement direction in term of historical data series by using the selected subset of financial indexes as the weighted inputs. The quasi-linear SVM is an SVM with a composite quasi-linear kernel function, which approximates a nonlinear separating boundary by multi-local linear classifiers with interpolation. Experimental results on Taiwan stock market datasets demonstrate that the proposed SVM-based stock market trend prediction method produces better generalization performance over the conventional methods in terms of the hit ratio. Moreover, the experimental results also show that the proposed SVM-based stock market trend prediction system can find out a good subset and evaluate stock indicators which provide useful information for investors.",
"title": ""
},
{
"docid": "22e559b9536b375ded6516ceb93652ef",
"text": "In this paper we explore the linguistic components of toxic behavior by using crowdsourced data from over 590 thousand cases of accused toxic players in a popular match-based competition game, League of Legends. We perform a series of linguistic analyses to gain a deeper understanding of the role communication plays in the expression of toxic behavior. We characterize linguistic behavior of toxic players and compare it with that of typical players in an online competition game. We also find empirical support describing how a player transitions from typical to toxic behavior. Our findings can be helpful to automatically detect and warn players who may become toxic and thus insulate potential victims from toxic playing in advance.",
"title": ""
},
{
"docid": "5679a329a132125d697369ca4d39b93e",
"text": "This paper proposes a method to explore the design space of FinFETs with double fin heights. Our study shows that if one fin height is sufficiently larger than the other and the greatest common divisor of their equivalent transistor widths is small, the fin height pair will incur less width quantization effect and lead to better area efficiency. We design a standard cell library based on this technology using a tailored FreePDK15. With respect to a standard cell library designed with FreePDK15, about 86% of the cells designed with FinFETs of double fin heights have a smaller delay and 54% of the cells take a smaller area. We also demonstrate the advantages of FinFETs with double fin heights through chip designs using our cell library.",
"title": ""
},
{
"docid": "dca6d14c168f0836411df562444e71c5",
"text": "Obesity is a growing global health concern, with a rapid increase being observed in morbid obesity. Obesity is associated with an increased cardiovascular risk and earlier onset of cardiovascular morbidity. The growing obesity epidemic is a major source of unsustainable health costs and morbidity and mortality because of hypertension, type 2 diabetes mellitus, dyslipidemia, certain cancers and major cardiovascular diseases. Similar to obesity, hypertension is a key unfavorable health metric that has disastrous health implications: currently, hypertension is the leading contributor to global disease burden, and the direct and indirect costs of treating hypertension are exponentially higher. Poor lifestyle characteristics and health metrics often cluster together to create complex and difficult-to-treat phenotypes: excess body mass is such an example, facilitating a cascade of pathophysiological sequelae that create such as a direct obesity–hypertension link, which consequently increases cardiovascular risk. Although some significant issues regarding assessment/management of obesity remain to be addressed and the underlying mechanisms governing these disparate effects of obesity on cardiovascular disease are complex and not completely understood, a variety of factors could have a critical role. Consequently, a comprehensive and exhaustive investigation of this relationship should analyze the pathogenetic factors and pathophysiological mechanisms linking obesity to hypertension as they provide the basis for a rational therapeutic strategy in the aim to fully describe and understand the obesity–hypertension link and discuss strategies to address the potential negative consequences from the perspective of both primordial prevention and treatment for those already impacted by this condition.",
"title": ""
},
{
"docid": "be76c7f877ad43668fe411741478c43b",
"text": "With the surging of smartphone sensing, wireless networking, and mobile social networking techniques, Mobile Crowd Sensing and Computing (MCSC) has become a promising paradigm for cross-space and large-scale sensing. MCSC extends the vision of participatory sensing by leveraging both participatory sensory data from mobile devices (offline) and user-contributed data from mobile social networking services (online). Further, it explores the complementary roles and presents the fusion/collaboration of machine and human intelligence in the crowd sensing and computing processes. This article characterizes the unique features and novel application areas of MCSC and proposes a reference framework for building human-in-the-loop MCSC systems. We further clarify the complementary nature of human and machine intelligence and envision the potential of deep-fused human--machine systems. We conclude by discussing the limitations, open issues, and research opportunities of MCSC.",
"title": ""
},
{
"docid": "bd5e127cc3454bbf8a89c3f7d66fd624",
"text": "Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management point, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. Building on our prior work on anomaly detection, we investigate how to improve the anomaly detection approach to provide more details on attack types and sources. For several well-known attacks, we can apply a simple rule to identify the attack type when an anomaly is reported. In some cases, these rules can also help identify the attackers. We address the run-time resource constraint problem using a cluster-based detection scheme where periodically a node is elected as the ID agent for a cluster. Compared with the scheme where each node is its own ID agent, this scheme is much more efficient while maintaining the same level of effectiveness. We have conducted extensive experiments using the ns-2 and MobiEmu environments to validate our research.",
"title": ""
},
{
"docid": "1e8acf321f7ff3a1a496e4820364e2a8",
"text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.",
"title": ""
},
{
"docid": "147b207125fcda1dece25a6c5cd17318",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "d2c0e71db2957621eca42bdc221ffb8f",
"text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is a highly challenging due to the extreme complexity within the sequences. Mostly existing models are failed to capture its intrinsic information, factor and tendency. To improve the previous approaches, in this paper, we propose a Hidden Markov Model (HMMs) based approach to analyze the financial time sequence. The fluctuation of financial time sequence was predicted through introducing a dual-state HMMs. Dual-state HMMs models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem to a classification problem. To evaluate the proposed approach, we use Shanghai Composite Index as the dataset for empirically experiments. The dataset was collected from 550 consecutive trading days, and is randomly split to the training set and test set. The extensively experimental results show that: when analyzing financial time sequence, the mean-square error calculated with HMMs was obviously smaller error than the compared GARCH approach. Therefore, when using HMM to predict the fluctuation of financial time sequence, it achieves higher accuracy and exhibits several attractive advantageous over GARCH approach.",
"title": ""
},
{
"docid": "1c0eaeea7e1bfc777bb6e391eb190b59",
"text": "We review machine learning (ML)-based optical performance monitoring (OPM) techniques in optical communications. Recent applications of ML-assisted OPM in different aspects of fiber-optic networking including cognitive fault detection and management, network equipment failure prediction, and dynamic planning and optimization of software-defined networks are also discussed.",
"title": ""
},
{
"docid": "6c62e51d723d523fa286e94d3037a76f",
"text": "Stochastic programming can effectively describe many deci sion making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the ran dom parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discr ete, Gaussian, exponential, etc.) and moments (mean and cov ariance matrix). We demonstrate that for a wide range of cost fun ctio s the associated distributionally robust (or min-max ) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilis tic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a pra ctical example of portfolio selection, where our framework leads to better performing policies on the “true” distribut on underlying the daily returns of financial assets.",
"title": ""
},
{
"docid": "2da214ec8cd7e2380c0ee17adc3ad9fb",
"text": "Machine intelligence is an important problem to be solved for artificial intelligence to be truly impactful in our lives. While many question answering models have been explored for existing machine comprehension datasets, there has been little work with the newly released MS Marco dataset, which poses many unique challenges. We explore an end-to-end neural architecture with attention mechanisms capable of comprehending relevant information and generating text answers for MS Marco.",
"title": ""
},
{
"docid": "10fd41c0ff246545ceab663b9d9b3853",
"text": "Because structural equation modeling (SEM) has become a very popular data-analytic technique, it is important for clinical scientists to have a balanced perception of its strengths and limitations. We review several strengths of SEM, with a particular focus on recent innovations (e.g., latent growth modeling, multilevel SEM models, and approaches for dealing with missing data and with violations of normality assumptions) that underscore how SEM has become a broad data-analytic framework with flexible and unique capabilities. We also consider several limitations of SEM and some misconceptions that it tends to elicit. Major themes emphasized are the problem of omitted variables, the importance of lower-order model components, potential limitations of models judged to be well fitting, the inaccuracy of some commonly used rules of thumb, and the importance of study design. Throughout, we offer recommendations for the conduct of SEM analyses and the reporting of results.",
"title": ""
}
] |
scidocsrr
|
16db2a19ce63b6b189aa6980cdbb1208
|
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
|
[
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
}
] |
[
{
"docid": "09beeeaf2d92087da10c5725bda10d2f",
"text": "We report a quantitative investigation of the visual identification and auditory comprehension deficits of 4 patients who had made a partial recovery from herpes simplex encephalitis. Clinical observations had suggested the selective impairment and selective preservation of certain categories of visual stimuli. In all 4 patients a significant discrepancy between their ability to identify inanimate objects and inability to identify living things and foods was demonstrated. In 2 patients it was possible to compare visual and verbal modalities and the same pattern of dissociation was observed in both. For 1 patient, comprehension of abstract words was significantly superior to comprehension of concrete words. Consistency of responses was recorded within a modality in contrast to a much lesser degree of consistency between modalities. We interpret our findings in terms of category specificity in the organization of meaning systems that are also modality specific semantic systems.",
"title": ""
},
{
"docid": "66fc8ff7073579314c50832a6f06c10d",
"text": "Endodontic management of the permanent immature tooth continues to be a challenge for both clinicians and researchers. Clinical concerns are primarily related to achieving adequate levels of disinfection as 'aggressive' instrumentation is contraindicated and hence there exists a much greater reliance on endodontic irrigants and medicaments. The open apex has also presented obturation difficulties, notably in controlling length. Long-term apexification procedures with calcium hydroxide have proven to be successful in retaining many of these immature infected teeth but due to their thin dentinal walls and perceived problems associated with long-term placement of calcium hydroxide, they have been found to be prone to cervical fracture and subsequent tooth loss. In recent years there has developed an increasing interest in the possibility of 'regenerating' pulp tissue in an infected immature tooth. It is apparent that although the philosophy and hope of 'regeneration' is commendable, recent histologic studies appear to suggest that the calcified material deposited on the canal wall is bone/cementum rather than dentine, hence the absence of pulp tissue with or without an odontoblast layer.",
"title": ""
},
{
"docid": "eb83f7367ba11bb5582864a08bb746ff",
"text": "Probabilistic inference algorithms for find ing the most probable explanation, the max imum aposteriori hypothesis, and the maxi mum expected utility and for updating belief are reformulated as an elimination-type al gorithm called bucket elimination. This em phasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining condition ing and elimination within this framework. Bounds on complexity are given for all the al gorithms as a function of the problem's struc ture.",
"title": ""
},
{
"docid": "48fc7aabdd36ada053ebc2d2a1c795ae",
"text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.",
"title": ""
},
{
"docid": "8cb3aed5fab2f5d54195b0e4c2a9a4c6",
"text": "This paper describes a tri-modal asymmetric bidirectional differential memory interface that supports data rates of up to 20 Gbps over 3\" FR4 PCB channels while achieving power efficiency of 6.1 mW/Gbps at full speed. The interface also accommodates single-ended standard DDR3 and GDDR5 signaling at 1.6-Gbps and 6.4-Gbps operations, respectively, without package change. The compact, low-power and high-speed tri-modal interface is enabled by substantial reuse of the circuit elements among various signaling modes, particularly in the wide-band clock generation and distribution system and the multi-modal driver output stage, as well as the use of fast equalization for post-cursor intersymbol interference (ISI) mitigation. In the high-speed differential mode, the system utilizes a 1-tap transmit equalizer during a WRITE operation to the memory. In contrast, during a memory READ operation, it employs a linear equalizer (LEQ) with 3 dB of peaking as well as a calibrated high-speed 1-tap predictive decision feedback equalizer (prDFE), while no transmitter equalization is assumed for the memory. The prototype tri-modal interface implemented in a 40-nm CMOS process, consists of 16 data links and achieves more than 2.5 × energy-efficient memory transactions at 16 Gbps compared to a previous single-mode generation.",
"title": ""
},
{
"docid": "9464f2e308b5c8ab1f2fac1c008042c0",
"text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.",
"title": ""
},
{
"docid": "dd0562e604e6db2c31132f1ffcd94d4f",
"text": "a r t i c l e i n f o Keywords: Data quality Utility Cost–benefit analysis Data warehouse CRM Managing data resources at high quality is usually viewed as axiomatic. However, we suggest that, since the process of improving data quality should attempt to maximize economic benefits as well, high data quality is not necessarily economically-optimal. We demonstrate this argument by evaluating a microeconomic model that links the handling of data quality defects, such as outdated data and missing values, to economic outcomes: utility, cost, and net-benefit. The evaluation is set in the context of Customer Relationship Management (CRM) and uses large samples from a real-world data resource used for managing alumni relations. Within this context, our evaluation shows that all model parameters can be measured, and that all model-related assumptions are, largely, well supported. The evaluation confirms the assumption that the optimal quality level, in terms of maximizing net-benefits, is not necessarily the highest possible. Further, the evaluation process contributes some important insights for revising current data acquisition and maintenance policies. Maintaining data resources at a high quality level is a critical task in managing organizational information systems (IS). Data quality (DQ) significantly affects IS adoption and the success of data utilization [10,26]. Data quality management (DQM) has been examined from a variety of technical, functional, and organizational perspectives [22]. Achieving high quality is the primary objective of DQM efforts, and much research in DQM focuses on methodologies, tools and techniques for improving quality. Recent studies (e.g., [14,19]) have suggested that high DQ, although having clear merits, should not necessarily be the only objective to consider when assessing DQM alternatives, particularly in an IS that manages large datasets. As shown in these studies, maximizing economic benefits, based on the value gained from improving quality, and the costs involved in improving quality, may conflict with the target of achieving a high data quality level. Such findings inspire the need to link DQM decisions to economic outcomes and tradeoffs, with the goal of identifying more cost-effective DQM solutions. The quality of organizational data is rarely perfect as data, when captured and stored, may suffer from such defects as inaccuracies and missing values [22]. Its quality may further deteriorate as the real-world items that the data describes may change over time (e.g., a customer changing address, profession, and/or marital status). A plethora of studies have underscored the negative effect of low …",
"title": ""
},
{
"docid": "bdae3fb85df9de789a9faa2c08a5c0fb",
"text": "The rapid, exponential growth of modern electronics has brought about profound changes to our daily lives. However, maintaining the growth trend now faces significant challenges at both the fundamental and practical levels [1]. Possible solutions include More Moore?developing new, alternative device structures and materials while maintaining the same basic computer architecture, and More Than Moore?enabling alternative computing architectures and hybrid integration to achieve increased system functionality without trying to push the devices beyond limits. In particular, an increasing number of computing tasks today are related to handling large amounts of data, e.g. image processing as an example. Conventional von Neumann digital computers, with separate memory and processer units, become less and less efficient when large amount of data have to be moved around and processed quickly. Alternative approaches such as bio-inspired neuromorphic circuits, with distributed computing and localized storage in networks, become attractive options [2]?[6].",
"title": ""
},
{
"docid": "7f54157faf8041436174fa865d0f54a8",
"text": "The goal of robot learning from demonstra tion is to have a robot learn from watching a demonstration of the task to be performed In our approach to learning from demon stration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task A policy is computed based on the learned reward function and task model Lessons learned from an implementation on an an thropomorphic robot arm using a pendulum swing up task include simply mimicking demonstrated motions is not adequate to per form this task a task planner can use a learned model and reward function to com pute an appropriate policy this model based planning process supports rapid learn ing both parametric and nonparametric models can be learned and used and in corporating a task level direct learning com ponent which is non model based in addi tion to the model based planner is useful in compensating for structural modeling errors and slow model learning",
"title": ""
},
{
"docid": "7882d2d18bc8a30a63e9fdb726c48ff1",
"text": "Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. Optimal transmission range will have minimum packet loss ratio (PLR) and better link quality, which ultimately save the energy consumed during communication. Second, we use a variant of the K-Means Density clustering algorithm for selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. The proposed model outperforms the state of the art artificial intelligence techniques such as Ant Colony Optimization-based clustering algorithm and Grey Wolf Optimization-based clustering algorithm. The performance of the proposed algorithm is evaluated in term of number of clusters, cluster building time, cluster lifetime and energy consumption.",
"title": ""
},
{
"docid": "f7a2f86526209860d7ea89d3e7f2b576",
"text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.",
"title": ""
},
{
"docid": "c1fa2b5da311edb241dca83edcf327a4",
"text": "The growing amount of web-based attacks poses a severe threat to the security of web applications. Signature-based detection techniques increasingly fail to cope with the variety and complexity of novel attack instances. As a remedy, we introduce a protocol-aware reverse HTTP proxy TokDoc (the token doctor), which intercepts requests and decides on a per-token basis whether a token requires automatic \"healing\". In particular, we propose an intelligent mangling technique, which, based on the decision of previously trained anomaly detectors, replaces suspicious parts in requests by benign data the system has seen in the past. Evaluation of our system in terms of accuracy is performed on two real-world data sets and a large variety of recent attacks. In comparison to state-of-the-art anomaly detectors, TokDoc is not only capable of detecting most attacks, but also significantly outperforms the other methods in terms of false positives. Runtime measurements show that our implementation can be deployed as an inline intrusion prevention system.",
"title": ""
},
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
},
{
"docid": "11d130f2b757bab08c4d41169c29b3d5",
"text": "We present an approach to training a joint syntactic and semantic parser that combines syntactic training information from CCGbank with semantic training information from a knowledge base via distant supervision. The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary. We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology. A semantic evaluation demonstrates that this parser produces logical forms better than both comparable prior work and a pipelined syntax-then-semantics approach. A syntactic evaluation on CCGbank demonstrates that the parser’s dependency Fscore is within 2.5% of state-of-the-art.",
"title": ""
},
{
"docid": "729b29b5ab44102541f3ebf8d24efec3",
"text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "00eaa437ad2821482644ee75cfe6d7b3",
"text": "A 65nm digitally-modulated polar transmitter incorporates a fully-integrated 2.4GHz efficient switching Inverse Class D power amplifier. Low power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54Mbps WLAN standard with 18% average efficiency.",
"title": ""
},
{
"docid": "8756441420669a6845254242030e0a79",
"text": "We propose a recurrent neural network (RNN) based model for image multi-label classification. Our model uniquely integrates and learning of visual attention and Long Short Term Memory (LSTM) layers, which jointly learns the labels of interest and their co-occurrences, while the associated image regions are visually attended. Different from existing approaches utilize either model in their network architectures, training of our model does not require pre-defined label orders. Moreover, a robust inference process is introduced so that prediction errors would not propagate and thus affect the performance. Our experiments on NUS-WISE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.",
"title": ""
},
{
"docid": "6987cb6d888d439220938d805cae29b0",
"text": "Entity Linking aims to link entity mentions in texts to knowledge bases, and neural models have achieved recent success in this task. However, most existing methods rely on local contexts to resolve entities independently, which may usually fail due to the data sparsity of local information. To address this issue, we propose a novel neural model for collective entity linking, named as NCEL. NCEL applies Graph Convolutional Network to integrate both local contextual features and global coherence information for entity linking. To improve the computation efficiency, we approximately perform graph convolution on a subgraph of adjacent entity mentions instead of those in the entire text. We further introduce an attention scheme to improve the robustness of NCEL to data noise and train the model on Wikipedia hyperlinks to avoid overfitting and domain bias. In experiments, we evaluate NCEL on five publicly available datasets to verify the linking performance as well as generalization ability. We also conduct an extensive analysis of time complexity, the impact of key modules, and qualitative results, which demonstrate the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "3840b8c709a8b2780b3d4a1b56bd986b",
"text": "A new scheme to resolve the intra-cell pilot collision for machine-to-machine (M2M) communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot. The simulation results coincide well with the analysis. It is also shown that, compared with the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.",
"title": ""
}
] |
scidocsrr
|
cc5c0ab4f614ed9d050a47dfa842d177
|
Supervised topic models for multi-label classification
|
[
{
"docid": "c44f060f18e55ccb1b31846e618f3282",
"text": "In multi-label classification, each sample can be associated with a set of class labels. When the number of labels grows to the hundreds or even thousands, existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are based either on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by an efficient randomized sampling procedure where the sampling probability of each class label reflects its importance among all the labels. Experiments on a number of realworld multi-label data sets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.",
"title": ""
}
] |
[
{
"docid": "d437e700df5c3a4d824b177c95def4ac",
"text": "In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant. Interactive theorem provers such as Coq enable users to construct machine-checkable proofs in a step-by-step manner. Hence, they provide an opportunity to explore theorem proving at a human level of abstraction. We use GamePad to synthesize proofs for a simple algebraic rewrite problem and train baseline models for a formalization of the Feit-Thompson theorem. We address position evaluation (i.e., predict the number of proof steps left) and tactic prediction (i.e., predict the next proof step) tasks, which arise naturally in human-level theorem proving.",
"title": ""
},
{
"docid": "d7acbf20753e2c9c50b2ab0683d7f03a",
"text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.",
"title": ""
},
{
"docid": "3e7a9fa9f575270a5cdf8f869d4a75dd",
"text": "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certaintydriven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain/reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.",
"title": ""
},
{
"docid": "4f6ce186679f9ab4f0aaada92ccf5a84",
"text": "Sensor networks have a significant potential in diverse applications some of which are already beginning to be deployed in areas such as environmental monitoring. As the application logic becomes more complex, programming difficulties are becoming a barrier to adoption of these networks. The difficulty in programming sensor networks is not only due to their inherently distributed nature but also the need for mechanisms to address their harsh operating conditions such as unreliable communications, faulty nodes, and extremely constrained resources. Researchers have proposed different programming models to overcome these difficulties with the ultimate goal of making programming easy while making full use of available resources. In this article, we first explore the requirements for programming models for sensor networks. Then we present a taxonomy of the programming models, classified according to the level of abstractions they provide. We present an evaluation of various programming models for their responsiveness to the requirements. Our results point to promising efforts in the area and a discussion of the future directions of research in this area.",
"title": ""
},
{
"docid": "993d7ee2498f7b19ae70850026c0a0c4",
"text": "We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.",
"title": ""
},
{
"docid": "65bf805e87a02c4e733c7e6cefbf8c7d",
"text": "We describe a nonlinear observer-based design for control of vehicle traction that is important in providing safety and obtaining desired longitudinal vehicle motion. First, a robust sliding mode controller is designed to maintain the wheel slip at any given value. Simulations show that longitudinal traction controller is capable of controlling the vehicle with parameter deviations and disturbances. The direct state feedback is then replaced with nonlinear observers to estimate the vehicle velocity from the output of the system (i.e., wheel velocity). The nonlinear model of the system is shown locally observable. The effects and drawbacks of the extended Kalman filters and sliding observers are shown via simulations. The sliding observer is found promising while the extended Kalman filter is unsatisfactory due to unpredictable changes in the road conditions.",
"title": ""
},
{
"docid": "3d3bc851a71f7caf96343004f1d584fe",
"text": "Next generation sequencing (NGS) has been leading the genetic study of human disease into an era of unprecedented productivity. Many bioinformatics pipelines have been developed to call variants from NGS data. The performance of these pipelines depends crucially on the variant caller used and on the calling strategies implemented. We studied the performance of four prevailing callers, SAMtools, GATK, glftools and Atlas2, using single-sample and multiple-sample variant-calling strategies. Using the same aligner, BWA, we built four single-sample and three multiple-sample calling pipelines and applied the pipelines to whole exome sequencing data taken from 20 individuals. We obtained genotypes generated by Illumina Infinium HumanExome v1.1 Beadchip for validation analysis and then used Sanger sequencing as a \"gold-standard\" method to resolve discrepancies for selected regions of high discordance. Finally, we compared the sensitivity of three of the single-sample calling pipelines using known simulated whole genome sequence data as a gold standard. Overall, for single-sample calling, the called variants were highly consistent across callers and the pairwise overlapping rate was about 0.9. Compared with other callers, GATK had the highest rediscovery rate (0.9969) and specificity (0.99996), and the Ti/Tv ratio out of GATK was closest to the expected value of 3.02. Multiple-sample calling increased the sensitivity. Results from the simulated data suggested that GATK outperformed SAMtools and glfSingle in sensitivity, especially for low coverage data. Further, for the selected discrepant regions evaluated by Sanger sequencing, variant genotypes called by exome sequencing versus the exome array were more accurate, although the average variant sensitivity and overall genotype consistency rate were as high as 95.87% and 99.82%, respectively. In conclusion, GATK showed several advantages over other variant callers for general purpose NGS analyses. The GATK pipelines we developed perform very well.",
"title": ""
},
{
"docid": "ce6041954779f1f5141cee0548ea8491",
"text": "In vivo exposure is the recommended treatment of choice for specific phobias; however, it demonstrates a high attrition rate and is not effective in all instances. The use of virtual reality (VR) has improved the acceptance of exposure treatments to some individuals. Augmented reality (AR) is a variation of VR wherein the user sees the real world augmented by virtual elements. The present study tests an AR system in the short (posttreatment) and long term (3, 6, and 12 months) for the treatment of cockroach phobia using a multiple baseline design across individuals (with 6 participants). The AR exposure therapy was applied using the \"one-session treatment\" guidelines developed by Ost, Salkovskis, and Hellström (1991). Results showed that AR was effective at treating cockroach phobia. All participants improved significantly in all outcome measures after treatment; furthermore, the treatment gains were maintained at 3, 6, and 12-month follow-up periods. This study discusses the advantages of AR as well as its potential applications.",
"title": ""
},
{
"docid": "4029bbbff0c115c8bf8c787cafc72ae0",
"text": "In recent times, data is growing rapidly in every domain such as news, social media, banking, education, etc. Due to the excessiveness of data, there is a need of automatic summarizer which will be capable to summarize the data especially textual data in original document without losing any critical purposes. Text summarization is emerged as an important research area in recent past. In this regard, review of existing work on text summarization process is useful for carrying out further research. In this paper, recent literature on automatic keyword extraction and text summarization are presented since text summarization process is highly depend on keyword extraction. This literature includes the discussion about different methodology used for keyword extraction and text summarization. It also discusses about different databases used for text summarization in several domains along with evaluation matrices. Finally, it discusses briefly about issues and research challenges faced by researchers along with future direction.",
"title": ""
},
{
"docid": "688ff3348e2d5af9b0f388fd9a99f1bf",
"text": "The core issue in this article is the empirical tracing of the connection between a variety of value orientations and the life course choices concerning living arrangements and family formation. The existence of such a connection is a crucial element in the socalled theory of the Second Demographic Transition (SDT). The underlying model is of a recursive nature and based on two effects: firstly, values-based self-selection of individuals into alternative living arrangement or household types, and secondly, event-based adaptation of values to the newly chosen household situation. Any testing of such a recursive model requires the use of panel data. Failing these, only “footprints” of the two effects can be derived and traced in cross-sectional data. Here, use is made of the latest round of the European Values Surveys of 1999-2000, mainly because no other source has such a large selection of value items. The comparison involves two Iberian countries, three western European ones, and two Scandinavian samples. The profiles of the value orientations are based on 80 items which cover a variety of dimensions (e.g. religiosity, ethics, civil morality, family values, social cohesion, expressive values, gender role orientations, trust in institutions, protest proneness and post-materialism, tolerance for minorities etc.). These are analysed according to eight different household positions based on the transitions to independent living, cohabitation and marriage, parenthood and union dissolution. Multiple Classification Analysis (MCA) is used to control for confounding effects of other relevant covariates (age, gender, education, economic activity and stratification, urbanity). Subsequently, 1 Interface Demography, Vrije Universiteit Brussel. E-mail: jrsurkyn@vub.ac.be 2 Interface Demography, Vrije Universiteit Brussel. E-mail: rlestha@vub.ac.be Demographic Research – Special Collection 3: Article 3 -Contemporary Research on European Fertility: Perspectives and Developments -46 http://www.demographic-research.org Correspondence Analysis is used to picture the proximities between the 80 value items and the eight household positions. Very similar value profiles according to household position are found for the three sets of countries, despite the fact that the onset of the SDT in Scandinavia precedes that in the Iberian countries by roughly twenty years. Moreover, the profile similarity remains intact when the comparison is extended to an extra group of seven formerly communist countries in central and Eastern Europe. Such pattern robustness is supportive of the contention that the ideational or “cultural” factor is indeed a nonredundant and necessary (but not a sufficient) element in the explanation of the demographic changes of the SDT. Moreover, the profile similarity also points in the direction of the operation of comparable mechanisms of selection and adaptation in the contrasting European settings. Demographic Research – Special Collection 3: Article 3 -Contemporary Research on European Fertility: Perspectives and Developments -http://www.demographic-research.org 47",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "6e05f588374b57f95524b04fe5600917",
"text": "Matrix factorization (MF) models and their extensions are standard in modern recommender systems. MF models decompose the observed user-item interaction matrix into user and item latent factors. In this paper, we propose a co-factorization model, CoFactor, which jointly decomposes the user-item interaction matrix and the item-item co-occurrence matrix with shared item latent factors. For each pair of items, the co-occurrence matrix encodes the number of users that have consumed both items. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec) which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves the performance over MF models on several datasets with little additional computational overhead. We provide qualitative results that explain how CoFactor improves the quality of the inferred factors and characterize the circumstances where it provides the most significant improvements.",
"title": ""
},
{
"docid": "058db5e1a8c58a9dc4b68f6f16847abc",
"text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.",
"title": ""
},
{
"docid": "f33134ec67d1237a39e91c0fd5bfb25a",
"text": "This research is driven by the assumption made in several user resistance studies that employees are generally resistant to change. It investigates the extent to which employees’ resistance to IT-induced change is caused by individuals’ predisposition to resist change. We develop a model of user resistance that assumes the influence of dispositional resistance to change on perceptual resistance to change, perceived ease of use, and usefulness, which in turn influence user resistance behavior. Using an empirical study of 106 HR employees forced to use a new human resources information system, the analysis reveals that 17.0–22.1 percent of the variance in perceived ease of use, usefulness, and perceptual resistance to change can be explained by the dispositional inclination to change initiatives. The four dimensions of dispositional resistance to change – routine seeking, emotional reaction, short-term focus and cognitive rigidity – have an even stronger effect than other common individual variables, such as age, gender, or working experiences. We conclude that dispositional resistance to change is an example of an individual difference that is instrumental in explaining a large proportion of the variance in beliefs about and user resistance to mandatory IS in organizations, which has implications for theory, practice, and future research. Journal of Information Technology advance online publication, 16 June 2015; doi:10.1057/jit.2015.17",
"title": ""
},
{
"docid": "e7e1fd16be5186474dc9e1690347716a",
"text": "One-stage object detectors such as SSD or YOLO already have shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics(e.g contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model StairNet detector unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that Stair-Net significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.",
"title": ""
},
{
"docid": "4d2bfda62140962af079817fc7dbd43e",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "0b01870332dd93897fbcecb9254c40b9",
"text": "Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists to evaluate digital mammography (DM) exams. Commonly such methods proceed in two steps: selection of candidate regions for malignancy, and later classification as either malignant or not. In this study, we present a candidate detection method based on deep learning to automatically detect and additionally segment soft tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology. Data was randomly split on an exam level into training (50%), validation (10%) and testing (40%) of deep neural network with u-net architecture. The u-net classifies the image but also provides lesion segmentation. Free receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.",
"title": ""
},
{
"docid": "bf239cb017be0b2137b0b4fd1f1d4247",
"text": "Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.",
"title": ""
},
{
"docid": "3e7adbc4ea0bb5183792efd19d3c23a5",
"text": "a Faculty of Science and Information Technology, Al-Zaytoona University of Jordan, Amman, Jordan b School of Informatics, University of Bradford, Bradford BD7 1DP, United Kingdom c Information & Computer Science Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia d Centre for excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW, United Kingdom",
"title": ""
},
{
"docid": "532f3aee6b67f1e521ccda7f77116f7a",
"text": "Status of this Memo By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as \"work in progress.\" The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1idabstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire on May 2008.",
"title": ""
}
] |
scidocsrr
|
4da8e5ddac2a648e63d7d5661a25ee65
|
Ethical Artificial Intelligence - An Open Question
|
[
{
"docid": "f76808350f95de294c2164feb634465a",
"text": "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: \"A curious aspect of the theory of evolution is that everybody thinks he understands it.\" (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about Artificial Intelligence than they actually do.",
"title": ""
}
] |
[
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
{
"docid": "c0e1be5859be1fc5871993193a709f2d",
"text": "This paper reviews the possible causes and effects for no-fault-found observations and intermittent failures in electronic products and summarizes them into cause and effect diagrams. Several types of intermittent hardware failures of electronic assemblies are investigated, and their characteristics and mechanisms are explored. One solder joint intermittent failure case study is presented. The paper then discusses when no-fault-found observations should be considered as failures. Guidelines for assessment of intermittent failures are then provided in the discussion and conclusions. Ó 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "01638567bf915e26bf9398132ca27264",
"text": "Uncontrolled bleeding from the cystic artery and its branches is a serious problem that may increase the risk of intraoperative lesions to vital vascular and biliary structures. On laparoscopic visualization anatomic relations are seen differently than during conventional surgery, so proper knowledge of the hepatobiliary triangle anatomic structures under the conditions of laparoscopic visualization is required. We present an original classification of the anatomic variations of the cystic artery into two main groups based on our experience with 200 laparoscopic cholecystectomies, with due consideration of the known anatomicotopographic relations. Group I designates a cystic artery situated within the hepatobiliary triangle on laparoscopic visualization. This group included three types: (1) normally lying cystic artery, found in 147 (73.5%) patients; (2) most common cystic artery variation, manifesting as its doubling, present in 31 (15.5%) patients; and (3) the cystic artery originating from the aberrant right hepatic artery, observed in 11 (5.5%) patients. Group II designates a cystic artery that could not be found within the hepatobiliary triangle on laparoscopic dissection. This group included two types of variation: (1) cystic artery originating from the gastroduodenal artery, found in nine (4.5%) patients; and (2) cystic artery originating from the left hepatic artery, recorded in two (1%) patients.",
"title": ""
},
{
"docid": "2663800ed92ce1cd44ab1b7760c43e0f",
"text": "Synchronous reluctance motor (SynRM) have rather poor power factor. This paper investigates possible methods to improve the power factor (pf) without impacting its torque density. The study found two possible aspects to improve the power factor with either refining rotor dimensions and followed by current control techniques. Although it is a non-linear mathematical field, it is analysed by analytical equations and FEM simulation is utilized to validate the design progression. Finally, an analytical method is proposed to enhance pf without compromising machine torque density. There are many models examined in this study to verify the design process. The best design with high performance is used for final current control optimization simulation.",
"title": ""
},
{
"docid": "c9750e95b3bd422f0f5e73cf6c465b35",
"text": "Lingual nerve damage complicating oral surgery would sometimes require electrographic exploration. Nevertheless, direct recording of conduction in lingual nerve requires its puncture at the foramen ovale. This method is too dangerous to be practiced routinely in these diagnostic indications. The aim of our study was to assess spatial relationships between lingual nerve and mandibular ramus in the infratemporal fossa using an original technique. Therefore, ten lingual nerves were dissected on five fresh cadavers. All the nerves were catheterized with a 3/0 wire. After meticulous repositioning of the nerve and medial pterygoid muscle reinsertion, CT-scan examinations were performed with planar acquisitions and three-dimensional reconstructions. Localization of lingual nerve in the infratemporal fossa was assessed successively at the level of the sigmoid notch of the mandible, lingula and third molar. At the level of the lingula, lingual nerve was far from the maxillary vessels; mean distance between the nerve and the anterior border of the ramus was 19.6 mm. The posteriorly opened angle between the medial side of the ramus and the line joining the lingual nerve and the anterior border of the ramus measured 17°. According to these findings, we suggest that the lingual nerve might be reached through the intra-oral puncture at the intermaxillary commissure; therefore, we modify the inferior alveolar nerve block technique to propose a safe and reproducible protocol likely to be performed routinely as electrographic exploration of the lingual nerve. What is more, this original study protocol provided interesting educational materials and could be developed for the conception of realistic 3D virtual anatomy supports.",
"title": ""
},
{
"docid": "3d56f88bf8053258a12e609129237b19",
"text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.",
"title": ""
},
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "730d25d97f4ad67838a541f206cfcec2",
"text": "Semantic segmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionized the field of image semantic segmentation, its impact on point cloud data has been limited so far. Recent attempts, based on 3D deep learning approaches (3DCNNs), have achieved below-expected results. Such methods require voxelizations of the underlying point cloud data, leading to decreased spatial resolution and increased memory consumption. Additionally, 3D-CNNs greatly suffer from the limited availability of annotated datasets. In this paper, we propose an alternative framework that avoids the limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first project the point cloud onto a set of synthetic 2D-images. These images are then used as input to a 2D-CNN, designed for semantic segmentation. Finally, the obtained prediction scores are re-projected to the point cloud to obtain the segmentation results. We further investigate the impact of multiple modalities, such as color, depth and surface normals, in a multi-stream network architecture. Experiments are performed on the recent Semantic3D dataset. Our approach sets a new stateof-the-art by achieving a relative gain of 7.9%, compared to the previous best approach.",
"title": ""
},
{
"docid": "a3b18ade3e983d91b7a8fc8d4cb6a75d",
"text": "The IC stripline method is one of those suggested in IEC-62132 to evaluate the susceptibility of ICs to radiated electromagnetic interference. In practice, it allows the multiple injection of the interference through the capacitive and inductive coupling of the IC package with the guiding structure (the stripline) in which the device under test is inserted. The pros and cons of this method are discussed and a variant of it is proposed with the aim to address the main problems that arise when evaluating the susceptibility of ICs encapsulated in small packages.",
"title": ""
},
{
"docid": "385fc1f02645d4d636869317cde6d35e",
"text": "Events and their coreference offer useful semantic and discourse resources. We show that the semantic and discourse aspects of events interact with each other. However, traditional approaches addressed event extraction and event coreference resolution either separately or sequentially, which limits their interactions. This paper proposes a document-level structured learning model that simultaneously identifies event triggers and resolves event coreference. We demonstrate that the joint model outperforms a pipelined model by 6.9 BLANC F1 and 1.8 CoNLL F1 points in event coreference resolution using a corpus in the biology domain.",
"title": ""
},
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
},
{
"docid": "ac65c09468cd88765009abe49d9114cf",
"text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.",
"title": ""
},
{
"docid": "500eca6c6fb88958662fd0210927d782",
"text": "Purpose – Force output is extremely important for electromagnetic linear machines. The purpose of this study is to explore new permanent magnet (PM) array and winding patterns to increase the magnetic flux density and thus to improve the force output of electromagnetic tubular linear machines. Design/methodology/approach – Based on investigations on various PM patterns, a novel dual Halbach PM array is proposed in this paper to increase the radial component of flux density in three-dimensional machine space, which in turn can increase the force output of tubular linear machine significantly. The force outputs and force ripples for different winding patterns are formulated and analyzed, to select optimized structure parameters. Findings – The proposed dual Halbach array can increase the radial component of flux density and force output of tubular linear machines effectively. It also helps to decrease the axial component of flux density and thus to reduce the deformation and vibration of machines. By using analytical force models, the influence of winding patterns and structure parameters on the machine force output and force ripples can be analyzed. As a result, one set of optimized structure parameters are selected for the design of electromagnetic tubular linear machines. Originality/value – The proposed dual Halbach array and winding patterns are effective ways to improve the linear machine performance. It can also be implemented into rotary machines. The analyzing and design methods could be extended into the development of other electromagnetic machines.",
"title": ""
},
{
"docid": "91e9f3b1ebd57ff472ab8848370c366f",
"text": "Time series prediction problems are becoming increasingly high-dimensional in modern applications, such as climatology and demand forecasting. For example, in the latter problem, the number of items for which demand needs to be forecast might be as large as 50,000. In addition, the data is generally noisy and full of missing values. Thus, modern applications require methods that are highly scalable, and can deal with noisy data in terms of corruptions or missing values. However, classical time series methods usually fall short of handling these issues. In this paper, we present a temporal regularized matrix factorization (TRMF) framework which supports data-driven temporal learning and forecasting. We develop novel regularization schemes and use scalable matrix factorization methods that are eminently suited for high-dimensional time series data that has many missing values. Our proposed TRMF is highly general, and subsumes many existing approaches for time series analysis. We make interesting connections to graph regularization methods in the context of learning the dependencies in an autoregressive framework. Experimental results show the superiority of TRMF in terms of scalability and prediction quality. In particular, TRMF is two orders of magnitude faster than other methods on a problem of dimension 50,000, and generates better forecasts on real-world datasets such as Wal-mart E-commerce datasets.",
"title": ""
},
{
"docid": "19f9e643decc8047d73a20d664eb458d",
"text": "There is considerable federal interest in disaster resilience as a mechanism for mitigating the impacts to local communities, yet the identification of metrics and standards for measuring resilience remain a challenge. This paper provides a methodology and a set of indicators for measuring baseline characteristics of communities that foster resilience. By establishing baseline conditions, it becomes possible to monitor changes in resilience over time in particular places and to compare one place to another. We apply our methodology to counties within the Southeastern United States as a proof of concept. The results show that spatial variations in disaster resilience exist and are especially evident in the rural/urban divide, where metropolitan areas have higher levels of resilience than rural counties. However, the individual drivers of the disaster resilience (or lack thereof)—social, economic, institutional, infrastructure, and community capacities—vary",
"title": ""
},
{
"docid": "6751bfa8495065db8f6f5b396bbbc2cd",
"text": "This paper proposes a new balanced realization and model reduction method for possibly unstable systems by introducing some new controllability and observability Gramians. These Gramians can be related to minimum control energy and minimum estimation error. In contrast to Gramians defined in the literature for unstable systems, these Gramians can always be computed for systems without imaginary axis poles and they reduce to the standard controllability and observability Gramians when the systems are stable. The proposed balanced model reduction method enjoys the similar error bounds as does for the standard balanced model reduction. Furthermore, the new error bounds and the actual approximation errors seem to be much smaller than the ones using the methods given in the literature for unstable systems. Copyright ( 1999 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "5b9a08e4edd7e44ed261d304bc8f78c3",
"text": "Cone beam computed tomography (CBCT) has been specifically designed to produce undistorted three-dimensional information of the maxillofacial skeleton, including the teeth and their surrounding tissues with a significantly lower effective radiation dose compared with conventional computed tomography (CT). Periapical disease may be detected sooner using CBCT compared with periapical views and the true size, extent, nature and position of periapical and resorptive lesions can be assessed. Root fractures, root canal anatomy and the nature of the alveolar bone topography around teeth may be assessed. The aim of this paper is to review current literature on the applications and limitations of CBCT in the management of endodontic problems.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
},
{
"docid": "d6da3d9b1357c16bb2d9ea46e56fa60f",
"text": "The Supervisory Control and Data Acquisition System (SCADA) monitor and control real-time systems. SCADA systems are the backbone of the critical infrastructure, and any compromise in their security can have grave consequences. Therefore, there is a need to have a SCADA testbed for checking vulnerabilities and validating security solutions. In this paper we develop such a SCADA testbed.",
"title": ""
}
] |
scidocsrr
|
2100642ab81be76885180790c4aaaa95
|
Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics
|
[
{
"docid": "ed7a114d02244b7278c8872c567f1ba6",
"text": "We present a new visualization, called the Table Lens, for visualizing and making sense of large tables. The visualization uses a focus+context (fisheye) technique that works effectively on tabular information because it allows display of crucial label information and multiple distal focal areas. In addition, a graphical mapping scheme for depicting table contents has been developed for the most widespread kind of tables, the cases-by-variables table. The Table Lens fuses symbolic and graphical representations into a single coherent view that can be fluidly adjusted by the user. This fusion and interactivity enables an extremely rich and natural style of direct manipulation exploratory data analysis.",
"title": ""
}
] |
[
{
"docid": "8f2b9981d15b8839547f56f5f1152882",
"text": "In this paper we study how to discover the evolution of topics over time in a time-stamped document collection. Our approach is uniquely designed to capture the rich topology of topic evolution inherent in the corpus. Instead of characterizing the evolving topics at fixed time points, we conceptually define a topic as a quantized unit of evolutionary change in content and discover topics with the time of their appearance in the corpus. Discovered topics are then connected to form a topic evolution graph using a measure derived from the underlying document network. Our approach allows inhomogeneous distribution of topics over time and does not impose any topological restriction in topic evolution graphs. We evaluate our algorithm on the ACM corpus.\n The topic evolution graphs obtained from the ACM corpus provide an effective and concrete summary of the corpus with remarkably rich topology that are congruent to our background knowledge. In a finer resolution, the graphs reveal concrete information about the corpus that were previously unknown to us, suggesting the utility of our approach as a navigational tool for the corpus.",
"title": ""
},
{
"docid": "673ce42f089d555d8457f35bf7dcb733",
"text": "Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behaviour, and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotation (nearly 10K categories) than the previous released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.",
"title": ""
},
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "8bcb5b946b9f5e07807ec9a44884cf4e",
"text": "Using data from two waves of a panel study of families who currently or recently received cash welfare benefits, we test hypotheses about the relationship between food hardships and behavior problems among two different age groups (458 children ages 3–5-and 747 children ages 6–12). Results show that food hardships are positively associated with externalizing behavior problems for older children, even after controlling for potential mediators such as parental stress, warmth, and depression. Food hardships are positively associated with internalizing behavior problems for older children, and with both externalizing and internalizing behavior problems for younger children, but these effects are mediated by parental characteristics. The implications of these findings for child and family interventions and food assistance programs are discussed. Food Hardships and Child Behavior Problems among Low-Income Children INTRODUCTION In the wake of the 1996 federal welfare reforms, several large-scale, longitudinal studies of welfare recipients and low-income families were launched with the intent of assessing direct benchmarks, such as work and welfare activity, over time, as well as indirect and unintended outcomes related to material hardship and mental health. One area of special concern to many researchers and policymakers alike is child well-being in the context of welfare reforms. As family welfare use and parental work activities change under new welfare policies, family income and material resources may also fluctuate. To the extent that family resources are compromised by changes in welfare assistance and earnings, children may experience direct hardships, such as instability in food consumption, which in turn may affect other areas of functioning. It is also possible that changes in parental work and family welfare receipt influence children indirectly through their caregivers. As parents themselves experience hardships or new stresses, their mental health and interactions with their children may change, which in turn could affect their children’s functioning. This research assesses whether one particular form of hardship, food hardship, is associated with adverse behaviors among low-income children. Specifically, analyses assess whether food hardships have relationships with externalizing (e.g., aggressive or hyperactive) and internalizing (e.g., anxietyand depression-related) child behavior problems, and whether associations between food hardships and behavior problems are mediated by parental stress, warmth, and depression. The study involves a panel survey of individuals in one state who were receiving Temporary Assistance for Needy Families (TANF) in 1998 and were caring for minor-aged children. Externalizing and internalizing behavior problems associated with a randomly selected child from each household are assessed in relation to key predictors, taking advantage of the prospective study design. 2 BACKGROUND Food hardships have been conceptualized by researchers in various ways. For example, food insecurity is defined by the U.S. Department of Agriculture (USDA) as the “limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways” (Bickel, Nord, Price, Hamilton, and Cook, 2000, p. 6). 
An 18-item scale was developed by the USDA to assess household food insecurity with and without hunger, where hunger represents a potential result of more severe forms of food insecurity, but not a necessary condition for food insecurity to exist (Price, Hamilton, and Cook, 1997). Other researchers have used selected items from the USDA Food Security Module to assess food hardships (Nelson, 2004; Bickel et al., 2000) The USDA also developed the following single-item question to identify food insufficiency: “Which of the following describes the amount of food your household has to eat....enough to eat, sometimes not enough to eat, or often not enough to eat?” This measure addresses the amount of food available to a household, not assessments about the quality of the food consumed or worries about food (Alaimo, Olson and Frongillo, 1999; Dunifon and Kowaleski-Jones, 2003). The Community Childhood Hunger Identification Project (CCHIP) assesses food hardships using an 8-item measure to determine whether the household as a whole, adults as individuals, or children are affected by food shortages, perceived food insufficiency, or altered food intake due to resource constraints (Wehler, Scott, and Anderson, 1992). Depending on the number of affirmative answers, respondents are categorized as either “hungry,” “at-risk for hunger,” or “not hungry” (Wehler et al., 1992; Kleinman et al., 1998). Other measures, such as the Radimer/Cornell measures of hunger and food insecurity, have also been created to measure food hardships (Kendall, Olson, and Frongillo, 1996). In recent years, food hardships in the United States have been on the rise. After declining from 1995 to 1999, the prevalence of household food insecurity in households with children rose from 14.8 percent in 1999 to 16.5 percent in 2002, and the prevalence of household food insecurity with hunger in households with children rose from 0.6 percent in 1999 to 0.7 percent in 2002 (Nord, Andrews, and 3 Carlson, 2003). A similar trend was also observed using a subset of questions from the USDA Food Security Module (Nelson, 2004). Although children are more likely than adults to be buffered from household food insecurity (Hamilton et al., 1997) and inadequate nutrition (McIntyre et al., 2003), a concerning number of children are reported to skip meals or have reduced food intake due to insufficient household resources. Nationally, children in 219,000 U.S. households were hungry at times during the 12 months preceding May 1999 (Nord and Bickel, 2002). Food Hardships and Child Behavior Problems Very little research has been conducted on the effects of food hardship on children’s behaviors, although the existing research suggests that it is associated with adverse behavioral and mental health outcomes for children. Using data from the National Health and Nutrition Examination Survey (NHANES), Alaimo and colleagues (2001a) found that family food insufficiency is positively associated with visits to a psychologist among 6to 11year-olds. Using the USDA Food Security Module, Reid (2002) found that greater severity and longer periods of children’s food insecurity were associated with greater levels of child behavior problems. Dunifon and Kowaleski-Jones (2003) found, using the same measure, that food insecurity is associated with fewer positive behaviors among school-age children. 
Children from households with incomes at or below 185 percent of the poverty level who are identified as hungry are also more likely to have a past or current history of mental health counseling and to have more psychosocial dysfunctions than children who are not identified as hungry (Kleinman et al., 1998; Murphy et al., 1998). Additionally, severe child hunger in both pre-school-age and school-age children is associated with internalizing behavior problems (Weinreb et al., 2002), although Reid (2002) found a stronger association between food insecurity and externalizing behaviors than between food insecurity and internalizing behaviors among children 12 and younger. Other research on hunger has identified several adverse behavioral consequences for children (see Wachs, 1995 for a review; Martorell, 1996; Pollitt, 1994), including poor play behaviors, poor preschool achievement, and poor scores on developmental indices (e.g., Bayley Scores). These studies have largely taken place in developing countries, where the prevalence of hunger and malnutrition is much greater than in the U.S. population (Reid, 2002), so it is not known whether similar associations would emerge for children in the United States. Furthermore, while existing studies point to a relationship between food hardships and adverse child behavioral outcomes, limitations in design stemming from cross-sectional data, reliance on single-item measures of food difficulties, or failure to adequately control for factors that may confound the observed relationships make it difficult to assess the robustness of the findings. For current and recent recipients of welfare and their families, increased food hardships are a potential problem, given the fluctuations in benefits and resources that families are likely to experience as a result of legislative reforms. To the extent that food hardships are tied to economic factors, we may expect levels of food hardships to increase for families who experience periods of insufficient material resources, and to decrease for families whose economic situations improve. If levels of food hardship are associated with the availability of parents and other caregivers, we may find that the provision of food to children changes as parents work more hours, or as children spend more time in alternative caregiving arrangements. Poverty and Child Behavior Problems When exploring the relationship between food hardships and child well-being, it is crucial to ensure that factors associated with economic hardship and poverty are adequately controlled, particularly since poverty has been linked to some of the same outcomes as food hardships. Extensive research has shown a higher prevalence of behavior problems among children from families of lower socioeconomic status (McLoyd, 1998; Duncan, Brooks-Gunn, and Klebanov, 1994), and from families receiving welfare (Hofferth, Smith, McLoyd, and Finkelstein, 2000). This relationship has been shown to be stronger among children in single-parent households than among those in two-parent households (Hanson, McLanahan, and Thompson, 1996), and among younger children (Bradley and Corwyn, 2002; McLoyd, 1998), with less consistent findings for adolescents (Conger, Conger, and Elder, 1997; Elder, N",
"title": ""
},
{
"docid": "df02dafb455e2b68035cf8c150e28a0a",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "cdda683f089f630176b88c1b91c1cff2",
"text": "Article history: Received 15 March 2011 Received in revised form 28 November 2011 Accepted 23 December 2011 Available online 29 December 2011",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
{
"docid": "c3f81c5e4b162564b15be399b2d24750",
"text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.",
"title": ""
},
{
"docid": "7490d342ffb59bd396421e198b243775",
"text": "Antioxidant activities of defatted sesame meal extract increased as the roasting temperature of sesame seed increased, but the maximum antioxidant activity was achieved when the seeds were roasted at 200 °C for 60 min. Roasting sesame seeds at 200 °C for 60 min significantly increased the total phenolic content, radical scavenging activity (RSA), reducing powers, and antioxidant activity of sesame meal extract; and several low-molecularweight phenolic compounds such as 2-methoxyphenol, 4-methoxy-3-methylthio-phenol, 5-amino-3-oxo-4hexenoic acid, 3,4-methylenedioxyphenol (sesamol), 3-hydroxy benzoic acid, 4-hydroxy benzoic acid, vanillic acid, filicinic acid, and 3,4-dimethoxy phenol were newly formed in the sesame meal after roasting sesame seeds at 200 °C for 60 min. These results indicate that antioxidant activity of defatted sesame meal extracts was significantly affected by roasting temperature and time of sesame seeds.",
"title": ""
},
{
"docid": "44d8cb42bd4c2184dc226cac3adfa901",
"text": "Several descriptions of redundancy are presented in the literature , often from widely dif ferent perspectives . Therefore , a discussion of these various definitions and the salient points would be appropriate . In particular , any definition and redundancy needs to cover the following issues ; the dif ference between multiple solutions and an infinite number of solutions ; degenerate solutions to inverse kinematics ; task redundancy ; and the distinction between non-redundant , redundant and highly redundant manipulators .",
"title": ""
},
{
"docid": "dcf7214c15c13f13d33c9a7b2c216588",
"text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.",
"title": ""
},
{
"docid": "74af567f4b0257dc12c3346146c0f46c",
"text": "This paper presents the experimental data of human mechanical impedance properties (HMIPs) of the arms measured in steering operations according to the angle of a steering wheel (limbs posture) and the steering torque (muscle cocontraction). The HMIP data show that human stiffness/viscosity has the minimum/maximum value at the neutral angle of the steering wheel in relax (standard condition) and increases/decreases for the amplitude of the steering angle and the torque, and that the stability of the arms' motion in handling the steering wheel becomes high around the standard condition. Next, a novel methodology for designing an adaptive steering control system based on the HMIPs of the arms is proposed, and the effectiveness was then demonstrated via a set of double-lane-change tests, with several subjects using the originally developed stationary driving simulator and the 4-DOF driving simulator with a movable cockpit.",
"title": ""
},
{
"docid": "f5648e3bd38e876b53ee748021e165f2",
"text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "c3566171b68e4025931a72064e74e4ae",
"text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.",
"title": ""
},
{
"docid": "363872994876ab6c68584d4f31913b43",
"text": "The Internet is quickly becoming the world’s largest public electronic marketplace. It is estimated to reach 50 million people worldwide, with growth estimates averaging approximately 10% per month. Innovative business professionals have discovered that the Internet can A BUYER’S-EYE VIEW OF ONLINE PURCHASING WORRIES. • H U A I Q I N G W A N G , M A T T H E W K . O . L E E , A N D C H E N W A N G •",
"title": ""
},
{
"docid": "0d9420b97012ce445fdf39fb009e32c4",
"text": "Greater numbers of young children with complicated, serious physical health, mental health, or developmental problems are entering foster care during the early years when brain growth is most active. Every effort should be made to make foster care a positive experience and a healing process for the child. Threats to a child’s development from abuse and neglect should be understood by all participants in the child welfare system. Pediatricians have an important role in assessing the child’s needs, providing comprehensive services, and advocating on the child’s behalf. The developmental issues important for young children in foster care are reviewed, including: 1) the implications and consequences of abuse, neglect, and placement in foster care on early brain development; 2) the importance and challenges of establishing a child’s attachment to caregivers; 3) the importance of considering a child’s changing sense of time in all aspects of the foster care experience; and 4) the child’s response to stress. Additional topics addressed relate to parental roles and kinship care, parent-child contact, permanency decision-making, and the components of comprehensive assessment and treatment of a child’s development and mental health needs. More than 500 000 children are in foster care in the United States.1,2 Most of these children have been the victims of repeated abuse and prolonged neglect and have not experienced a nurturing, stable environment during the early years of life. Such experiences are critical in the shortand long-term development of a child’s brain and the ability to subsequently participate fully in society.3–8 Children in foster care have disproportionately high rates of physical, developmental, and mental health problems1,9 and often have many unmet medical and mental health care needs.10 Pediatricians, as advocates for children and their families, have a special responsibility to evaluate and help address these needs. Legal responsibility for establishing where foster children live and which adults have custody rests jointly with the child welfare and judiciary systems. Decisions about assessment, care, and planning should be made with sufficient information about the particular strengths and challenges of each child. Pediatricians have an important role in helping to develop an accurate, comprehensive profile of the child. To create a useful assessment, it is imperative that complete health and developmental histories are available to the pediatrician at the time of these evaluations. Pediatricians and other professionals with expertise in child development should be proactive advisors to child protection workers and judges regarding the child’s needs and best interests, particularly regarding issues of placement, permanency planning, and medical, developmental, and mental health treatment plans. For example, maintaining contact between children and their birth families is generally in the best interest of the child, and such efforts require adequate support services to improve the integrity of distressed families. However, when keeping a family together may not be in the best interest of the child, alternative placement should be based on social, medical, psychological, and developmental assessments of each child and the capabilities of the caregivers to meet those needs. Health care systems, social services systems, and judicial systems are frequently overwhelmed by their responsibilities and caseloads. 
Pediatricians can serve as advocates to ensure each child’s conditions and needs are evaluated and treated properly and to improve the overall operation of these systems. Availability and full utilization of resources ensure comprehensive assessment, planning, and provision of health care. Adequate knowledge about each child’s development supports better placement, custody, and treatment decisions. Improved programs for all children enhance the therapeutic effects of government-sponsored protective services (eg, foster care, family maintenance). The following issues should be considered when social agencies intervene and when physicians participate in caring for children in protective services. EARLY BRAIN AND CHILD DEVELOPMENT More children are entering foster care in the early years of life when brain growth and development are most active.11–14 During the first 3 to 4 years of life, the anatomic brain structures that govern personality traits, learning processes, and coping with stress and emotions are established, strengthened, and made permanent.15,16 If unused, these structures atrophy.17 The nerve connections and neurotransmitter networks that are forming during these critical years are influenced by negative environmental conditions, including lack of stimulation, child abuse, or violence within the family.18 It is known that emotional and cognitive disruptions in the early lives of children have the potential to impair brain development.18 Paramount in the lives of these children is their need for continuity with their primary attachment figures and a sense of permanence that is enhanced",
"title": ""
},
{
"docid": "5d98548bc4f65d66a8ece7e70cb61bc4",
"text": "0140-3664/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.comcom.2011.09.003 ⇑ Corresponding author. Tel.: +86 10 62283240. E-mail address: liwenmin02@hotmail.com (W. Li). Value-added applications in vehicular ad hoc network (VANET) come with the emergence of electronic trading. The restricted connectivity scenario in VANET, where the vehicle cannot communicate directly with the bank for authentication due to the lack of internet access, opens up new security challenges. Hence a secure payment protocol, which meets the additional requirements associated with VANET, is a must. In this paper, we propose an efficient and secure payment protocol that aims at the restricted connectivity scenario in VANET. The protocol applies self-certified key agreement to establish symmetric keys, which can be integrated with the payment phase. Thus both the computational cost and communication cost can be reduced. Moreover, the protocol can achieve fair exchange, user anonymity and payment security. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "64de7935c22f74069721ff6e66a8fe8c",
"text": "In the setting of secure multiparty computation, a set of n parties with private inputs wish to jointly compute some functionality of their inputs. One of the most fundamental results of secure computation was presented by Ben-Or, Goldwasser, and Wigderson (BGW) in 1988. They demonstrated that any n-party functionality can be computed with perfect security, in the private channels model. When the adversary is semi-honest, this holds as long as $$t<n/2$$ t < n / 2 parties are corrupted, and when the adversary is malicious, this holds as long as $$t<n/3$$ t < n / 3 parties are corrupted. Unfortunately, a full proof of these results was never published. In this paper, we remedy this situation and provide a full proof of security of the BGW protocol. This includes a full description of the protocol for the malicious setting, including the construction of a new subprotocol for the perfect multiplication protocol that seems necessary for the case of $$n/4\\le t<n/3$$ n / 4 ≤ t < n / 3 .",
"title": ""
},
{
"docid": "9b19f343a879430283881a69e3f9cb78",
"text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.",
"title": ""
},
{
"docid": "04d8cd068da3aa0a7ede285de372a139",
"text": "Testing is a major cost factor in software development. Test automation has been proposed as one solution to reduce these costs. Test automation tools promise to increase the number of tests they run and the frequency at which they run them. So why not automate every test? In this paper we discuss the question \"When should a test be automated?\" and the trade-off between automated and manual testing. We reveal problems in the overly simplistic cost models commonly used to make decisions about automating testing. We introduce an alternative model based on opportunity cost and present influencing factors on the decision of whether or not to invest in test automation. Our aim is to stimulate discussion about these factors as well as their influence on the benefits and costs of automated testing in order to support researchers and practitioners reflecting on proposed automation approaches.",
"title": ""
}
] |
scidocsrr
|