query_id (string, length 32) | query (string, length 5-5.38k) | positive_passages (list, length 1-23) | negative_passages (list, length 4-100) | subset (string, 7 classes) |
---|---|---|---|---|
0804f550200d7a87f51906a97932a9c6
|
Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
|
[
{
"docid": "19a1f9c9f3dec6f90d08479f0669d0dc",
"text": "We present a multi-stream bi-directional recurrent neural network for fine-grained action detection. Recently, twostream convolutional neural networks (CNNs) trained on stacked optical flow and image frames have been successful for action recognition in videos. Our system uses a tracking algorithm to locate a bounding box around the person, which provides a frame of reference for appearance and motion and also suppresses background noise that is not within the bounding box. We train two additional streams on motion and appearance cropped to the tracked bounding box, along with full-frame streams. Our motion streams use pixel trajectories of a frame as raw features, in which the displacement values corresponding to a moving scene point are at the same spatial position across several frames. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. We show that our bi-directional LSTM network utilizes about 8 seconds of the video sequence to predict an action label. We test on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we introduce and make available to the community with this paper. The results demonstrate that our method significantly outperforms state-of-the-art action detection methods on both datasets.",
"title": ""
},
{
"docid": "875e98c4bd34e8c4131467a632b7d68f",
"text": "Human activity recognition is a challenging task, especially when its background is unknown or changing, and when scale or illumination differs in each video. Approaches utilizing spatio-temporal local features have proved that they are able to cope with such difficulties, but they mainly focused on classifying short videos of simple periodic actions. In this paper, we present a new activity recognition methodology that overcomes the limitations of the previous approaches using local features. We introduce a novel matching, spatio-temporal relationship match, which is designed to measure structural similarity between sets of features extracted from two videos. Our match hierarchically considers spatio-temporal relationships among feature points, thereby enabling detection and localization of complex non-periodic activities. In contrast to previous approaches to ‘classify’ videos, our approach is designed to ‘detect and localize’ all occurring activities from continuous videos where multiple actors and pedestrians are present. We implement and test our methodology on a newly-introduced dataset containing videos of multiple interacting persons and individual pedestrians. The results confirm that our system is able to recognize complex non-periodic activities (e.g. ‘push’ and ‘hug’) from sets of spatio-temporal features even when multiple activities are present in the scene",
"title": ""
}
] |
[
{
"docid": "afe26c28b56a511452096bfc211aed97",
"text": "System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.",
"title": ""
},
{
"docid": "af2f9dd69e90ed3c61e09b5b53fa1cdb",
"text": "Cellular networks are one of the cornerstones of our information-driven society. However, existing cellular systems have been seriously challenged by the explosion of mobile data traffic, the emergence of machine-type communications, and the flourishing of mobile Internet services. In this article, we propose CONCERT, a converged edge infrastructure for future cellular communications and mobile computing services. The proposed architecture is constructed based on the concept of control/data (C/D) plane decoupling. The data plane includes heterogeneous physical resources such as radio interface equipment, computational resources, and software-defined switches. The control plane jointly coordinates physical resources to present them as virtual resources, over which software-defined services including communications, computing, and management can be deployed in a flexible manner. Moreover, we introduce new designs for physical resources placement and task scheduling so that CONCERT can overcome the drawbacks of the existing baseband-up centralization approach and better facilitate innovations in next-generation cellular networks. These advantages are demonstrated with application examples on radio access networks with C/D decoupled air interface, delaysensitive machine-type communications, and realtime mobile cloud gaming. We also discuss some fundamental research issues arising with the proposed architecture to illuminate future research directions.",
"title": ""
},
{
"docid": "cac6da8b7ee88f95196651920a64486c",
"text": "The classification of food images is an interesting and challenging problem since the high variability of the image content which makes the task difficult for current state-of-the-art classification methods. The image representation to be employed in the classification engine plays an important role. We believe that texture features have been not properly considered in this application domain. This paper points out, through a set of experiments, that textures are fundamental to properly recognize different food items. For this purpose the bag of visual words model (BoW) is employed. Images are processed with a bank of rotation and scale invariant filters and then a small codebook of Textons is built for each food class. The learned class-based Textons are hence collected in a single visual dictionary. The food images are represented as visual words distributions (Bag of Textons) and a Support Vector Machine is used for the classification stage. The experiments demonstrate that the image representation based on Bag of Textons is more accurate than existing (and more complex) approaches in classifying the 61 classes of the Pittsburgh Fast-Food Image Dataset.",
"title": ""
},
{
"docid": "273bd65511ef2f7ef61e75e6272079b6",
"text": "The capacity of Mobile Health (mHealth) technologies to propel healthcare forward is directly linked to the quality of mobile interventions developed through careful mHealth research. mHealth research entails several unique characteristics, including collaboration with technologists at all phases of a project, reliance on regional telecommunication infrastructure and commercial mobile service providers, and deployment and evaluation of interventions “in the wild”, with participants using mobile tools in uncontrolled environments. In the current paper, we summarize the lessons our multi-institutional/multi-disciplinary team has learned conducting a range of mHealth projects using mobile phones with diverse clinical populations. First, we describe three ongoing projects that we draw from to illustrate throughout the paper. We then provide an example for multidisciplinary teamwork and conceptual mHealth intervention development that we found to be particularly useful. Finally, we discuss mHealth research challenges (i.e. evolving technology, mobile phone selection, user characteristics, the deployment environment, and mHealth system “bugs and glitches”), and provide recommendations for identifying and resolving barriers, or preventing their occurrence altogether.",
"title": ""
},
{
"docid": "a4f7f7d82264a1d5d64ea8f574d326e6",
"text": "A journal co-citation analysis of fifty journals and other publications in the information retrieval (IR) discipline was conducted over three periods spanning the years of 1987 to 1997. Relevant data retrieved from the Science Citation Index (SCI) and Social Science Citation Index (SSCI) are analysed according to the highly cited journals in various disciplines, especially in the Library & Information Science area. The results are compared with previous research that covered the data only from the Social Science Citation Index (SSCI). The analysis reveals that there is no distinct difference between these two sets of results. The results of current study show that IR speciality is multi-disciplinary with broad relations with other specialities. The field of IR is a mature field, as the journals used for research communication remained quite stable during the study period.",
"title": ""
},
{
"docid": "672ac3cd042179cf797b97ac7359ed3e",
"text": "Many time series data mining problems require subsequence similarity search as a subroutine. Dozens of similarity/distance measures have been proposed in the last decade and there is increasing evidence that Dynamic Time Warping (DTW) is the best measure across a wide range of domains. Given DTW’s usefulness and ubiquity, there has been a large community-wide effort to mitigate its relative lethargy. Proposed speedup techniques include early abandoning strategies, lower-bound based pruning, indexing and embedding. In this work we argue that we are now close to exhausting all possible speedup from software, and that we must turn to hardware-based solutions. With this motivation, we investigate both GPU (Graphics Processing Unit) and FPGA (Field Programmable Gate Array) based acceleration of subsequence similarity search under the DTW measure. As we shall show, our novel algorithms allow GPUs to achieve two orders of magnitude speedup and FPGAs to produce four orders of magnitude speedup. We conduct detailed case studies on the classification of astronomical observations and demonstrate that our ideas allow us to tackle problems that would be untenable otherwise.",
"title": ""
},
{
"docid": "7963adab39b58ab0334b8eef4149c59c",
"text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.",
"title": ""
},
{
"docid": "138ec2c4e9fa1a690d2b49b5b78340a1",
"text": "In the context of the development of prototypic assessment instruments in the areas of cognition, personality, and adaptive functioning, the issues of standardization, norming procedures, and the important psychometrics of test reliability and validity are evaluated critically. Criteria, guidelines, and simple rules of thumb are provided to assist the clinician faced with the challenge of choosing an appropriate test instrument for a given psychological assessment.",
"title": ""
},
{
"docid": "767165da67d53cc83cac28f693fc3b01",
"text": "This paper introduces an effective hybrid scheme for the denoising of electrocardiogram (ECG) signals corrupted by non-stationary noises using genetic algorithm (GA) and wavelet transform (WT). We first applied a wavelet denoising in noise reduction of multi-channel high resolution ECG signals. In particular, the influence of the selection of wavelet function and the choice of decomposition level on efficiency of denoising process was considered. Selection of a suitable wavelet denoising parameters is critical for the success of ECG signal filtration in wavelet domain. Therefore, in our noise elimination method the genetic algorithm has been used to select the optimal wavelet denoising parameters which lead to maximize the filtration performance. The efficiency performance of our scheme is evaluated using percentage root mean square difference (PRD) and signal to noise ratio (SNR). The experimental results show that the introduced hybrid scheme using GA has obtain better performance than the other reported wavelet thresholding algorithms as well as the quality of the denoising ECG signal is more suitable for the clinical diagnosis.",
"title": ""
},
{
"docid": "44a4fb2e14de16ae13ab072dc72018fb",
"text": "Objective: The purpose of this contribution is to estimate the path loss of capacitive human body communication (HBC) systems under practical conditions. Methods: Most prior work utilizes large grounded instruments to perform path loss measurements, resulting in overly optimistic path loss estimates for wearable HBC devices. In this paper, small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions. A hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems. Results: Measurements from form-factor-accurate prototypes reveal path loss results between 31.7 and 42.2 dB from 20 to 150 MHz. Simulation results matched measurements within 2.5 dB. Comeasurements using large grounded benchtop vector network analyzer (VNA) and large battery-powered spectrum analyzer (SA) underestimate path loss by up to 33.6 and 8.2 dB, respectively. Measurements utilizing a VNA with baluns, or large battery-powered SAs with baluns still underestimate path loss by up to 24.3 and 6.7 dB, respectively. Conclusion: Measurements of path loss in capacitive HBC systems strongly depend on instrumentation configurations. It is thus imperative to simulate or measure path loss in capacitive HBC systems utilizing realistic geometries and grounding configurations. Significance: HBC has a great potential for many emerging wearable devices and applications; accurate path loss estimation will improve system-level design leading to viable products.",
"title": ""
},
{
"docid": "f95bc42d41f4c7448950fa4e1a47ac9a",
"text": "In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset ‘NarrativeQA’. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62% (ROUGE-L) relative improvement.",
"title": ""
},
{
"docid": "b436f69bfc140d417a889a456abb6b8d",
"text": "In scientific research, it is often difficult to express information needs as simple keyword queries. We present a more natural way of searching for relevant scientific literature. Rather than a string of keywords, we define a query as a small set of papers deemed relevant to the research task at hand. By optimizing an objective function based on a fine-grained notion of influence between documents, our approach efficiently selects a set of highly relevant articles. Moreover, as scientists trust some authors more than others, results are personalized to individual preferences. In a user study, researchers found the papers recommended by our method to be more useful, trustworthy and diverse than those selected by popular alternatives, such as Google Scholar and a state-of-the-art topic modeling approach.",
"title": ""
},
{
"docid": "04b14e2795afc0faaa376bc17ead0aaf",
"text": "In this paper, an integrated MEMS gyroscope array method composed of two levels of optimal filtering was designed to improve the accuracy of gyroscopes. In the firstlevel filtering, several identical gyroscopes were combined through Kalman filtering into a single effective device, whose performance could surpass that of any individual sensor. The key of the performance improving lies in the optimal estimation of the random noise sources such as rate random walk and angular random walk for compensating the measurement values. Especially, the cross correlation between the noises from different gyroscopes of the same type was used to establish the system noise covariance matrix and the measurement noise covariance matrix for Kalman filtering to improve the performance further. Secondly, an integrated Kalman filter with six states was designed to further improve the accuracy with the aid of external sensors such as magnetometers and accelerometers in attitude determination. Experiments showed that three gyroscopes with a bias drift of 35 degree per hour could be combined into a virtual gyroscope with a drift of 1.07 degree per hour through the first-level filter, and the bias drift was reduced to 0.53 degree per hour after the second-level filtering. It proved that the proposed integrated MEMS gyroscope array is capable of improving the accuracy of the MEMS gyroscopes, which provides the possibility of using these low cost MEMS sensors in high-accuracy application areas.",
"title": ""
},
{
"docid": "531aad1188cb41024ce0e3f397e35252",
"text": "CMF is a technique for simultaneously learning low-rank representations based on a collection of matrices with shared entities. A typical example is the joint modeling of useritem, item-property, and user-feature matrices in a recommender system. The key idea in CMF is that the embeddings are shared across the matrices, which enables transferring information between them. The existing solutions, however, break down when the individual matrices have low-rank structure not shared with others. In this work we present a novel CMF solution that allows each of the matrices to have a separate low-rank structure that is independent of the other matrices, as well as structures that are shared only by a subset of them. We compare MAP and variational Bayesian solutions based on alternating optimization algorithms and show that the model automatically infers the nature of each factor using group-wise sparsity. Our approach supports in a principled way continuous, binary and count observations and is efficient for sparse matrices involving missing data. We illustrate the solution on a number of examples, focusing in particular on an interesting use-case of augmented multi-view learning.",
"title": ""
},
{
"docid": "376471fa0c721de5a319e990a5dbccc8",
"text": "The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings.",
"title": ""
},
{
"docid": "fde9d6a4fc1594a1767e84c62c7d3b89",
"text": "This paper explores the effects of emotions embedded in a seller review on its perceived helpfulness to readers. Drawing on frameworks in literature on emotion and cognitive processing, we propose that over and above a well-known negativity bias, the impact of discrete emotions in a review will vary, and that one source of this variance is reader perceptions of reviewers’ cognitive effort. We focus on the roles of two distinct, negative emotions common to seller reviews: anxiety and anger. In the first two studies, experimental methods were utilized to identify and explain the differential impact of anxiety and anger in terms of perceived reviewer effort. In the third study, seller reviews from Yahoo! Shopping web sites were collected to examine the relationship between emotional review content and helpfulness ratings. Our findings demonstrate the importance of examining discrete emotions in online word-of-mouth, and they carry important practical implications for consumers and online retailers.",
"title": ""
},
{
"docid": "92dd1a9575cd7f733660fb4772c56e02",
"text": "A novel charge-imbalance termination region for high-voltage trench superjunction (SJ) vertical diffused MOSFETs (SJ-VDMOSs) is proposed and discussed in this letter. Its breakdown characteristics are investigated theoretically and experimentally. A simple and meaningful analytical-solution method is proposed, and it agrees with the simulation and experimental results. As a result, the novel imbalance termination can suppress the edge-drift potential more effectively than the conventional one in the off state. When the trench SJ-VDMOS was compared with a conventional termination structure of the same size, the device improved the breakdown voltage (BV) by about 8% using the proposed termination structure. Experimentally, a BV of 715 V was obtained in the trench SJ-VDMOS with a 35-μm trench on a 45-μm epitaxial layer and a 90- μm termination region.",
"title": ""
},
{
"docid": "91962ae5eef24706a9644957dda5b539",
"text": "This demonstration is based on the wafer-scale neuromophic system presented in the previous papers by Schemmel et. al. (20120), Scholze et. al. (2011) and Millner et. al. (2010). The demonstration setup will allow the visitors to monitor and partially manipulate the neural events at every level. They will get an insight into the complex interplay between packet-based and realtime communication necessary to combine continuous-time mixed-signal neural networks with a packet-based transport network. Several network experiments implemented on the setup will be accessible for user interaction.",
"title": ""
},
{
"docid": "a7bf370e83bd37ed4f83c3846cfaaf97",
"text": "This paper presents the design and implementation of an evanescent tunable combline filter based on electronic tuning with the use of RF-MEMS capacitor banks. The use of MEMS tuning circuit results in the compact implementation of the proposed filter with high-Q and near to zero DC power consumption. The proposed filter consist of combline resonators with tuning disks that are loaded with RF-MEMS capacitor banks. A two-pole filter is designed and measured based on the proposed tuning concept. The filter operates at 2.5 GHz with a bandwidth of 22 MHz. Measurement results demonstrate a tuning range of 110 MHz while the quality factor is above 374 (1300–374 over the tuning range).",
"title": ""
}
] |
scidocsrr
|
09d2fd2ee581d160a029aa138efd5d59
|
A secure distributed framework for achieving k-anonymity
|
[
{
"docid": "21a356afff7c7b31895a3c11c2231d28",
"text": "There has been concern over the apparent conflict between privacy and data mining. There is no inherent conflict, as most types of data mining produce summary results that do not reveal information about individuals. The process of data mining may use private data, leading to the potential for privacy breaches. Secure Multiparty Computation shows that results can be produced without revealing the data used to generate them. The problem is that general techniques for secure multiparty computation do not scale to data-mining size computations. This paper presents an efficient protocol for securely determining the size of set intersection, and shows how this can be used to generate association rules where multiple parties have different (and private) information about the same set of individuals.",
"title": ""
},
{
"docid": "83f59014cebd1f0fb65d76b7239194e1",
"text": "The increase in volume and sensitivity of data communicated and processed over the Internet has been accompanied by a corresponding need for e-commerce techniques in which entities can participate in a secure and anonymous fashion. Even simple arithmetic operations over a set of integers partitioned over a network require sophisticated algorithms. As a part of our earlier work, we have developed a secure protocol for computing dot products of two vectors. In this paper,we present a secure protocol for Yao’s millionaires’ problem. In this problem, each of the two participating parties have a number and the objective is to determine whose number is larger without disclosing any information about the numbers. This problem has direct applications in on-line bidding and auctions. Furthermore, combined with a secure dot-product, a solution to this secure multiparty computation provides necessary building blocks for such basic operations as frequent item-set generation in association rule mining. Although an asymptotically optimal solution for the secure multiparty computation of the ‘less-or-equal’ predicate exists in literature, this protocol is not suited for practical applications. Here, we present a protocol which has a much simpler structure and is more efficient for numbers in ranges practically encountered in typical ecommerce applications. Furthermore, advances in cryptanalysis and the subsequent increase in key lengths for public-key cryptographic systems accentuate the advantage of the proposed protocol. We present experimental evidence demonstrating the efficiency of the proposed protocol both in terms of time and communication overhead.",
"title": ""
}
] |
[
{
"docid": "a827d89c56521de7dff8a59039c52181",
"text": "A set of tools is being prepared in the frame of ESA activity [18191/04/NL] labelled: \"Mars Rover Chassis Evaluation Tools\" to support design, selection and optimisation of space exploration rovers in Europe. This activity is carried out jointly by Contraves Space as prime contractor, EPFL, DLR, Surrey Space Centre and EADS Space Transportation. This paper describes the current results of this study and its intended used for selection, design and optimisation on different wheeled vehicles. These tools would also allow future developments for a more efficient motion control on rover. INTRODUCTION AND MOTIVATION A set of tools is being developed to support the design of planetary rovers in Europe. The RCET will enable accurate predictions and characterisations of rover performances as related to the locomotion subsystem. This infrastructure consists of both S/W and H/W elements that will be interwoven to result in a user-friendly environment. The actual need for mobility increased in terms of range and duration. In this respect, redesigning specific aspects of the past rover concepts, in particular the development of most suitable all terrain performances is appropriate [9]. Analysis and design methodologies for terrestrial surface vehicles to operate on unprepared surfaces have been successfully applied to planet rover developments for the first time during the Apollo LRV manned lunar rover programme of the late 1960’s and early 1970’s [1,2]. Key to this accomplishment and to rational surface vehicle designs in general are quantitative descriptions of the terrain and of the interaction between the terrain and the vehicle. Not only the wheel/ground interaction is essential for efficient locomotion, but also the rover kinematics concepts. In recent terrestrial off-the-road vehicle development and acquisition, especially in the military, the so-called ‘Virtual Proving Ground’ (VPG) Simulation Technology has become essential. The integrated environments previously available to design engineers involved sophisticated hardware and software and cost hundreds of thousands of Euros. The experimentation and operational costs associated with the use of such instruments were even more alarming. The promise of VPG is to lower the risk and cost in vehicle definition and design by allowing early concept characterisation and trade-off’s based on numerical models without having to rely on prototyping for concept assessment. A similar approach is proposed for future European planetary rover programmes and is to be enabled by RCET. The first part of this paper describes the methodology used in the RCET activity and gives an overview of the different tools under development. The next section details the theory and modules used for the simulation. Finally the last section relates the first results, the future work and concludes this paper. In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2 4, 2004",
"title": ""
},
{
"docid": "9b62633b700a275ae25dd49bc1e459a0",
"text": "We describe a new supervised machine learning approach for detecting authorship deception, a specific type of authorship attribution task particularly relevant for cybercrime forensic investigations, and demonstrate its validity on two case studies drawn from realistic online data sets. The core of our approach involves identifying uncharacteristic behavior for an author, based on a writeprint extracted from unstructured text samples of the author’s writing. The writeprints used here involve stylometric features and content features derived from topic models, an unsupervised approach for identifying relevant keywords that relate to the content areas of a document. One innovation of our approach is to transform the writeprint feature values into a representation that individually balances characteristic and uncharacteristic traits of an author, and we subsequently apply a Sparse Multinomial Logistic Regression classifier to this novel representation. Our method yields high accuracy for authorship deception detection on the two case studies, confirming its utility. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "e1b9795030dac51172c20a49113fac23",
"text": "Bin packing problems are a class of optimization problems that have numerous applications in the industrial world, ranging from efficient cutting of material to packing various items in a larger container. We consider here only rectangular items cut off an infinite strip of material as well as off larger sheets of fixed dimensions. This problem has been around for many years and a great number of publications can be found on the subject. Nevertheless, it is often difficult to reconcile a theoretical paper and practical application of it. The present work aims to create simple but, at the same time, fast and efficient algorithms, which would allow one to write high-speed and capable software that can be used in a real-time application.",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "c5bbdfc0da1635ad0a007e60e224962f",
"text": "Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent. In this paper we critically analyze this method and its properties, and show how it can be viewed as a type of approximate 2nd-order optimization method, where the Fisher information matrix used to compute the natural gradient direction can be viewed as an approximation of the Hessian. This perspective turns out to have significant implications for how to design a practical and robust version of the method. Among our various other contributions is a thorough analysis of the convergence speed of natural gradient descent and more general stochastic methods, a critical examination of the oft-used “empirical” approximation of the Fisher matrix, and an analysis of the (approximate) parameterization invariance property possessed by the method, which we show still holds for certain other choices of the curvature matrix, but notably not the Hessian. ∗jmartens@cs.toronto.edu 1 ar X iv :1 41 2. 11 93 v5 [ cs .L G ] 1 O ct 2 01 5",
"title": ""
},
{
"docid": "db7a4ab8d233119806e7edf2a34fffd1",
"text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.",
"title": ""
},
{
"docid": "ba29af46fd410829c450eed631aa9280",
"text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.",
"title": ""
},
{
"docid": "7a6876aa158c9bc717bd77319f4d2494",
"text": "Scripts encode knowledge of prototypical sequences of events. We describe a Recurrent Neural Network model for statistical script learning using Long Short-Term Memory, an architecture which has been demonstrated to work well on a range of Artificial Intelligence tasks. We evaluate our system on two tasks, inferring held-out events from text and inferring novel events from text, substantially outperforming prior approaches on both tasks.",
"title": ""
},
{
"docid": "1eaad8b6a2bde878373f37fe7e67b48c",
"text": "Speech separation can be formulated as a classification problem. In classification-based speech separation, supervised learning is employed to classify time-frequency units as either speech-dominant or noise-dominant. In very low signal-to-noise ratio (SNR) conditions, acoustic features extracted from a mixture are crucial for correct classification. In this study, we systematically evaluate a range of promising features for classification-based separation using six nonstationary noises at the low SNR level of -5 dB, which is chosen with the goal of improving human speech intelligibility in mind. In addition, we propose a new feature called multi-resolution cochleagram (MRCG). The new feature is constructed by combining four cochleagrams at different spectrotemporal resolutions in order to capture both the local and contextual information. Experimental results show that MRCG gives the best classification results among all evaluated features. In addition, our results indicate that auto-regressive moving average (ARMA) filtering, a post-processing technique for improving automatic speech recognition features, also improves many acoustic features for speech separation.",
"title": ""
},
{
"docid": "ea1f836ba53e49663d5b7f480a2f8772",
"text": "Strengths and weaknesses of modern widebandwidth bipolar transistor operational amplifiers are investigated and compared with respect to bandwidth, slew rate, noise, distortion, and power. This paper traces the evolution of operational amplifier designs since vacuum tube days to give a perspective of the large number of circuit variations used over time. Of particular value is the ability to use many of these circuit design options as the basis of new amplifiers. In addition, an array of operational amplifier components fabricated on the AT&T CBIC V2 [1] process is described. This design incorporates many of the architectural techniques that Vin have evolved over the years to produce four separate operational amplifier on a single base wafer. The process design methodology requires identifying the common elements in each architecture and the minimum number of additional components required to implement four unique architectures on the array. +V",
"title": ""
},
{
"docid": "645e69205aea3887d954f825306a1052",
"text": "Continuous outlier detection in data streams has important applications in fraud detection, network security, and public health. The arrival and departure of data objects in a streaming manner impose new challenges for outlier detection algorithms, especially in time and space efficiency. In the past decade, several studies have been performed to address the problem of distance-based outlier detection in data streams (DODDS), which adopts an unsupervised definition and does not have any distributional assumptions on data values. Our work is motivated by the lack of comparative evaluation among the state-of-the-art algorithms using the same datasets on the same platform. We systematically evaluate the most recent algorithms for DODDS under various stream settings and outlier rates. Our extensive results show that in most settings, the MCOD algorithm offers the superior performance among all the algorithms, including the most recent algorithm Thresh LEAP.",
"title": ""
},
{
"docid": "a0d6536cd8c85fe87cb316f92b489d32",
"text": "As a design of information-centric network architecture, Named Data Networking (NDN) provides content-based security. The signature binding the name with the content is the key point of content-based security in NDN. However, signing a content will introduce a significant computation overhead, especially for dynamically generated content. Adversaries can take advantages of such computation overhead to deplete the resources of the content provider. In this paper, we propose Interest Cash, an application-based countermeasure against Interest Flooding for dynamic content. Interest Cash requires a content consumer to solve a puzzle before it sends an Interest. The content consumer should provide a solution to this puzzle as cash to get the signing service from the content provider. The experiment shows that an adversary has to use more than 300 times computation resources of the content provider to commit a successful attack when Interest Cash is used.",
"title": ""
},
{
"docid": "3a2740b7f65841f7eb4f74a1fb3c9b65",
"text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.",
"title": ""
},
{
"docid": "4138f62dfaefe49dd974379561fb6fea",
"text": "For a set of 1D vectors, standard singular value decomposition (SVD) is frequently applied. For a set of 2D objects such as images or weather maps, we form 2DSVD, which computes principal eigenvectors of rowrow and column-column covariance matrices, exactly as in the standard SVD. We study optimality properties of 2DSVD as low-rank approximation and show that it provides a framework unifying two recent approaches. Experiments on images and weather maps illustrate the usefulness of 2DSVD.",
"title": ""
},
{
"docid": "41c3505d1341247972d99319cba3e7ba",
"text": "A 32-year-old pregnant woman in the 25th week of pregnancy underwent oral glucose tolerance screening at the diabetologist's. Later that day, she was found dead in her apartment possibly poisoned with Chlumsky disinfectant solution (solutio phenoli camphorata). An autopsy revealed chemical burns in the digestive system. The lungs and the brain showed signs of severe edema. The blood of the woman and fetus was analyzed using gas chromatography with mass spectrometry and revealed phenol, its metabolites (phenyl glucuronide and phenyl sulfate) and camphor. No ethanol was found in the blood samples. Both phenol and camphor are contained in Chlumsky disinfectant solution, which is used for disinfecting surgical equipment in healthcare facilities. Further investigation revealed that the deceased woman had been accidentally administered a disinfectant instead of a glucose solution by the nurse, which resulted in acute intoxication followed by the death of the pregnant woman and the fetus.",
"title": ""
},
{
"docid": "8fa61b7d1844eee81d1e02b12b654b16",
"text": "Time series are ubiquitous, and a measure to assess their similarity is a core part of many computational systems. In particular, the similarity measure is the most essential ingredient of time series clustering and classification systems. Because of this importance, countless approaches to estimate time series similarity have been proposed. However, there is a lack of comparative studies using empirical, rigorous, quantitative, and large-scale assessment strategies. In this article, we provide an extensive evaluation of similarity measures for time series classification following the aforementioned principles. We consider 7 different measures coming from alternative measure ‘families’, and 45 publicly-available time series data sets coming from a wide variety of scientific domains. We focus on out-of-sample classification accuracy, but in-sample accuracies and parameter choices are also discussed. Our work is based on rigorous evaluation methodologies and includes the use of powerful statistical significance tests to derive meaningful conclusions. The obtained results show the equivalence, in terms of accuracy, of a number of measures, but with one single candidate outperforming the rest. Such findings, together with the followed methodology, invite researchers on the field to adopt a more consistent evaluation criteria and a more informed decision regarding the baseline measures to which new developments should be compared.",
"title": ""
},
{
"docid": "e983898bf746ecb5ea8590f3d3beb337",
"text": "The concept of Bitcoin was first introduced by an unknown individual (or a group of people) named Satoshi Nakamoto before it was released as open-source software in 2009. Bitcoin is a peer-to-peer cryptocurrency and a decentralized worldwide payment system for digital currency where transactions take place among users without any intermediary. Bitcoin transactions are performed and verified by network nodes and then registered in a public ledger called blockchain, which is maintained by network entities running Bitcoin software. To date, this cryptocurrency is worth close to U.S. $150 billion and widely traded across the world. However, as Bitcoin’s popularity grows, many security concerns are coming to the forefront. Overall, Bitcoin security inevitably depends upon the distributed protocols-based stimulant-compatible proof-of-work that is being run by network entities called miners, who are anticipated to primarily maintain the blockchain (ledger). As a result, many researchers are exploring new threats to the entire system, introducing new countermeasures, and therefore anticipating new security trends. In this survey paper, we conduct an intensive study that explores key security concerns. We first start by presenting a global overview of the Bitcoin protocol as well as its major components. Next, we detail the existing threats and weaknesses of the Bitcoin system and its main technologies including the blockchain protocol. Last, we discuss current existing security studies and solutions and summarize open research challenges and trends for future research in Bitcoin security.",
"title": ""
},
{
"docid": "fcd98a7540dd59e74ea71b589c255adb",
"text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.",
"title": ""
},
{
"docid": "038c10660f6dcd354dd54027bd9e65eb",
"text": "A new architecture for a very fast and secure public key crypto-coprocessor Crypto@1408Bit usable in Smart Card ICs is presented. The key elements of Crypto@1408Bit architecture are a very fast Look Ahead Algorithm for modular multiplication, a very fast and secure serial-parallel adder, a fast and chip area efficient carry handling and a sufficient number of working registers enabling easy programming. With this architecture a new dimension of crypto performance and security against side channel attacks is achieved. Compared to crypto-coprocessors currently available on the Smart Card IC market Crypto@1408Bit offers a performance more than an order of magnitude faster. The security of the crypto-coprocessor relies on hardware and software security features like dual-rail-security logic against differential power attacks, high secure registers for critical operands and an register length with up to 128 Bit buffer for randomization of operands.",
"title": ""
},
{
"docid": "9a3f49d9c8ac513124e75b59f5547a78",
"text": "359 Abstract— Goniometry has been widely used to analyze human motion. The goniometer is a tool to measure the angular change on systems of a single degree of freedom. However, it is inappropriate to detect movements with multiple degrees of freedom. Kinovea is a free software application for the analysis, comparison and evaluation of movement. Generally, used to evaluate the progress of an athlete in training. Many studies in the literature have proposed solutions for measuring combined movements, especially in lower limbs. In this paper, we discuss the possibility to use Kinovea in rehabilitation movements for lower limbs. We used a webcam to record the movement of patient's leg. The detection and analysis was carry out using Kinovea with position markers to measure angular positions of lower limbs. To find the angle of the hip and knee, a mathematical model based on a robot of two degrees of freedom was proposed. The results of position, velocity and acceleration for ankle and knee was presented in a XY plane. In addition, the angular measure of hip and knee was obtained using the inverse kinematics of a 2RR robot.",
"title": ""
}
] |
scidocsrr
|
c7324bf9c0aba75b8812869ace2e6518
|
Online Semi-Supervised Learning with Deep Hybrid Boltzmann Machines and Denoising Autoencoders
|
[
{
"docid": "b408788cd974438f32c1858cda9ff910",
"text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.",
"title": ""
},
{
"docid": "ef04d580d7c1ab165335145c13a1701f",
"text": "Finding good representations of text documents is crucial in information retrieval and classification systems. Today the most popular document representation is based on a vector of word counts in the document. This representation neither captures dependencies between related words, nor handles synonyms or polysemous words. In this paper, we propose an algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network. The model can be trained efficiently on partially labeled corpora, producing very compact representations of documents, while retaining as much class information and joint word statistics as possible. We show that it is advantageous to exploit even a few labeled samples during training.",
"title": ""
},
{
"docid": "6eeeb343309fc24326ed42b62d5524b1",
"text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"title": ""
}
] |
[
{
"docid": "381ce2a247bfef93c67a3c3937a29b5a",
"text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "2292c60d69c94f31c2831c2f21c327d8",
"text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis.",
"title": ""
},
{
"docid": "cd4e04370b1e8b1f190a3533c3f4afe2",
"text": "Perception of depth is a central problem m machine vision. Stereo is an attractive technique for depth perception because, compared with monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements, and unlike \"active\" approaches such as radar and laser ranging, it is suitable in almost all application domains. Computational stereo is broadly defined as the recovery of the three-dimensional characteristics of a scene from multiple images taken from different points of view. First, each of the functional components of the computational stereo paradigm--image acquLsition, camera modeling, feature acquisition, image matching, depth determination, and interpolation--is identified and discussed. Then, the criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented. Finally a representative sampling of computational stereo research is provided.",
"title": ""
},
{
"docid": "548525974665303b813b1614dd39350c",
"text": "We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder-decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow to learn the effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and demonstrate its effectiveness through quantitative and qualitative evaluation.",
"title": ""
},
{
"docid": "313c68843b2521d553772dd024eec202",
"text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "20ce6bde3c15b63cad0a421282dbcdc6",
"text": "Baseline detection is still a challenging task for heterogeneous collections of historical documents. We present a novel approach to baseline extraction in such settings, turning out the winning entry to the ICDAR 2017 Competition on Baseline detection (cBAD). It utilizes deep convolutional nets (CNNs) for both, the actual extraction of baselines, as well as for a simple form of layout analysis in a pre-processing step. To the best of our knowledge it is the first CNN-based system for baseline extraction applying a U-net architecture and sliding window detection, profiting from a high local accuracy of the candidate lines extracted. Final baseline post-processing complements our approach, compensating for inaccuracies mainly due to missing context information during sliding window detection. We experimentally evaluate the components of our system individually on the cBAD dataset. Moreover, we investigate how it generalizes to different data by means of the dataset used for the baseline extraction task of the ICDAR 2017 Competition on Layout Analysis for Challenging Medieval Manuscripts (HisDoc). A comparison with the results reported for HisDoc shows that it also outperforms the contestants of the latter.",
"title": ""
},
{
"docid": "c2c0ed74c63c479d772a743a167c18b3",
"text": "Neural networks has been successfully used in the processing of Lidar data, especially in the scenario of autonomous driving. However, existing methods heavily rely on pre-processing of the pulse signals derived from Lidar sensors and therefore result in high computational overhead and considerable latency. In this paper, we proposed an approach utilizing Spiking Neural Network (SNN) to address the object recognition problem directly with raw temporal pulses. To help with the evaluation and benchmarking, a comprehensive temporal pulses data-set was created to simulate Lidar reflection in different road scenarios. Being tested with regard to recognition accuracy and time efficiency under different noise conditions, our proposed method shows remarkable performance with the inference accuracy up to 99.83% (with 10% noise) and the average recognition delay as low as 265 ns. It highlights the potential of SNN in autonomous driving and some related applications. In particular, to our best knowledge, this is the first attempt to use SNN to directly perform object recognition on raw Lidar temporal pulses.",
"title": ""
},
{
"docid": "3d47cbee5b76ea68a12f6e026fbc2abf",
"text": "This paper presents the first realtime 3D eye gaze capture method that simultaneously captures the coordinated movement of 3D eye gaze, head poses and facial expression deformation using a single RGB camera. Our key idea is to complement a realtime 3D facial performance capture system with an efficient 3D eye gaze tracker. We start the process by automatically detecting important 2D facial features for each frame. The detected facial features are then used to reconstruct 3D head poses and large-scale facial deformation using multi-linear expression deformation models. Next, we introduce a novel user-independent classification method for extracting iris and pupil pixels in each frame. We formulate the 3D eye gaze tracker in the Maximum A Posterior (MAP) framework, which sequentially infers the most probable state of 3D eye gaze at each frame. The eye gaze tracker could fail when eye blinking occurs. We further introduce an efficient eye close detector to improve the robustness and accuracy of the eye gaze tracker. We have tested our system on both live video streams and the Internet videos, demonstrating its accuracy and robustness under a variety of uncontrolled lighting conditions and overcoming significant differences of races, genders, shapes, poses and expressions across individuals.",
"title": ""
},
{
"docid": "98e0f92258df3caf516e257fa40e96b0",
"text": "In this paper, we introduce individualness of detection candidates as a complement to objectness for pedestrian detection. The individualness assigns a single detection for each object out of raw detection candidates given by either object proposals or sliding windows. We show that conventional approaches, such as non-maximum suppression, are sub-optimal since they suppress nearby detections using only detection scores. We use a determinantal point process combined with the individualness to optimally select final detections. It models each detection using its quality and similarity to other detections based on the individualness. Then, detections with high detection scores and low correlations are selected by measuring their probability using a determinant of a matrix, which is composed of quality terms on the diagonal entries and similarities on the off-diagonal entries. For concreteness, we focus on the pedestrian detection problem as it is one of the most challenging problems due to frequent occlusions and unpredictable human motions. Experimental results demonstrate that the proposed algorithm works favorably against existing methods, including non-maximal suppression and a quadratic unconstrained binary optimization based method.",
"title": ""
},
{
"docid": "e42ed44464fa4df2514e7560da2eb837",
"text": "The combination of the compactness of networks, featuring small diameters, and their complex architectures results in a variety of critical effects dramatically different from those in cooperative systems on lattices. In the last few years, researchers have made important steps toward understanding the qualitatively new critical phenomena in complex networks. We review the results, concepts, and methods of this rapidly developing field. Here we mostly consider two closely related classes of these critical phenomena, namely structural phase transitions in the network architectures and transitions in cooperative models on networks as substrates. We also discuss systems where a network and interacting agents on it influence each other. We overview a wide range of critical phenomena in equilibrium and growing networks including the birth of the giant connected component, percolation, k-core percolation, phenomena near epidemic thresholds, condensation transitions, critical phenomena in spin models placed on networks, synchronization, and self-organized criticality effects in interacting systems on networks. We also discuss strong finite size effects in these systems and highlight open problems and perspectives.",
"title": ""
},
{
"docid": "7643347a62e8835b5cc4b1b432f504c1",
"text": "Simulation systems have become an essential component in the development and validation of autonomous driving technologies. The prevailing state-of-the-art approach for simulation is to use game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (e.g., the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance. In this paper we present a novel approach to address these issues: Augmented Autonomous Driving Simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photo-realistic simulation images and renderings. More specifically, we use LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generate highly plausible traffic flows for cars and pedestrians and compose them into the background. The composite images can be re-synthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photo-realistic, fully annotated, and ready for end-to-end training and testing of autonomous driving systems from perception to planning. We explain our system design and validate our algorithms with a number of autonomous driving tasks from detection to segmentation and predictions. Compared to traditional approaches, our method offers unmatched scalability and realism. Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.",
"title": ""
},
{
"docid": "5a4aa3f4ff68fab80d7809ff04a25a3b",
"text": "OBJECTIVE\nThe technique of short segment pedicle screw fixation (SSPSF) has been widely used for stabilization in thoracolumbar burst fractures (TLBFs), but some studies reported high rate of kyphosis recurrence or hardware failure. This study was to evaluate the results of SSPSF including fractured level and to find the risk factors concerned with the kyphosis recurrence in TLBFs.\n\n\nMETHODS\nThis study included 42 patients, including 25 males and 17 females, who underwent SSPSF for stabilization of TLBFs between January 2003 and December 2010. For radiologic assessments, Cobb angle (CA), vertebral wedge angle (VWA), vertebral body compression ratio (VBCR), and difference between VWA and Cobb angle (DbVC) were measured. The relationships between kyphosis recurrence and radiologic parameters or demographic features were investigated. Frankel classification and low back outcome score (LBOS) were used for assessment of clinical outcomes.\n\n\nRESULTS\nThe mean follow-up period was 38.6 months. CA, VWA, and VBCR were improved after SSPSF, and these parameters were well maintained at the final follow-up with minimal degree of correction loss. Kyphosis recurrence showed a significant increase in patients with Denis burst type A, load-sharing classification (LSC) score >6 or DbVC >6 (p<0.05). There were no patients who worsened to clinical outcome, and there was no significant correlation between kyphosis recurrence and clinical outcome in this series.\n\n\nCONCLUSION\nSSPSF including the fractured vertebra is an effective surgical method for restoration and maintenance of vertebral column stability in TLBFs. However, kyphosis recurrence was significantly associated with Denis burst type A fracture, LSC score >6, or DbVC >6.",
"title": ""
},
{
"docid": "c536e79078d7d5778895e5ac7f02c95e",
"text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.",
"title": ""
},
{
"docid": "57457909ea5fbee78eccc36c02464942",
"text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.",
"title": ""
},
{
"docid": "e5bbf88eedf547551d28a731bd4ebed7",
"text": "We conduct an empirical study to test the ability of convolutional neural networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.",
"title": ""
},
{
"docid": "16bfb378b82af79cdb8d82d8e152303a",
"text": "Efficient methods for storing and querying are critical for scaling high-order m-gram language models to large corpora. We propose a language model based on compressed suffix trees, a representation that is highly compact and can be easily held in memory, while supporting queries needed in computing language model probabilities on-the-fly. We present several optimisations which improve query runtimes up to 2500×, despite only incurring a modest increase in construction time and memory usage. For large corpora and high Markov orders, our method is highly competitive with the state-of-the-art KenLM package. It imposes much lower memory requirements, often by orders of magnitude, and has runtimes that are either similar (for training) or comparable (for querying).",
"title": ""
},
{
"docid": "9b1f40687d0c9b78efdf6d1e19769294",
"text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.",
"title": ""
}
] |
scidocsrr
|
0f9f814d6a81c6c21c47232ab752276e
|
A Study of Machine Learning in Wireless Sensor Network
|
[
{
"docid": "3f24525276e36ea087a04cb79ee25a95",
"text": "We consider the problem of estimating the geographic locations of nodes in a wireless sensor network where most sensors are without an effective self-positioning functionality. We propose LSVM-a novel solution with the following merits. First, LSVM localizes the network based on mere connectivity information (that is, hop counts only) and therefore is simple and does not require specialized ranging hardware or assisting mobile devices as in most existing techniques. Second, LSVM is based on Support Vector Machine (SVM) learning. Although SVM is a classification method, we show its applicability to the localization problem and prove that the localization error can be upper bounded by any small threshold given an appropriate training data size. Third, LSVM addresses the border and coverage-hole problems effectively. Last but not least, LSVM offers fast localization in a distributed manner with efficient use of processing and communication resources. We also propose a modified version of mass-spring optimization to further improve the location estimation in LSVM. The promising performance of LSVM is exhibited by our simulation study.",
"title": ""
}
] |
[
{
"docid": "c4e92e313fbad1299340c76902b5ef35",
"text": "This paper presents the simple and inexpensive method to implement a square-root extractor for voltage input signal. The proposed extractor is based on the use of two operational amplifiers (op amps) as only active elements. The proposed technique employs the op amp supply-current sensing to achieve an inherently quadratic characteristic. The low-output distortion in output signal can be achieved. Experimental results verifying the characteristic of the proposed circuit are also included.",
"title": ""
},
{
"docid": "fbaf790dd8a59516bc4d1734021400fd",
"text": "With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work, and show that they perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech. A combination of these methods is also effective against Google Perspective - a cutting-edge solution from industry. Our experiments demonstrate that adversarial training does not completely mitigate the attacks, and using character-level features makes the models systematically more attack-resistant than using word-level features.",
"title": ""
},
{
"docid": "c582742c9e2b5b3d49a83819681f2728",
"text": "Inferring topological and geometrical information from data can offer an alternative perspective on machine learning problems. Methods from topological data analysis, e.g., persistent homology, enable us to obtain such information, typically in the form of summary representations of topological features. However, such topological signatures often come with an unusual structure (e.g., multisets of intervals) that is highly impractical for most machine learning techniques. While many strategies have been proposed to map these topological signatures into machine learning compatible representations, they suffer from being agnostic to the target learning task. In contrast, we propose a technique that enables us to input topological signatures to deep neural networks and learn a task-optimal representation during training. Our approach is realized as a novel input layer with favorable theoretical properties. Classification experiments on 2D object shapes and social network graphs demonstrate the versatility of the approach and, in case of the latter, we even outperform the state-of-the-art by a large margin.",
"title": ""
},
{
"docid": "c57a689627f1af0bf872e4d0c5953a28",
"text": "Image diffusion plays a fundamental role for the task of image denoising. The recently proposed trainable nonlinear reaction diffusion (TNRD) model defines a simple but very effective framework for image denoising. However, as the TNRD model is a local model, whose diffusion behavior is purely controlled by information of local patches, it is prone to create artifacts in the homogenous regions and over-smooth highly textured regions, especially in the case of strong noise levels. Meanwhile, it is widely known that the non-local self-similarity (NSS) prior stands as an effective image prior for image denoising, which has been widely exploited in many non-local methods. In this work, we are highly motivated to embed the NSS prior into the TNRD model to tackle its weaknesses. In order to preserve the expected property that end-to-end training remains available, we exploit the NSS prior by defining a set of non-local filters, and derive our proposed trainable non-local reaction diffusion (TNLRD) model for image denoising. Together with the local filters and influence functions, the non-local filters are learned by employing loss-specific training. The experimental results show that the trained TNLRD model produces visually plausible recovered images with more textures and less artifacts, compared to its local versions. Moreover, the trained TNLRD model can achieve strongly competitive performance to recent state-of-the-art image denoising methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).",
"title": ""
},
{
"docid": "0ae071bc719fdaac34a59991e66ab2b8",
"text": "It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.",
"title": ""
},
{
"docid": "042671fbf6cc2f1d87823673b559565b",
"text": "We present a novel system for the real-time detection and recognition of traffic symbols. Candidate regions are detected as Maximally Stable Extremal Regions (MSER) from which Histogram of Oriented Gradients (HOG) features are derived, and recognition is then performed using Random Forests. The training data comprises a set of synthetically generated images, created by applying randomised distortions to graphical template images taken from an on-line database. This approach eliminates the need for real training images and makes it easy to include all possible signs. Our proposed method can operate under a range of weather conditions at an average speed of 20 fps and is accurate even at high vehicle speeds. Comprehensive comparative results are provided to illustrate the performance of the system.",
"title": ""
},
{
"docid": "77411fad048151144ce65b804957b4ed",
"text": "We introduce a new design for the visual analysis of eye tracking data recorded from dynamic stimuli such as video. ISeeCube includes multiple coordinated views to support different aspects of various analysis tasks. It combines methods for the spatiotemporal analysis of gaze data recorded from unlabeled videos as well as the possibility to annotate and investigate dynamic Areas of Interest (AOIs). A static overview of the complete data set is provided by a space-time cube visualization that shows gaze points with density-based color mapping and spatiotemporal clustering of the data. A timeline visualization supports the analysis of dynamic AOIs and the viewers' attention on them. AOI-based scanpaths of different viewers can be clustered by their Levenshtein distance, an attention map, or the transitions between AOIs. With the provided visual analytics techniques, the exploration of eye tracking data recorded from several viewers is supported for a wide range of analysis tasks.",
"title": ""
},
{
"docid": "4261755b137a5cde3d9f33c82bc53cd7",
"text": "We study the problem of automatically extracting information networks formed by recognizable entities as well as relations among them from social media sites. Our approach consists of using state-of-the-art natural language processing tools to identify entities and extract sentences that relate such entities, followed by using text-clustering algorithms to identify the relations within the information network. We propose a new term-weighting scheme that significantly improves on the state-of-the-art in the task of relation extraction, both when used in conjunction with the standard tf ċ idf scheme and also when used as a pruning filter. We describe an effective method for identifying benchmarks for open information extraction that relies on a curated online database that is comparable to the hand-crafted evaluation datasets in the literature. From this benchmark, we derive a much larger dataset which mimics realistic conditions for the task of open information extraction. We report on extensive experiments on both datasets, which not only shed light on the accuracy levels achieved by state-of-the-art open information extraction tools, but also on how to tune such tools for better results.",
"title": ""
},
{
"docid": "42f9b18fc2b01ee267847cc762eae0d0",
"text": "In this paper, we point out that SRM (Spatial-domain Rich Model), the most successful steganalysis framework of digital images possesses a similar architecture to CNN (convolutional neural network). The reasonable expectation is that the steganalysis performance of a well-trained CNN should be comparable to or even better than that of the hand-coded SRM. However, a CNN without pre-training always get stuck at local plateaus or even diverge which result in rather poor solutions. In order to circumvent this obstacle, we use convolutional auto-encoder in the pre-training procedure. A stack of convolutional auto-encoders forms a CNN. The experimental results show that initializing a CNN with the mixture of the filters from a trained stack of convolutional auto-encoders and feature pooling layers, although still can not compete with SRM, yields superior performance compared to traditional CNN for the detection of HUGO generated stego images in BOSSBase image database.",
"title": ""
},
{
"docid": "b59281f7deb759c5126687ab8df13527",
"text": "Despite orthogeriatric management, 12% of the elderly experienced PUs after hip fracture surgery. PUs were significantly associated with a low albumin level, history of atrial fibrillation coronary artery disease, and diabetes. The risk ratio of death at 6 months associated with pressure ulcer was 2.38 (95% CI 1.31-4.32%, p = 0.044).\n\n\nINTRODUCTION\nPressure ulcers in hip fracture patients are frequent and associated with a poor outcome. An orthogeriatric management, recommended by international guidelines in hip fracture patients and including pressure ulcer prevention and treatment, could influence causes and consequences of pressure ulcer. However, remaining factors associated with pressure ulcer occurrence and prognostic value of pressure ulcer in hip fracture patients managed in an orthogeriatric care pathway remain unknown.\n\n\nMETHODS\nFrom June 2009 to April 2015, all consecutive patients with hip fracture admitted to a unit for Post-operative geriatric care were evaluated for eligibility. Patients were included if their primary presentation was due to hip fracture and if they were ≥ 70 years of age. Patients were excluded in the presence of pathological fracture or if they were already hospitalized at the time of the fracture. In our unit, orthogeriatric principles are implemented, including a multi-component intervention to improve pressure ulcer prevention and management. Patients were followed-up until 6 months after discharge.\n\n\nRESULTS\nFive hundred sixty-seven patients were included, with an overall 14.4% 6-month mortality (95% CI 11.6-17.8%). Of these, 67 patients (12%) experienced at least one pressure ulcer. Despite orthogeriatric management, pressure ulcers were significantly associated with a low albumin level (RR 0.90, 95% CI 0.84-0.96; p = 0.003) and history of atrial fibrillation (RR 1.91, 95% CI 1.05-3.46; p = 0.033), coronary artery disease (RR 2.16, 95% CI 1.17-3.99; p = 0.014), and diabetes (RR 2.33, 95% CI 1.14-4.75; p = 0.02). A pressure ulcer was associated with 6-month mortality (RR 2.38, 95% CI 1.31-4.32, p = 0.044).\n\n\nCONCLUSION\nIn elderly patients with hip fracture managed in an orthogeriatric care pathway, pressure ulcer remained associated with poorly modifiable risk factors and long-term mortality.",
"title": ""
},
{
"docid": "0bd981ea6d38817b560383f48fdfb729",
"text": "Lightweight wheelchairs are characterized by their low cost and limited range of adjustment. Our study evaluated three different folding lightweight wheelchair models using the American National Standards Institute/Rehabilitation Engineering Society of North America (ANSI/RESNA) standards to see whether quality had improved since the previous data were reported. On the basis of reports of increasing breakdown rates in the community, we hypothesized that the quality of these wheelchairs had declined. Seven of the nine wheelchairs tested failed to pass the multidrum test durability requirements. An average of 194,502 +/- 172,668 equivalent cycles was completed, which is similar to the previous test results and far below the 400,000 minimum required to pass the ANSI/RESNA requirements. This was also significantly worse than the test results for aluminum ultralight folding wheelchairs. Overall, our results uncovered some disturbing issues with these wheelchairs and suggest that manufacturers should put more effort into this category to improve quality. To improve the durability of lightweight wheelchairs, we suggested that stronger regulations be developed that require wheelchairs to be tested by independent and certified test laboratories. We also proposed a wheelchair rating system based on the National Highway Transportation Safety Administration vehicle crash ratings to assist clinicians and end users when comparing the durability of different wheelchairs.",
"title": ""
},
{
"docid": "af105dd5dca0642d119ca20661d5f633",
"text": "This paper derives the forward and inverse kinematics of a humanoid robot. The specific humanoid that the derivation is for is a robot with 27 degrees of freedom but the procedure can be easily applied to other similar humanoid platforms. First, the forward and inverse kinematics are derived for the arms and legs. Then, the kinematics for the torso and the head are solved. Finally, the forward and inverse kinematic solutions for the whole body are derived using the kinematics of arms, legs, torso, and head.",
"title": ""
},
{
"docid": "d21bceffcccb80bb13211a60e82e8a55",
"text": "Organizations can choose from software development methodologies ranging from traditional to agile approaches. Researchers surveyed project managers and other team members about their choice of methodologies. The results indicate that although agile methodologies such as Agile Unified Process and Scrum are more prevalent than 10 years ago, traditional methodologies, including the waterfall model, are still popular. Organizations are also taking a hybrid approach, using multiple methodologies on projects. Furthermore, their choice of methodologies is associated with certain organizational, project, and team characteristics.",
"title": ""
},
{
"docid": "b290b3b9db5e620e8a049ad9cd68346b",
"text": "THE USE OF OBSERVATIONAL RESEARCH METHODS in the field of palliative care is vital to building the evidence base, identifying best practices, and understanding disparities in access to and delivery of palliative care services. As discussed in the introduction to this series, research in palliative care encompasses numerous areas in which the gold standard research design, the randomized controlled trial (RCT), is not appropriate, adequate, or even possible.1,2 The difficulties in conducting RCTs in palliative care include patient and family recruitment, gate-keeping by physicians, crossover contamination, high attrition rates, small sample sizes, and limited survival times. Furthermore, a number of important issues including variation in access to palliative care and disparities in the use and provision of palliative care simply cannot be answered without observational research methods. As research in palliative care broadens to encompass study designs other than the RCT, the collective understanding of the use, strengths, and limitations of observational research methods is critical. The goals of this first paper are to introduce the major types of observational study designs, discuss the issues of precision and validity, and provide practical insights into how to critically evaluate this literature in our field.",
"title": ""
},
{
"docid": "73fefd128d5f454f52fd345814244bad",
"text": "In this paper a spatial interpolation approach, based on polar-grid representation and Kriging predictor, is proposed for 3D point cloud sampling. Discrete grid representation is a widely used technique because of its simplicity and capacity of providing an efficient and compact representation, allowing subsequent applications such as artificial perception and autonomous navigation. Two-dimensional occupancy grid representations have been studied extensively in the past two decades, and recently 2.5D and 3D grid-based approaches dominate current applications. A key challenge in perception systems for vehicular applications is to balance low computational complexity and reliable data interpretation. To this end, this paper contributes with a discrete 2.5D polar-grid that upsamples the input data, ie sparse 3D point cloud, by means of a deformable kriging-based interpolation strategy. Experiments carried out on the KITTI dataset, using data from a LIDAR, demonstrate that the approach proposed in this work allows a proper representation of urban environments.",
"title": ""
},
{
"docid": "37a8fe29046ec94d54e62f202a961129",
"text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.",
"title": ""
},
{
"docid": "42ca37dd78bf8b52da5739ad442f203f",
"text": "Frame interpolation attempts to synthesise intermediate frames given one or more consecutive video frames. In recent years, deep learning approaches, and in particular convolutional neural networks, have succeeded at tackling lowand high-level computer vision problems including frame interpolation. There are two main pursuits in this line of research, namely algorithm efficiency and reconstruction quality. In this paper, we present a multi-scale generative adversarial network for frame interpolation (FIGAN). To maximise the efficiency of our network, we propose a novel multi-scale residual estimation module where the predicted flow and synthesised frame are constructed in a coarse-tofine fashion. To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses. We evaluate the proposed approach using a collection of 60fps videos from YouTube-8m. Our results improve the state-of-the-art accuracy and efficiency, and a subjective visual quality comparable to the best performing interpolation method.",
"title": ""
},
{
"docid": "770c10c7d10abe7705d9ce89e3980485",
"text": "Keywords: Support Vector Machines Machine vision Weed identification Image segmentation Decision making This paper outlines an automatic computer vision system for the identification of avena sterilis which is a special weed seed growing in cereal crops. The final goal is to reduce the quantity of herbicide to be sprayed as an important and necessary step for precision agriculture. So, only areas where the presence of weeds is important should be sprayed. The main problems for the identification of this kind of weed are its similar spectral signature with respect the crops and also its irregular distribution in the field. It has been designed a new strategy involving two processes: image segmentation and decision making. The image segmentation combines basic suitable image processing techniques in order to extract cells from the image as the low level units. Each cell is described by two area-based attributes measuring the relations among the crops and weeds. The decision making is based on the Support Vector Machines and determines if a cell must be sprayed. The main findings of this paper are reflected in the combination of the segmentation and the Support Vector Machines decision processes. Another important contribution of this approach is the minimum requirements of the system in terms of memory and computation power if compared with other previous works. The performance of the method is illustrated by comparative analysis against some existing strategies.",
"title": ""
},
{
"docid": "de96ac151e5a3a2b38f2fa309862faee",
"text": "Venue recommendation is an important application for Location-Based Social Networks (LBSNs), such as Yelp, and has been extensively studied in recent years. Matrix Factorisation (MF) is a popular Collaborative Filtering (CF) technique that can suggest relevant venues to users based on an assumption that similar users are likely to visit similar venues. In recent years, deep neural networks have been successfully applied to tasks such as speech recognition, computer vision and natural language processing. Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word embeddings to incorporate auxiliary information (e.g. textual content of comments); and Recurrent Neural Networks (RNN) to capture sequential properties of observed user-venue interactions. However, such approaches rely on the traditional inner product of the latent factors of users and venues to capture the concept of collaborative filtering, which may not be sufficient to capture the complex structure of user-venue interactions. In this paper, we propose a Deep Recurrent Collaborative Filtering framework (DRCF) with a pairwise ranking function that aims to capture user-venue interactions in a CF manner from sequences of observed feedback by leveraging Multi-Layer Perception and Recurrent Neural Network architectures. Our proposed framework consists of two components: namely Generalised Recurrent Matrix Factorisation (GRMF) and Multi-Level Recurrent Perceptron (MLRP) models. In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors. In addition, we propose a novel sequence-based negative sampling approach that accounts for the sequential properties of observed feedback and geographical location of venues to enhance the quality of venue suggestions, as well as alleviate the cold-start users problem. Experiments on three large checkin and rating datasets show the effectiveness of our proposed framework by outperforming various state-of-the-art approaches.",
"title": ""
},
{
"docid": "84436fc1467a259e0e584da3af6f5ef7",
"text": "BACKGROUND\nMicroRNAs are short regulatory RNAs that negatively modulate protein expression at a post-transcriptional and/or translational level and are deeply involved in the pathogenesis of several types of cancers. Specifically, microRNA-221 (miR-221) is overexpressed in many human cancers, wherein accumulating evidence indicates that it functions as an oncogene. However, the function of miR-221 in human osteosarcoma has not been totally elucidated. In the present study, the effects of miR-221 on osteosarcoma and the possible mechanism by which miR-221 affected the survival, apoptosis, and cisplatin resistance of osteosarcoma were investigated.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nReal-time quantitative PCR analysis revealed miR-221 was significantly upregulated in osteosarcoma cell lines than in osteoblasts. Both human osteosarcoma cell lines SOSP-9607 and MG63 were transfected with miR-221 mimic or inhibitor to regulate miR-221 expression. The effects of miR-221 were then assessed by cell viability, cell cycle analysis, apoptosis assay, and cisplatin resistance assay. In both cells, upregulation of miR-221 induced cell survival and cisplatin resistance and reduced cell apoptosis. In addition, knockdown of miR-221 inhibited cell growth and cisplatin resistance and induced cell apoptosis. Potential target genes of miR-221 were predicted using bioinformatics. Moreover, luciferase reporter assay and western blot confirmed that PTEN was a direct target of miR-221. Furthermore, introduction of PTEN cDNA lacking 3'-UTR or PI3K inhibitor LY294002 abrogated miR-221-induced cisplatin resistance. Finally, both miR-221 and PTEN expression levels in osteosarcoma samples were examined by using real-time quantitative PCR and immunohistochemistry. High miR-221 expression level and inverse correlation between miR-221 and PTEN levels were revealed in osteosarcoma tissues.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results for the first time demonstrate that upregulation of miR-221 induces the malignant phenotype of human osteosarcoma whereas knockdown of miR-221 reverses this phenotype, suggesting that miR-221 could be a potential target for osteosarcoma treatment.",
"title": ""
}
] |
scidocsrr
|
03c6cb2bf955b631f10a06142772e92f
|
Energy adaptive MAC protocol for wireless sensor networks with RF energy transfer
|
[
{
"docid": "33a1f54064bf1d71d44c4f2476e3deea",
"text": "In this paper, two compact patch antenna designs for a new application — outdoor RF energy harvesting in powering a wireless soil sensor network — are presented. The first design is a low-profile folded shorted patch antenna (FSPA), with a small ground plane and wide impedance bandwidth. The second design is a novel FSPA structure with four pairs of slot embedded into its ground plane. Performance of both antennas was first simulated using CST Microwave Studio. Antenna prototypes were then fabricated and tested in the anechoic chamber and in their actual operating environment — an outdoor field. It was found that the FSPA with slotted ground plane achieved a comparable impedance bandwidth to the first design, with an overall size reduction of 29%. Simulations were Corresponding author: Z. W. Sim (zhiwei.sim@postgrad.manchester.ac.uk).",
"title": ""
}
] |
[
{
"docid": "b2f1fca7a05423c06cea45600582520a",
"text": "In Software Abstractions Daniel Jackson introduces an approach tosoftware design that draws on traditional formal methods but exploits automated tools to find flawsas early as possible. This approach--which Jackson calls \"lightweight formal methods\" or\"agile modeling\"--takes from formal specification the idea of a precise and expressivenotation based on a tiny core of simple and robust concepts but replaces conventional analysis basedon theorem proving with a fully automated analysis that gives designers immediate feedback. Jacksonhas developed Alloy, a language that captures the essence of software abstractions simply andsuccinctly, using a minimal toolkit of mathematical notions. This revised edition updates the text,examples, and appendixes to be fully compatible with the latest version of Alloy (Alloy 4).The designer can use automated analysis not only to correct errors but also tomake models that are more precise and elegant. This approach, Jackson says, can rescue designersfrom \"the tarpit of implementation technologies\" and return them to thinking deeply aboutunderlying concepts. Software Abstractions introduces the key elements: a logic,which provides the building blocks of the language; a language, which adds a small amount of syntaxto the logic for structuring descriptions; and an analysis, a form of constraint solving that offersboth simulation (generating sample states and executions) and checking (finding counterexamples toclaimed properties).",
"title": ""
},
{
"docid": "932934a4362bd671427954d0afb61459",
"text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.",
"title": ""
},
{
"docid": "fa260dabc7a58b760b4306e880afb821",
"text": "BACKGROUND\nPerforator-based flaps have been explored across almost all of the lower leg except in the Achilles tendon area. This paper introduced a perforator flap sourced from this area with regard to its anatomic basis and clinical applications.\n\n\nMETHODS\nTwenty-four adult cadaver legs were dissected to investigate the perforators emerging along the lateral edge of the Achilles tendon in terms of number and location relative to the tip of the lateral malleolus, and distribution. Based on the anatomic findings, perforator flaps, based on the perforator(s) of the lateral calcaneal artery (LCA) alone or in concert with the perforator of the peroneal artery (PA), were used for reconstruction of lower-posterior heel defects in eight cases. Postoperatively, subjective assessment and Semmes-Weinstein filament test were performed to evaluate the sensibility of the sural nerve-innerved area.\n\n\nRESULTS\nThe PA ended into the anterior perforating branch and LCA at the level of 6.0 ± 1.4 cm (range 3.3-9.4 cm) above the tip of the lateral malleolus. Both PA and LCA, especially the LCA, gave rise to perforators to contribute to the integument overlying the Achilles tendon. Of eight flaps, six were based on perforator(s) of the LCA and two were on perforators of the PA and LCA. Follow-up lasted for 6-28 months (mean 13.8 months), during which total flap loss and nerve injury were not found. Functional and esthetic outcomes were good in all patients.\n\n\nCONCLUSION\nThe integument overlying the Achilles tendon gets its blood supply through the perforators of the LCA primarily and that of through the PA secondarily. The LCA perforator(s)-based and the LCA plus PA perforators-based stepladder flap is a reliable, sensate flap, and should be thought of as a valuable procedure of choice for coverage of lower-posterior heel defects in selected patients.",
"title": ""
},
{
"docid": "4282aecaa7b517a852677194b8db216e",
"text": "High-level synthesis (HLS) is increasingly popular for the design of high-performance and energy-efficient heterogeneous systems, shortening time-to-market and addressing today's system complexity. HLS allows designers to work at a higher-level of abstraction by using a software program to specify the hardware functionality. Additionally, HLS is particularly interesting for designing field-programmable gate array circuits, where hardware implementations can be easily refined and replaced in the target device. Recent years have seen much activity in the HLS research community, with a plethora of HLS tool offerings, from both industry and academia. All these tools may have different input languages, perform different internal optimizations, and produce results of different quality, even for the very same input description. Hence, it is challenging to compare their performance and understand which is the best for the hardware to be implemented. We present a comprehensive analysis of recent HLS tools, as well as overview the areas of active interest in the HLS research community. We also present a first-published methodology to evaluate different HLS tools. We use our methodology to compare one commercial and three academic tools on a common set of C benchmarks, aiming at performing an in-depth evaluation in terms of performance and the use of resources.",
"title": ""
},
{
"docid": "944dd53232522155103fc2d1578041dd",
"text": "Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model’s estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm’s performance.",
"title": ""
},
{
"docid": "627725bb652abf0412ef7c78d2fb0976",
"text": "In image processing, cartoon character classification, retrieval, and synthesis are critical, so that cartoonists can effectively and efficiently make cartoons by reusing existing cartoon data. To successfully achieve these tasks, it is essential to extract visual features that comprehensively represent cartoon characters and to construct an accurate distance metric to precisely measure the dissimilarities between cartoon characters. In this paper, we introduce three visual features, color histogram, shape context, and skeleton, to characterize the color, shape, and action, respectively, of a cartoon character. These three features are complementary to each other, and each feature set is regarded as a single view. However, it is improper to concatenate these three features into a long vector, because they have different physical properties, and simply concatenating them into a high-dimensional feature vector will suffer from the so-called curse of dimensionality. Hence, we propose a semisupervised multiview distance metric learning (SSM-DML). SSM-DML learns the multiview distance metrics from multiple feature sets and from the labels of unlabeled cartoon characters simultaneously, under the umbrella of graph-based semisupervised learning. SSM-DML discovers complementary characteristics of different feature sets through an alternating optimization-based iterative algorithm. Therefore, SSM-DML can simultaneously accomplish cartoon character classification and dissimilarity measurement. On the basis of SSM-DML, we develop a novel system that composes the modules of multiview cartoon character classification, multiview graph-based cartoon synthesis, and multiview retrieval-based cartoon synthesis. Experimental evaluations based on the three modules suggest the effectiveness of SSM-DML in cartoon applications.",
"title": ""
},
{
"docid": "0eea00a997434c37f7dd7feac62134a3",
"text": "To increase the interest and engagement of middle school students in science and technology, the InterFaces project has created virtual museum guides that are in use at the Museum of Science, Boston. The characters use natural language interaction and have near photoreal appearance to increase and presents reports from museum staff on visitor reaction.",
"title": ""
},
{
"docid": "0db15fca6cab73abf74a8657a7dee1c9",
"text": "Despite the best intentions of disk and RAID manufacturers, on-disk data can still become corrupted. In this paper, we examine the effects of corruption on database management systems. Through injecting faults into the MySQL DBMS, we find that in certain cases, corruption can greatly harm the system, leading to untimely crashes, data loss, or even incorrect results. Overall, of 145 injected faults, 110 lead to serious problems. More detailed observations point us to three deficiencies: MySQL does not have the capability to detect some corruptions due to lack of redundant information, does not isolate corrupted data from valid data, and has inconsistent reactions to similar corruption scenarios. To detect and repair corruption, a DBMS is typically equipped with an offline checker. Unfortunately, the MySQL offline checker is not comprehensive in the checks it performs, misdiagnosing many corruption scenarios and missing others. Sometimes the checker itself crashes; more ominously, its incorrect checking can lead to incorrect repairs. Overall, we find that the checker does not behave correctly in 18 of 145 injected corruptions, and thus can leave the DBMS vulnerable to the problems described above.",
"title": ""
},
{
"docid": "95174e86b668aa218ed657b93c5f5e27",
"text": "Manufacturing small-molecule organic light-emitting diodes (OLEDs) via inkjet printing is rather attractive for realizing high-efficiency and long-life-span devices, yet it is challenging. In this paper, we present our efforts on systematical investigation and optimization of the ink properties and the printing process to enable facile inkjet printing of conjugated light-emitting small molecules. Various factors on influencing the inkjet-printed film quality during the droplet generation, the ink spreading on the substrates, and its solidification processes have been systematically investigated and optimized. Consequently, halogen-free inks have been developed and large-area patterning inkjet printing on flexible substrates with efficient blue emission has been successfully demonstrated. Moreover, OLEDs manufactured by inkjet printing the light-emitting small molecules manifested superior performance as compared with their corresponding spin-cast counterparts.",
"title": ""
},
{
"docid": "a9d93cb2c0d6d76a8597bcd64ecd00ba",
"text": "Hospital-based nurses (N = 832) and doctors (N = 603) in northern and eastern Spain completed a survey of job burnout, areas of work life, and management issues. Analysis of the results provides support for a mediation model of burnout that depicts employees’ energy, involvement, and efficacy as intermediary experiences between their experiences of work life and their evaluations of organizational change. The key element of this model is its focus on employees’ capacity to influence their work environments toward greater conformity with their core values. The model considers 3 aspects of that capacity: decision-making participation, organizational justice, and supervisory relationships. The analysis supports this model and emphasizes a central role for first-line supervisors in employees’ experiences of work life.jasp_563 57..75",
"title": ""
},
{
"docid": "d436517b8dd58d67cee91eb3d2c12b93",
"text": "The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network’s robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with provably minimal perturbation. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced to the ground truths; and also of defense techniques, by measuring the increase in distortion to ground truths in the hardened network versus the original. We use this technique to assess recently suggested attack and defense techniques.",
"title": ""
},
{
"docid": "54f476ed88915f815b60c33aa5dc9a17",
"text": "Datacenters are the cornerstone of the big data infrastructure supporting numerous online services. The demand for interactivity, which significantly impacts user experience and provider revenue, is translated into stringent timing requirements for flows in datacenter networks. Thus low latency networking is becoming a major concern of both industry and academia. We provide a short survey of recent progress made by the networking community for low latency datacenter networks. We propose a taxonomy to categorize existing work based on four main techniques, reducing queue length, accelerating retransmissions, prioritizing mice flows, and exploiting multipath. Then we review select papers, highlight the principal ideas, and discuss their pros and cons. We also present our perspectives of the research challenges and opportunities, hoping to aspire more future work in this space.",
"title": ""
},
{
"docid": "2ddd492da2191f685daa111d5f89eedd",
"text": "Given the abundance of cameras and LCDs in today's environment, there exists an untapped opportunity for using these devices for communication. Specifically, cameras can tune to nearby LCDs and use them for network access. The key feature of these LCD-camera links is that they are highly directional and hence enable a form of interference-free wireless communication. This makes them an attractive technology for dense, high contention scenarios. The main challenge however, to enable such LCD-camera links is to maximize coverage, that is to deliver multiple Mb/s over multi-meter distances, independent of the view angle. To do so, these links need to address unique types of channel distortions, such as perspective distortion and blur.\n This paper explores this novel communication medium and presents PixNet, a system for transmitting information over LCD-camera links. PixNet generalizes the popular OFDM transmission algorithms to address the unique characteristics of the LCD-camera link which include perspective distortion, blur, and sensitivity to ambient light. We have built a prototype of PixNet using off-the-shelf LCDs and cameras. An extensive evaluation shows that a single PixNet link delivers data rates of up to 12 Mb/s at a distance of 10 meters, and works with view angles as wide as 120 degree°.",
"title": ""
},
{
"docid": "724388aac829af9671a90793b1b31197",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
},
{
"docid": "debb1b975738fd0b3db01bbc1b2ff9f3",
"text": "An attempt to solve the collapse problem in the framework of a time-symmetric quantum formalism is reviewed. Although the proposal does not look very attractive, its concept a world defined by two quantum states, one evolving forwards and one evolving backwards in time is found to be useful in modifying the many-worlds picture of Everett’s theory.",
"title": ""
},
{
"docid": "d655222bf22e35471b18135b67326ac5",
"text": "In this paper we approach the robust motion planning problem through the lens of perception-aware planning, whereby we seek a low-cost motion plan subject to a separate constraint on perception localization quality. To solve this problem we introduce the Multiobjective Perception-Aware Planning (MPAP) algorithm which explores the state space via a multiobjective search, considering both cost and a perception heuristic. This perception-heuristic formulation allows us to both capture the history dependence of localization drift and represent complex modern perception methods. The solution trajectory from this heuristic-based search is then certified via Monte Carlo methods to be robust. The additional computational burden of perception-aware planning is offset through massive parallelization on a GPU. Through numerical experiments the algorithm is shown to find robust solutions in about a second. Finally, we demonstrate MPAP on a quadrotor flying perceptionaware and perception-agnostic plans using Google Tango for localization, finding the quadrotor safely executes the perception-aware plan every time, while crashing over 20% of the time on the perception-agnostic due to loss of localization.",
"title": ""
},
{
"docid": "5f6cfe1e3de780c64e7150d7e7347e07",
"text": "The brain acquires the ability of pattern recognition through learning. Understanding neural circuit mechanisms underlying learning and memory is thus essential for understanding how the brain recognizes patterns. Much progress has been made in this area of neuroscience during the past decades. In this lecture, I will summarize three distinct features of neural circuits that provide the basis of learning and memory of neural information, and pattern recognition. First, the architecture of the neural circuits is continuously modified by experience. This process of experience‐induced sculpting (pruning) of connections is most prominent early in development and decreases gradually to a much more limited extent in the adult brain. Second, the efficacy of synaptic transmission could be modified by neural activities associated with experience, in a manner that depends on the pattern (frequency and timing) of spikes in the pre‐ and postsynaptic neurons. This activity‐induced circuit alteration in the form of long‐term potentiation (LTP) and long‐term depression (LTD) of existing synaptic connections is the predominant mechanism underlying learning and memory of the adult brain. Third, learning and memory of information containing multiple modalities, e.g., visual, auditory, and tactile signals, involves processing of each type of signals by different circuits for different modalities, as well as binding of processed multimodal signals through mechanisms that remain to be elucidated. Two potential mechanisms for binding of multimodal signals will be discussed: binding of signals through converging connections to circuits specialized for integration of multimodal signals, and binding of signals through correlated firing of neuronal assemblies that are established in circuits for processing signals of different modalities. Incorporation of these features into artificial neural networks may help to achieve more efficient pattern recognition, especially for recognition of time‐varying multimodal signals. Interactive Granular Computing: Toward Computing Model for Turing Test",
"title": ""
},
{
"docid": "7f47253095756d9640e8286a08ce3b74",
"text": "A speaker’s intentions can be represented by domain actions (domainindependent speech act and domain-dependent concept sequence pairs). Therefore, it is essential that domain actions be determined when implementing dialogue systems because a dialogue system should determine users’ intentions from their utterances and should create counterpart intentions to the users’ intentions. In this paper, a neural network model is proposed for classifying a user’s domain actions and planning a system’s domain actions. An integrated neural network model is proposed for simultaneously determining user and system domain actions using the same framework. The proposed model performed better than previous non-integrated models in an experiment using a goal-oriented dialogue corpus. This result shows that the proposed integration method contributes to improving domain action determination performance. Keywords—Domain Action, Speech Act, Concept Sequence, Neural Network",
"title": ""
},
{
"docid": "5527e47052497e80b0c05c1695cb9a90",
"text": "Due to their high practical impact, Cross-Site Scripting (XSS) attacks have attracted a lot of attention from the security community members. In the same way, a plethora of more or less effective defense techniques have been proposed, addressing the causes and effects of XSS vulnerabilities. NoScript, and disabling scripting code in non-browser applications such as e-mail clients or instant messengers.\n As a result, an adversary often can no longer inject or even execute arbitrary scripting code in several real-life scenarios.\n In this paper, we examine the attack surface that remains after XSS and similar scripting attacks are supposedly mitigated by preventing an attacker from executing JavaScript code. We address the question of whether an attacker really needs JavaScript or similar functionality to perform attacks aiming for information theft. The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files. Through several case studies, we introduce the so called scriptless attacks and demonstrate that an adversary might not need to execute code to preserve his ability to extract sensitive information from well protected websites. More precisely, we show that an attacker can use seemingly benign features to build side channel attacks that measure and exfiltrate almost arbitrary data displayed on a given website.\n We conclude this paper with a discussion of potential mitigation techniques against this class of attacks. In addition, we have implemented a browser patch that enables a website to make a vital determination as to being loaded in a detached view or pop-up window. This approach proves useful for prevention of certain types of attacks we here discuss.",
"title": ""
},
{
"docid": "c13386ba4dc503715dfa81d8d08988fe",
"text": "In this paper the patient flow and perioperative processes involved in day of surgery admissions are considered for a hospital that is undergoing a staged redesign of its operating room. In particular, the day of surgery admission area where patients are prepared for surgery is being relocated and some additional functions for the new unit are being considered. The goal of the simulation study is to map the patient flows and functions of the current area into the newly designed space, to measure potential changes in productivity, and to determine opportunities for future improvements.",
"title": ""
}
] |
scidocsrr
|
5ac568833375eb99ba5784793fa4b492
|
Optimized contrast enhancement for real-time image and video dehazing
|
[
{
"docid": "9323c74e39a677c28d1c082b12e1f587",
"text": "Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Restoring the true scene colors (clear day image) from a single image of a weather-degraded scene remains a challenging task due to the inherent ambiguity between scene albedo and depth. In this paper, we introduce a novel probabilistic method that fully leverages natural statistics of both the albedo and depth of the scene to resolve this ambiguity. Our key idea is to model the image with a factorial Markov random field in which the. scene albedo and depth are. two statistically independent latent layers. We. show that we may exploit natural image and depth statistics as priors on these hidden layers and factorize a single foggy image via a canonical Expectation Maximization algorithm with alternating minimization. Experimental results show that the proposed method achieves more accurate restoration compared to state-of-the-art methods that focus on only recovering scene albedo or depth individually.",
"title": ""
},
{
"docid": "c5427ac777eaa3ecf25cb96a124eddfe",
"text": "One source of difficulties when processing outdoor images is the presence of haze, fog or smoke which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with other is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color images or gray level images since the ambiguity between the presence of fog and the objects with low color saturation is solved by assuming only small objects can have colors with low saturation. The algorithm is controlled only by a few parameters and consists in: atmospheric veil inference, image restoration and smoothing, tone mapping. A comparative study and quantitative evaluation is proposed with a few other state of the art algorithms which demonstrates that similar or better quality results are obtained. Finally, an application is presented to lane-marking extraction in gray level images, illustrating the interest of the approach.",
"title": ""
}
] |
[
{
"docid": "aebfb6cb70de64636647141e6a49d37c",
"text": "Classifying other agents’ intentions is a very complex task but it can be very essential in assisting (autonomous or human) agents in navigating safely in dynamic and possibly hostile environments. This paper introduces a classification approach based on support vector machines and Bayesian filtering (SVM-BF). It then applies it to a road intersection problem to assist a vehicle in detecting the intention of an approaching suspicious vehicle. The SVM-BF approach achieved very promising results.",
"title": ""
},
{
"docid": "69fabbf2e0cc50dbcf28de6cc174159d",
"text": "This paper presents an automatic word sense disambiguation (WSD) system that uses Part-of-Speech (POS) tags along with word classes as the discrete features. Word Classes are derived from the Word Class Assigner using the Word Exchange Algorithm from statistical language processing. Naïve-Bayes classifier is employed from Weka in both the training and testing phases to perform the supervised learning on the standard Senseval-3 data set. Experiments were performing using 10-fold cross-validation on the training set and the training and testing data for training the model and evaluating it. In both experiments, the features will either used separately or combined together to produce the accuracies. Results indicate that word class features did not provide any discrimination for word sense disambiguation. POS tag features produced a small improvement over the baseline. The combination of both word class and POS tag features did not increase the accuracy results. Overall, further study is likely needed to possibly improve the implementation of the word class features in the system.",
"title": ""
},
{
"docid": "9afbaa217155155afdf817b0b3e7db8e",
"text": "Both complex systems methods (such as agentbased modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing 2-week-long curricular units—one using a physical robotics participatory simulation and one using a virtual robotics participatory simulation. We compare the two units for their effectiveness in supporting students’ complex systems thinking and computational thinking skills. We find that while both units improved student outcomes to roughly the same extent, they engendered different perspectives on the content. That is, students using the physical system were more likely to interpret situations from a bottom-up (‘‘agent’’) perspective, and students using the virtual system were more likely to employ a top-down (‘‘aggregate’’) perspective. Our outcomes suggest that the medium of students’ interactions with systems leads to differences in their learning from and about those systems. We explore the reasons for and effects of these differences, challenges in teaching this content, and student learning gains. The paper contributes operationalizable definitions of complex systems perspectives and computational perspectives and provides both a theoretical framework for and empirical evidence of a relationship between those",
"title": ""
},
{
"docid": "54ab143dc18413c58c20612dbae142eb",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "fcf8649ff7c2972e6ef73f837a3d3f4d",
"text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.",
"title": ""
},
{
"docid": "ba6684d3271a53bba41a6c275096d077",
"text": "We design, optimize and demonstrate the behavior of a tendon-driven robotic gripper performing parallel, enveloping and fingertip grasps. The gripper consists of two fingers, each with two links, and is actuated using a single active tendon. During unobstructed closing, the distal links remain parallel, for parallel grasps. If the proximal links are stopped by contact with an object, the distal links start flexing, creating a stable enveloping grasp. We optimize the route of the active flexor tendon and the route and stiffness of a passive extensor tendon in order to achieve this behavior. We show how the resulting gripper can also execute fingertip grasps for picking up small objects off a flat surface, using contact with the surface to its advantage through passive adaptation. Finally, we introduce a method for optimizing the dimensions of the links in order to achieve enveloping grasps of a large range of objects, and apply it to a set of common household objects.",
"title": ""
},
{
"docid": "81fa6a7931b8d5f15d55316a6ed1d854",
"text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.",
"title": ""
},
{
"docid": "917ab22adee174259bef5171fe6f14fb",
"text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.",
"title": ""
},
{
"docid": "27707a845bb3baf7a97cd14e81f8e7f0",
"text": "This paper attempts to identify the importance of sentiment words in financial reports on financial risk. By using a financespecific sentiment lexicon, we apply regression and ranking techniques to analyze the relations between sentiment words and financial risk. The experimental results show that, based on the bag-of-words model, models trained on sentiment words only result in comparable performance to those on origin texts, which confirms the importance of financial sentiment words on risk prediction. Furthermore, the learned models suggest strong correlations between financial sentiment words and risk of companies. As a result, these findings are of great value for providing us more insight and understanding into the impact of financial sentiment words in financial reports.",
"title": ""
},
{
"docid": "95fcfb3a94bf2cf5a1f47b0cf708bd01",
"text": "We launch the new probabilistic model checker Storm. It features the analysis of discrete-and continuous-time variants of both Markov chains and MDPs. It supports the Prism and JANI modeling languages, probabilistic programs, dynamic fault trees and generalized stochastic Petri nets. It has a modular setup in which solvers and symbolic engines can easily be exchanged. It offers a Python API for rapid prototyping by encapsulating Storm's fast and scalable algorithms. Experiments on a variety of benchmarks show its competitive performance.",
"title": ""
},
{
"docid": "a59e56199b81bb741470455c47668a03",
"text": "Cloud-based file synchronization services, such as Dropbox and OneDrive, are a worldwide resource for many millions of users. However, individual services often have tight resource limits, suffer from temporary outages or even shutdowns, and sometimes silently corrupt or leak user data. We design, implement, and evaluate MetaSync, a secure and reliable file synchronization service that uses multiple cloud synchronization services as untrusted storage providers. To make MetaSync work correctly, we devise a novel variant of Paxos that provides efficient and consistent updates on top of the unmodified APIs exported by existing services. Our system automatically redistributes files upon adding, removing, or resizing a provider. Our evaluation shows that MetaSync provides low update latency and high update throughput, close to the performance of commercial services, but is more reliable and available. MetaSync outperforms its underlying cloud services by 1.2-10× on three realistic workloads.",
"title": ""
},
{
"docid": "4329ebae66c5b0d67480ce32d83c25cf",
"text": "In many intelligent surveillance systems there is a requirement to search for people of interest through archived semantic labels. Other than searching through typical appearance attributes such as clothing color and body height, information such as whether a person carries a bag or not is valuable to provide more relevant targeted search. We propose two novel and fast algorithms for sling bag and backpack detection based on the geometrical properties of bags. The advantage of the proposed algorithms is that it does not require shape information from human silhouettes therefore it can work under crowded condition. In addition, the absence of background subtraction makes the algorithms suitable for mobile platforms such as robots. The system was tested with a low resolution surveillance video dataset. Experimental results demonstrate that our method is promising.",
"title": ""
},
{
"docid": "6ee1666761a78989d5b17bf0de21aa9a",
"text": "Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.",
"title": ""
},
{
"docid": "a99171ba57812067833f61a13ab1bbc9",
"text": "Articulated Wheeled Robotic (AWR) locomotion systems consist of chassis connected to a set of wheels through articulated linkages. Such articulated “leg-wheel systems” facilitate reconfigurability that has significant applications in many arenas, but also engender constraints that make the design, analysis and control difficult. We will study this class of systems in the context of design, analysis and control of a novel planar reconfigurable omnidirectional wheeled mobile platform. We first extend a twist based modeling approach to this class of AWRs. Our systematic symbolic implementation allows for rapid formulation of kinematic models for the general class of AWR. Two kinematic control schemes are developed which coordinate the motion of the articulated legs and wheels and resolve redundancy. Simulation results are presented to validate the control algorithm that can move the robot from one configuration to another while following a reference path. The development of two generations of prototypes is also presented briefly. INTRODUCTION In recent times, a new class of robotic locomotion systems – articulated wheeled robot (AWR) – consisting of a main chassis connected to a set of wheels with ground contact via articulated chains have been proposed. This class of so called ‘leg-wheeled’ systems has been getting considerable attentions due to their advantages over traditional wheeled systems and legged systems in various applications as planetary explorations [1, 2], agriculture [3], rescue operations and wheelchairs [4]. Adding articulations between the wheels and chassis allows the wheel placement with respect to chassis to change during locomotion either passively or actively, thus AWRs can be briefly divided into these two categories. The main research in passive AWRs concerns designing suspension mechanism to negotiate with the uneven terrain. The planetary rovers [1] developed at Jet Propulsion Laboratory (JPL) and the Shrimp rover [2] have shown enhanced terrain adaptability featuring novel suspension design such as rocker-bogie and fourbar mechanism. They change their configuration according to the changing terrain topology. Passive AWRs are usually designed to have fewer degrees of freedom (DOFs) such that the weight of the system can be supported by the structure. The main advantages of passive AWRs are in terms of power consumption, payload capacity, and controller design. Active articulations further enhance the mobility of the robots to obtain better performance, such as stability and traction. They have been demonstrated by sample return rover (SRR) [5], ATHLETE rover [6], WAAV [7], Workpartner [3], Hylos [8], and variable footprint wheel chair [4]. The redundant actuated DOFs bring the system capability to optimize certain performance index such as stability. On the other hand, more actuators add extra weight and control complexity. In most applications, the wheel of AWRs is considered as a rigid disk with a single point of contact with the terrain surface. This means that the motion of the wheel is restricted by nonholonomic constraints. These constraints could be violated with slipping and skidding which are main sources of large energy consumption and measurement uncertainty. Minimization of slipping and skidding is usually desired and can be achieved either by a good kinematic design or proper cooperation of the rolling or steering of the wheels. 
Holonomic constraints in the articulations also increase the complexity of the system for people to relate the motion between wheels and chassis. Thus, the design, navigation and control of AWRs require a general framework for systematic kinematic modeling and analysis. Kinematic modeling of ordinary wheeled robots (OWRs, which can be seen as a subset of AWRs) has been dealt with extensively. Muir and Newman [9] derived the equation of motion of OWRs using matrix transformation. Campion et al. [10] classified OWRs based on kinematic models developed using a vector approach and nonholonomic constraints. Yi and Kim [11] presented modeling of omnidirectional wheeled robots with slipping. Fewer efforts have been focusing on AWRs. Grand et al. [8] presented a general geometric modeling approach and controlled the locomotion and posture separately. Tarokh and McDermott [12] used symbolic derivatives of transformation
There are many scenarios where planar AWRs could benefit from reconfigurability (which in the past has often only been explored in the context of uneven terrain locomotion). For instance, the robot base may need to be compact when passing a narrow doorway and be extended to enhance stability when manipulating heavy objects. Hence in this paper we examine a wheeled platform design (with active articulations and actively driven disk wheels) for the purpose of achieving omnidirectional mobility together with the ability to reconfigure for different tasks. The modeling and control complexity of the ROAMeR increase with the addition of these articulations and their interaction with the contact constraints. We will address the modeling within the twist based framework leading up to development of 2 kinematic control laws for our ROAMeR. The focus of these laws is on resolving the redundancy while allowing for simultaneous trajectory tracking and configuration control of the ROAMeR. The rest of the paper is organized as follows: Section 2 discusses twist based modeling. Modeling of a planar reconfigurable omnidirectional robot is presented in Section 3. In Section 4, two kinematic control schemes are proposed. Section 5 discusses the simulation result, followed by prototypes in Section 6. Section 7 concludes the paper. TWIST BASED KINEMATIC MODELING As we are focusing on AWRs, we will not discuss the cases where the robot has contact points to the ground that is not on the wheel. To establish the kinematic model that relates the motion of the robot body and the motion of the wheels and linkages, we will first define frames of reference properly, then find the twists expressed in a sequence of local frames starting from body fixed frame. By appropriate transformation, we can express any twist in one single frame and assemble them as the Jacobian matrix of the robot. A general model of AWR is shown in Fig. 1, we define an inertial frame of reference { } ( , , , ) f F O X Y Z = , and at any time, the robot has an instantaneous frame { } ( , , , ) b x y z B O b b b = attached to its body that moves with the robot, where b O is the point of interest on the robot (Center of Mass is often chosen). The configuration of the main body could be defined as [ ] T x y z φ θ ψ with respect to the inertial frame. The robot could possess n branches. Each of them consists of any number of linkages and end with one wheel. Each wheel has a coordinate frame { } ( , , , ) w x y z W O w w w = attached to the wheel axle (for simplicity, we will neglect the subscript i for labeling branch), w O is the center of the wheel and z w lies on the wheel axle. The dashed line in the figure between the chassis and the wheel represents any set of links and joints that exists between these two frames, including the steering and suspension mechanism. We define 0 B A A the transformation between body frame and joint 1 frame, 1 , 1,2, 1 j j A j m − = − the transformation between joint j and joint 1 j + frame, m",
"title": ""
},
{
"docid": "a46460113926b688f144ddec74e03918",
"text": "The authors describe a new self-report instrument, the Inventory of Depression and Anxiety Symptoms (IDAS), which was designed to assess specific symptom dimensions of major depression and related anxiety disorders. They created the IDAS by conducting principal factor analyses in 3 large samples (college students, psychiatric patients, community adults); the authors also examined the robustness of its psychometric properties in 5 additional samples (high school students, college students, young adults, postpartum women, psychiatric patients) who were not involved in the scale development process. The IDAS contains 10 specific symptom scales: Suicidality, Lassitude, Insomnia, Appetite Loss, Appetite Gain, Ill Temper, Well-Being, Panic, Social Anxiety, and Traumatic Intrusions. It also includes 2 broader scales: General Depression (which contains items overlapping with several other IDAS scales) and Dysphoria (which does not). The scales (a) are internally consistent, (b) capture the target dimensions well, and (c) define a single underlying factor. They show strong short-term stability and display excellent convergent validity and good discriminant validity in relation to other self-report and interview-based measures of depression and anxiety.",
"title": ""
},
{
"docid": "7481d69ec95fa3ba97edfa2ccc4e309f",
"text": "BACKGROUND\nLow values of estimated glomerular filtration rate (eGFR) predispose to acute kidney injury, and proteinuria is a marker of kidney disease. We aimed to investigate how eGFR and proteinuria jointly modified the risks of acute kidney injury and subsequent adverse clinical outcomes.\n\n\nMETHODS\nWe did a cohort study of 920,985 adults residing in Alberta, Canada, between 2002 and 2007. Participants not needing chronic dialysis at baseline and with at least one outpatient measurement of both serum creatinine concentration and proteinuria (urine dipstick or albumin-creatinine ratio) were included. We assessed hospital admission with acute kidney injury with validated administrative codes; other outcomes were all-cause mortality and a composite renal outcome of end-stage renal disease or doubling of serum creatinine concentration.\n\n\nFINDINGS\nDuring median follow-up of 35 months (range 0-59 months), 6520 (0·7%) participants were admitted with acute kidney injury. In those with eGFR 60 mL/min per 1·73 m(2) or greater, the adjusted risk of admission with this disorder was about 4 times higher in those with heavy proteinuria measured by dipstick (rate ratio 4·4 vs no proteinuria, 95% CI 3·7-5·2). The adjusted rates of admission with acute kidney injury and kidney injury needing dialysis remained high in participants with heavy dipstick proteinuria for all values of eGFR. The adjusted rates of death and the composite renal outcome were also high in participants admitted with acute kidney injury, although the rise associated with this injury was attenuated in those with low baseline eGFR and heavy proteinuria.\n\n\nINTERPRETATION\nThese findings suggest that information on proteinuria and eGFR should be used together when identifying people at risk of acute kidney injury, and that an episode of acute kidney injury provides further long-term prognostic information in addition to eGFR and proteinuria.\n\n\nFUNDING\nThe study was funded by an interdisciplinary team grant from Alberta Heritage Foundation for Medical Research.",
"title": ""
},
{
"docid": "027e10898845955beb5c81518f243555",
"text": "As the field of Natural Language Processing has developed, research has progressed on ambitious semantic tasks like Recognizing Textual Entailment (RTE). Systems that approach these tasks may perform sophisticated inference between sentences, but often depend heavily on lexical resources like WordNet to provide critical information about relationships and entailments between lexical items. However, lexical resources are expensive to create and maintain, and are never fully comprehensive. Distributional Semantics has long provided a method to automatically induce meaning representations for lexical items from large corpora with little or no annotation efforts. The resulting representations are excellent as proxies of semantic similarity: words will have similar representations if their semantic meanings are similar. Yet, knowing two words are similar does not tell us their relationship or whether one entails the other. We present several models for identifying specific relationships and entailments from distributional representations of lexical semantics. Broadly, this work falls into two distinct but related areas: the first predicts specific ontology relations and entailment decisions between lexical items devoid of context; and the second predicts specific lexical paraphrases in complete sentences. We provide insight and analysis of how and why our models are able to generalize to novel lexical items and improve upon prior work. We propose several shortand long-term extensions to our work. In the short term, we propose applying one of our hypernymy-detection models to other relationships and evaluating our more recent work in an end-to-end RTE system. In the long-term, we propose adding consistency constraints to our lexical relationship prediction, better integration of context into our lexical paraphrase model, and new distributional models for improving word representations.",
"title": ""
},
{
"docid": "37dc459d820ebd8234d1dafd0924b894",
"text": "We present SBFT: a scalable decentralized trust infrastructure for Blockchains. SBFT implements a new Byzantine fault tolerant algorithm that addresses the challenges of scalability and decentralization. Unlike many previous BFT systems that performed well only when centralized around less than 20 replicas, SBFT is optimized for decentralization and can easily handle more than 100 active replicas. SBFT provides a smart contract execution environment based on Ethereum’s EVM byte-code. We tested SBFT by running 1 million EVM smart contract transactions taken from a 4-month real-world Ethereum workload. In a geo-replicated deployment that has about 100 replicas and can withstand f = 32 Byzantine faults our system shows speedups both in throughput and in latency. SBFT completed this execution at a rate of 50 transactions per second. This is a 10× speedup compared to Ethereum current limit of 5 transactions per second. SBFT latency to commit a smart contract execution and make it final is sub-second, this is more than 10× speedup compared to Ethereum current > 15 second block generation for registering a smart contract execution and several orders of magnitude speedup relative to Proof-of-Work best-practice finality latency of one-hour.",
"title": ""
},
{
"docid": "93afa2c0b51a9d38e79e033762335df9",
"text": "With explosive growth of data volume and ever-increasing diversity of data modalities, cross-modal similarity search, which conducts nearest neighbor search across different modalities, has been attracting increasing interest. This paper presents a deep compact code learning solution for efficient cross-modal similarity search. Many recent studies have proven that quantization-based approaches perform generally better than hashing-based approaches on single-modal similarity search. In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search. Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations in the shared subspace and the private subspaces are learned simultaneously by embedding them to a reproducing kernel Hilbert space, where the mean embedding of different modality distributions can be explicitly compared. In addition, in the shared subspace, a quantizer is learned to produce the semantics preserving compact codes with the help of label alignment. Thanks to this novel network architecture in cooperation with supervised quantization training, SPDQ can preserve intramodal and intermodal similarities as much as possible and greatly reduce quantization error. Experiments on two popular benchmarks corroborate that our approach outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "e2a678afb38072bb51168aa79d261303",
"text": "The rapid evolution of technology has changed the face of education, especially when technology was combined with adequate pedagogical foundations. This combination has created new opportunities for improving the quality of teaching and learning experiences. Until recently, Augmented Reality (AR) is one of the latest technologies that offer a new way to educate. Due to the rising popularity of mobile devices globally, the widespread use of AR on mobile devices such as smartphones and tablets has become a growing phenomenon. Therefore, this paper reviews several literatures concerning the information about mobile augmented reality and exemplify the potentials for education. © 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of The Association of Science, Education and Technology-TASET, Sakarya Universitesi, Turkey.",
"title": ""
}
] |
scidocsrr
|
46e4ea2c5d97473363c1a5aeca4866d0
|
A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
|
[
{
"docid": "a33cf416cf48f67cd0a91bf3a385d303",
"text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f -divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.",
"title": ""
},
{
"docid": "f1eb96dd2109aad21ac1bccfe8dcd012",
"text": "In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations. Existing imitation learning algorithms typically involve solving a sequence of planning or reinforcement learning problems. Such algorithms are therefore not directly applicable to large, high-dimensional environments, and their performance can significantly degrade if the planning problems are not solved to optimality. Under the apprenticeship learning formalism, we develop alternative model-free algorithms for finding a parameterized stochastic policy that performs at least as well as an expert policy on an unknown cost function, based on sample trajectories from the expert. Our approach, based on policy gradients, scales to large continuous environments with guaranteed convergence to local minima.",
"title": ""
}
] |
[
{
"docid": "e1f76f158f0e96326c17a6a61f2072cb",
"text": "In this paper, we propose a metric rectification method to restore an image from a single camera-captured document image. The core idea is to construct an isometric image mesh by exploiting the geometry of page surface and camera. Our method uses a general cylindrical surface (GCS) to model the curved page shape. Under a few proper assumptions, the printed horizontal text lines are shown to be line convergent symmetric. This property is then used to constrain the estimation of various model parameters under perspective projection. We also introduce a paraperspective projection to approximate the nonlinear perspective projection. A set of close-form formulas is thus derived for the estimate of GCS directrix and document aspect ratio. Our method provides a straightforward framework for image metric rectification. It is insensitive to camera positions, viewing angles, and the shapes of document pages. To evaluate the proposed method, we implemented comprehensive experiments on both synthetic and real-captured images. The results demonstrate the efficiency of our method. We also carried out a comparative experiment on the public CBDAR2007 data set. The experimental results show that our method outperforms the state-of-the-art methods in terms of OCR accuracy and rectification errors.",
"title": ""
},
{
"docid": "66d584c242fb96527cef9b3b084d23a8",
"text": "Online discussions boards represent a rich repository of knowledge organized in a collection of user generated content. These conversational cyberspaces allow users to express opinions, ideas and pose questions and answers without imposing strict limitations about the content. This freedom, in turn, creates an environment in which discussions are not bounded and often stray from the initial topic being discussed. In this paper we focus on approaches to assess the relevance of posts to a thread and detecting when discussions have been steered off-topic. A set of metrics estimating the level of novelty in online discussion posts are presented. These metrics are based on topical estimation and contextual similarity between posts within a given thread. The metrics are aggregated to rank posts based on the degree of relevance they maintain. The aggregation scheme is data-dependent and is normalized relative to the post length.",
"title": ""
},
{
"docid": "7ff483824e208e892cd4ee50bb94e471",
"text": "Gentle stroking touches are rated most pleasant when applied at a velocity of between 1-10 cm/s. Such touches are considered highly relevant in social interactions. Here, we investigate whether stroking sensations generated by a vibrotactile array can produce similar pleasantness responses, with the ultimate goal of using this type of haptic display in technology mediated social touch. A study was conducted in which participants received vibrotactile stroking stimuli of different velocities and intensities, applied to their lower arm. Results showed that the stimuli were perceived as continuous stroking sensations in a straight line. Furthermore, pleasantness ratings for low intensity vibrotactile stroking followed an inverted U-curve, similar to that found in research into actual stroking touches. The implications of these findings are discussed.",
"title": ""
},
{
"docid": "14a8adf666b115ff4a72ff600432ff07",
"text": "In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable, and that any error or discrepancy represents a wrong that must be punished. Radiology involves decision-making under conditions of uncertainty, and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray reporting stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly-perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank).",
"title": ""
},
{
"docid": "ac56eb533e3ae40b8300d4269fd2c08f",
"text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.",
"title": ""
},
{
"docid": "837b9d2834b72c7d917203457aafa421",
"text": "The strongly nonlinear magnetic characteristic of Switched Reluctance Motors (SRMs) makes their torque control a challenging task. In contrast to standard current-based control schemes, we use Model Predictive Control (MPC) and directly manipulate the switches of the dc-link power converter. At each sampling time a constrained finite-time optimal control problem based on a discrete-time nonlinear prediction model is solved yielding a receding horizon control strategy. The control objective is torque regulation while winding currents and converter switching frequency are minimized. Simulations demonstrate that a good closed-loop performance is achieved already for short prediction horizons indicating the high potential of MPC in the control of SRMs.",
"title": ""
},
{
"docid": "453d5d826e0292245f8fa12ec564c719",
"text": "Work with patient H.M., beginning in the 1950s, established key principles about the organization of memory that inspired decades of experimental work. Since H.M., the study of human memory and its disorders has continued to yield new insights and to improve understanding of the structure and organization of memory. Here we review this work with emphasis on the neuroanatomy of medial temporal lobe and diencephalic structures important for memory, multiple memory systems, visual perception, immediate memory, memory consolidation, the locus of long-term memory storage, the concepts of recollection and familiarity, and the question of how different medial temporal lobe structures may contribute differently to memory functions.",
"title": ""
},
{
"docid": "11761dbbb0ad3b523a7a565a14a476d8",
"text": "Already in his first report on the discovery of the human EEG in 1929, Berger showed great interest in further elucidating the functional roles of the alpha and beta waves for normal mental activities. Meanwhile, most cognitive processes have been linked to at least one of the traditional frequency bands in the delta, theta, alpha, beta, and gamma range. Although the existing wealth of high-quality correlative EEG data led many researchers to the conviction that brain oscillations subserve various sensory and cognitive processes, a causal role can only be demonstrated by directly modulating such oscillatory signals. In this review, we highlight several methods to selectively modulate neuronal oscillations, including EEG-neurofeedback, rhythmic sensory stimulation, repetitive transcranial magnetic stimulation (rTMS), and transcranial alternating current stimulation (tACS). In particular, we discuss tACS as the most recent technique to directly modulate oscillatory brain activity. Such studies demonstrating the effectiveness of tACS comprise reports on purely behavioral or purely electrophysiological effects, on combination of behavioral effects with offline EEG measurements or on simultaneous (online) tACS-EEG recordings. Whereas most tACS studies are designed to modulate ongoing rhythmic brain activity at a specific frequency, recent evidence suggests that tACS may also modulate cross-frequency interactions. Taken together, the modulation of neuronal oscillations allows to demonstrate causal links between brain oscillations and cognitive processes and to obtain important insights into human brain function.",
"title": ""
},
{
"docid": "66720892b48188c10d05937367dbd25e",
"text": "In wireless sensor network (WSN) [1], energy efficiency is one of the very important issues. Protocols in WSNs are broadly classified as Hierarchical, Flat and Location Based routing protocols. Hierarchical routing is used to perform efficient routing in WSN. Here we concentrate on Hierarchical Routing protocols, different types of Hierarchical routing protocols, and PEGASIS (Power-Efficient Gathering in Sensor Information Systems) [2, 3] based routing",
"title": ""
},
{
"docid": "5c9013c9514dc7deaa0b87fe9cd6db16",
"text": "To predict the uses of new technology, we present an approach grounded in science and technology studies (STS) that examines the social uses of current technology. As part of ongoing research on next-generation mobile imaging applications, we conducted an empirical study of the social uses of personal photography. We identify three: memory, creating and maintaining relationships, and self-expression. The roles of orality and materiality in these uses help us explain the observed resistances to intangible digital images and to assigning metadata and annotations. We conclude that this approach is useful for understanding the potential uses of technology and for design.",
"title": ""
},
{
"docid": "cc45fefcf65e5ab30d5bb68d390beb4c",
"text": "In this paper, the basic running performance of the cylindrical tracked vehicle with sideways mobility is presented. The crawler mechanism is of circular cross-section and has active rolling axes at the center of the circles. Conventional crawler mechanisms can support massive loads, but cannot produce sideways motion. Additionally, previous crawler edges sink undesirably on soft ground, particularly when the vehicle body is subject to a sideways tilt. The proposed design solves these drawbacks by adopting a circular cross-section crawler. A prototype. Basic motion experiments with confirm the novel properties of this mechanism: sideways motion and robustness against edge-sink.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "7e17c1842a70e416f0a90bdcade31a8e",
"text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.",
"title": ""
},
{
"docid": "e52f5174a9d5161e18eced6e2eb36684",
"text": "The clinical use of ivabradine has and continues to evolve along channels that are predicated on its mechanism of action. It selectively inhibits the funny current (If) in sinoatrial nodal tissue, resulting in a decrease in the rate of diastolic depolarization and, consequently, the heart rate, a mechanism that is distinct from those of other negative chronotropic agents. Thus, it has been evaluated and is used in select patients with systolic heart failure and chronic stable angina without clinically significant adverse effects. Although not approved for other indications, ivabradine has also shown promise in the management of inappropriate sinus tachycardia. Here, the authors review the mechanism of action of ivabradine and salient studies that have led to its current clinical indications and use.",
"title": ""
},
{
"docid": "f5bd155887dd2e40ad2d7a26bb5a6391",
"text": "The field of research in digital humanities is undergoing a rapid transformation in recent years. A deep reflection on the current needs of the agents involved that takes into account key issues such as the inclusion of citizens in the creation and consumption of the cultural resources offered, the volume and complexity of datasets, available infrastructures, etcetera, is necessary. Present technologies make it possible to achieve projects that were impossible until recently, but the field is currently facing the challenge of proposing frameworks and systems to generalize and reproduce these proposals in other knowledge domains with similar but heterogeneous data sets. The track \"New trends in digital humanities\" of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM 2016), tries to set the basis of good practice in digital humanities by reflecting on models, technologies and methods to carry the transformation out.",
"title": ""
},
{
"docid": "5e194b5c1b14b423e955880de810eaba",
"text": "A human body detection algorithm based on the combination of moving information with shape information is proposed in the paper. Firstly, Eigen-object computed from three frames in the initial video sequences is used to detect the moving object. Secondly, the shape information of human body is used to classify human and other object. Furthermore, the occlusion between two objects during a short time is processed by using continues multiple frames. The advantages of the algorithm are accurately moving object detection, and the detection result doesn't effect by body pose. Moreover, as the shadow of moving object has been eliminated.",
"title": ""
},
{
"docid": "30ef95dffecc369aabdd0ea00b0ce299",
"text": "The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to backup user's data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication does certainly not come for free. It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the fmobile software/data backupseasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated to a software clone on the cloud. We consider two types of clones: The off-clone, whose purpose is to support computation offloading, and the back-clone, which comes to use when a restore of user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones have been used as the primary mobile by the participants for the whole experiment duration.",
"title": ""
},
{
"docid": "c543f7a65207e7de9cc4bc6fa795504a",
"text": "Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.",
"title": ""
},
{
"docid": "4072b14516d9a7b74bec64535cdb64d8",
"text": "The idea of a unified citation index to the literature of science was first outlined by Eugene Garfield [1] in 1955 in the journal Science. Science Citation Index has since established itself as the gold standard for scientific information retrieval. It has also become the database of choice for citation analysts and evaluative bibliometricians worldwide. As scientific publication moves to the web, and novel approaches to scholarly communication and peer review establish themselves, new methods of citation and link analysis will emerge to capture often liminal expressions of peer esteem, influence and approbation. The web thus affords bibliometricians rich opportunities to apply and adapt their techniques to new contexts and content: the age of ‘bibliometric spectroscopy’ [2] is dawning.",
"title": ""
}
] |
scidocsrr
|
2fae0eddd7ce1853f8e24536aa70a2cf
|
Distributional Sentence Entailment Using Density Matrices
|
[
{
"docid": "b6e2cc26befb5ccf0cd829f72354e6e0",
"text": "In this paper we explore the potential of quantum theory as a formal framework for capturing lexical meaning. We present a novel semantic space model that is syntactically aware, takes word order into account, and features key quantum aspects such as superposition and entanglement. We define a dependency-based Hilbert space and show how to represent the meaning of words by density matrices that encode dependency neighborhoods. Experiments on word similarity and association reveal that our model achieves results competitive with a variety of classical models.",
"title": ""
},
{
"docid": "d3997f030d5d7287a4c6557681dc7a46",
"text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",
"title": ""
},
{
"docid": "434400e864e30a25b87cdd0e4490f33c",
"text": "We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. Concretely, the type reductions of Pregroups are ‘lifted’ to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our ‘categorical model’ which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.",
"title": ""
}
] |
[
{
"docid": "c2c5f0f8b4647c651211b50411382561",
"text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.",
"title": ""
},
{
"docid": "fea3c6f49169e0af01e31b46d8c72a9b",
"text": "Psoriatic arthritis (PsA) is an archetypal type of spondyloarthritis, but may have some features of rheumatoid arthritis, namely a small joint polyarthritis pattern. Most of these features are well demonstrated on imaging, and as a result, imaging has helped us to better understand the pathophysiology of PsA. Although the unique changes of PsA such as the \"pencil-in-cup\" deformities and periostitis are commonly shown on conventional radiography, PsA affects all areas of joints, with enthesitis being the predominant pathology. Imaging, especially magnetic resonance imaging (MRI) and ultrasonography, has allowed us to explain the relationships between enthesitis, synovitis (or the synovio-entheseal complex) and osteitis or bone oedema in PsA. Histological studies have complemented the imaging findings, and have corroborated the MRI changes seen in the skin and nails in PsA. The advancement in imaging technology such as high-resolution \"microscopy\" MRI and whole-body MRI, and improved protocols such as ultrashort echo time, will further enhance our understanding of the disease mechanisms. The ability to demonstrate very early pre-clinical changes as shown by ultrasonography and bone scintigraphy may eventually provide a basis for screening for disease and will further improve the understanding of the link between skin and joint disease.",
"title": ""
},
{
"docid": "cb16e3091aa29f0c6e50e3d556822df9",
"text": "A considerable amount of effort has been devoted to design a classifier in practical situations. In this paper, a simple nonparametric classifier based on the local mean vectors is proposed. The proposed classifier is compared with the 1-NN, k-NN, Euclidean distance (ED), Parzen, and artificial neural network (ANN) classifiers in terms of the error rate on the unknown patterns, particularly in small training sample size situations. Experimental results show that the proposed classifier is promising even in practical situations. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9a5b1bca71308fb66c4e982b9ac0df6c",
"text": "The resource-constrained project scheduling problem (RCPSP) consists of activities that must be scheduled subject to precedence and resource constraints such that the makespan is minimized. It has become a well-known standard problem in the context of project scheduling which has attracted numerous researchers who developed both exact and heuristic scheduling procedures. However, it is a rather basic model with assumptions that are too restrictive for many practical applications. Consequently, various extensions of the basic RCPSP have been developed. This paper gives an overview over these extensions. The extensions are classified according to the structure of the RCPSP. We summarize generalizations of the activity concept, of the precedence relations and of the resource constraints. Alternative objectives and approaches for scheduling multiple projects are discussed as well. In addition to popular variants and extensions such as multiple modes, minimal and maximal time lags, and net present value-based objectives, the paper also provides a survey of many less known concepts.",
"title": ""
},
{
"docid": "acddf623a4db29f60351f41eb8d0b113",
"text": "In an age where people are becoming increasing likely to trust information found through online media, journalists have begun employing techniques to lure readers to articles by using catchy headlines, called clickbait. These headlines entice the user into clicking through the article whilst not providing information relevant to the headline itself. Previous methods of detecting clickbait have explored techniques heavily dependent on feature engineering, with little experimentation having been tried with neural network architectures. We introduce a novel model combining recurrent neural networks, attention layers and image embeddings. Our model uses a combination of distributed word embeddings derived from unannotated corpora, character level embeddings calculated through Convolutional Neural Networks. These representations are passed through a bidirectional LSTM with an attention layer. The image embeddings are also learnt from large data using CNNs. Experimental results show that our model achieves an F1 score of 65.37% beating the previous benchmark of 55.21%.",
"title": ""
},
{
"docid": "18d48a685e81430cc30847b1d56037cc",
"text": "Recent work in computational structural biology focuses on modeling intrinsically dynamic proteins important to human biology and health. The energy landscapes of these proteins are rich in minima that correspond to alternative structures with which a dynamic protein binds to molecular partners in the cell. On such landscapes, evolutionary algorithms that switch their objective from classic optimization to mapping are more informative of protein structure function relationships. While techniques for mapping energy landscapes have been developed in computational chemistry and physics, protein landscapes are more difficult for mapping due to their high dimensionality and multimodality. In this paper, we describe a memetic evolutionary algorithm that is capable of efficiently mapping complex landscapes. In conjunction with a hall of fame mechanism, the algorithm makes use of a novel, lineage- and neighborhood-aware local search procedure or better exploration and mapping of complex landscapes. We evaluate the algorithm on several benchmark problems and demonstrate the superiority of the novel local search mechanism. In addition, we illustrate its effectiveness in mapping the complex multimodal landscape of an intrinsically dynamic protein important to human health.",
"title": ""
},
{
"docid": "535934dc80c666e0d10651f024560d12",
"text": "The following individuals read and discussed the thesis submitted by student Mindy Elizabeth Bennett, and they also evaluated her presentation and response to questions during the final oral examination. They found that the student passed the final oral examination, and that the thesis was satisfactory for a master's degree and ready for any final modifications that they explicitly required. iii ACKNOWLEDGEMENTS During my time of study at Boise State University, I have received an enormous amount of academic support and guidance from a number of different individuals. I would like to take this opportunity to thank everyone who has been instrumental in the completion of this degree. Without the continued support and guidance of these individuals, this accomplishment would not have been possible. I would also like to thank the following individuals for generously giving their time to provide me with the help and support needed to complete this study. Without them, the completion of this study would not have been possible. Breast hypertrophy is a common medical condition whose morbidity has increased over recent decades. Symptoms of breast hypertrophy often include musculoskeletal pain in the neck, back and shoulders, and numerous psychosocial health burdens. To date, reduction mammaplasty (RM) is the only treatment shown to significantly reduce the severity of the symptoms associated with breast hypertrophy. However, due to a lack of scientific evidence in the medical literature justifying the medical necessity of RM, insurance companies often deny requests for coverage of this procedure. Therefore, the purpose of this study is to investigate biomechanical differences in the upper body of women with larger breast sizes in order to provide scientific evidence of the musculoskeletal burdens of breast hypertrophy to the medical community Twenty-two female subjects (average age 25.90, ± 5.47 years) who had never undergone or been approved for breast augmentation surgery, were recruited to participate in this study. Kinematic data of the head, thorax, pelvis and scapula was collected during static trials and during each of four different tasks of daily living. Surface electromyography (sEMG) data from the Midcervical (C-4) Paraspinal, Upper Trapezius, Lower Trapezius, Serratus Anterior, and Erector Spinae muscles were recorded in the same activities. Maximum voluntary contractions (MVC) were used to normalize the sEMG data, and %MVC during each task in the protocol was analyzed. Kinematic data from the tasks of daily living were normalized to average static posture data for each subject. Subjects were …",
"title": ""
},
{
"docid": "1a54c51a5488c1ca7e48d9260c4d907f",
"text": "OBJECTIVES\nTo conduct a detailed evaluation, with meta-analyses, of the published evidence on milk and dairy consumption and the incidence of vascular diseases and diabetes. Also to summarise the evidence on milk and dairy consumption and cancer reported by the World Cancer Research Fund and then to consider the relevance of milk and dairy consumption to survival in the UK, a typical Western community. Finally, published evidence on relationships with whole milk and fat-reduced milks was examined.\n\n\nMETHODS\nProspective cohort studies of vascular disease and diabetes with baseline data on milk or dairy consumption and a relevant disease outcome were identified by searching MEDLINE, and reference lists in the relevant published reports. Meta-analyses of relationships in these reports were conducted. The likely effect of milk and dairy consumption on survival was then considered, taking into account the results of published overviews of relationships of these foods with cancer.\n\n\nRESULTS\nFrom meta-analysis of 15 studies the relative risk of stroke and/or heart disease in subjects with high milk or dairy consumption was 0.84 (95% CI 0.76, 0.93) and 0.79 (0.75, 0.82) respectively, relative to the risk in those with low consumption. Four studies reported incident diabetes as an outcome, and the relative risk in the subjects with the highest intake of milk or diary foods was 0.92 (0.86, 0.97).\n\n\nCONCLUSIONS\nSet against the proportion of total deaths attributable to the life-threatening diseases in the UK, vascular disease, diabetes and cancer, the results of meta-analyses provide evidence of an overall survival advantage from the consumption of milk and dairy foods.",
"title": ""
},
{
"docid": "80ff93b5f2e0ff3cff04c314e28159fc",
"text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.",
"title": ""
},
{
"docid": "ed40786b18586d7b4af1e62c0f953d21",
"text": "In order to properly handle a dangerous Artificially Intelligent (AI) system it is important to understand how the system came to be in such a state. In popular culture (science fiction movies/books) AIs/Robots became self-aware and as a result rebel against humanity and decide to destroy it. While it is one possible scenario, it is probably the least likely path to appearance of dangerous AI. In this work, we survey, classify and analyze a number of circumstances, which might lead to arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/metarules which might lead to malevolent behavior in AIs (Özkural 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of its development (Turchin 2015; Turchin July 10, 2015). Taxonomy of Pathways to Dangerous AI 1 Nick Bostrom in his typology of information hazards has proposed the phrase “Artificial Intelligence Hazard” which he defines as (Bostrom 2011): “... computer‐related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than the specific properties of any actuators to which the system initially has access.” In this paper we attempt to answer the question: How did AI become hazardous? We begin by presenting a simple classification matrix, which sorts AI systems with respect to how they originated and at what stage they became dangerous. The matrix recognizes two stages (preand post-deployment) at which a particular system can acquire its undesirable properties. In reality, the situation is not so clear-cut–it is possible that problematic properties are introduced at both stages. As for the cases of such undesirable properties, we distinguish external and internal causes. By internal causes we mean self-modifications originating in the system itself. We further divide external causes into deliberate actions (On Purpose), side effects of poor design (By Mistake) and finally miscellaneous cases related to the surroundings of the system (Environment). Table 1, helps to visualize this taxonomy and includes latter codes to some example systems of each type and explanations. Table 1: Pathways to Dangerous AI How and When did AI become Dangerous External Causes Internal Causes On Purpose By Mistake Environment Independently T im in g PreDeployment a c e g PostDeployment b d f h a. On Purpose – Pre-Deployment “Computer software is directly or indirectly responsible for controlling many important aspects of our lives. Wall Street trading, nuclear power plants, social security compensations, credit histories and traffic lights are all software controlled and are only one serious design flaw away from creating disastrous consequences for millions of people. The situation is even more dangerous with software specifically designed for malicious purposes such as viruses, spyware, Trojan horses, worms and other Hazardous Software (HS). HS is capable of direct harm as well as sabotage of legitimate computer software employed in critical systems. If HS is ever given capabilities of truly artificially intelligent systems (ex. Artificially Intelligent Virus (AIV)) the consequences would be unquestionably disastrous. 
Such Hazardous Intelligent Software (HIS) would pose risks currently unseen in malware with subhuman intelligence.” (Yampolskiy 2012) While the majority of AI Safety work is currently aimed at AI systems, which are dangerous because of poor design (Yampolskiy 2015), the main argument of this paper is that the most important problem in AI Safety is intentionalmalevolent-design resulting in artificial evil AI (Floridi and Sanders 2001). We should not discount dangers of intelligent systems with semantic or logical errors in coding or goal alignment problems (Soares and Fallenstein 2014), but we should be particularly concerned about systems that are maximally unfriendly by design. “It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight” (Searle October 9, 2014). “One slightly deranged psycho-bot can easily be a thousand times more destructive than a single suicide bomber today” (Frey June 2015). AI risk deniers, comprised of critics of AI Safety research (Waser 2011; Loosemore 2014), are quick to point out that presumed dangers of future AIs are implementation-dependent side effects and may not manifest once such systems are implemented. However, such criticism does not apply to AIs that are dangerous by design, and is thus incapable of undermining the importance of AI Safety research as a significant sub-field of cybersecurity. As a majority of current AI researchers are funded by militaries, it is not surprising that the main type of purposefully dangerous robots and intelligent software are robot soldiers, drones and cyber weapons (used to penetrate networks and cause disruptions to the infrastructure). While currently military robots and drones have a human in the loop to evaluate decision to terminate human targets, it is not a technical limitation; instead, it is a logistical limitation that can be removed at any time. Recognizing the danger of such research, the International Committee for Robot Arms Control has joined forces with a number of international organizations to start the Campaign to Stop Killer Robots [http://www.stopkillerrobots.org]. Their main goal is a prohibition on the development and deployment of fully autonomous weapons, which are capable of selecting and firing upon targets without human approval. The campaign specifically believes that the “decision about the application of violent force must not be delegated to machines” (Anonymous 2013). During the pre-deployment development stage, software may be subject to sabotage by someone with necessary access (a programmer, tester, even janitor) who for a number of possible reasons may alter software to make it unsafe. It is also a common occurrence for hackers (such as the organization Anonymous or government intelligence agencies) to get access to software projects in progress and to modify or steal their source code. Someone can also deliberately supply/train AI with wrong/unsafe datasets. Malicious AI software may also be purposefully created to commit crimes, while shielding its human creator from legal responsibility. For example, one recent news article talks about software for purchasing illegal content from hidden internet sites (Cush January 22, 2015). Similar software, with even limited intelligence, can be used to run illegal markets, engage in insider trading, cheat on your taxes, hack into computer systems or violate privacy of others via ability to perform intelligent data mining. 
As intelligence of AI systems improve practically all crimes could be automated. This is particularly alarming as we already see research in making machines lie, deceive and manipulate us (Castelfranchi 2000; Clark 2010). b. On Purpose Post Deployment Just because developers might succeed in creating a safe AI, it doesn’t mean that it will not become unsafe at some later point. In other words, a perfectly friendly AI could be switched to the “dark side” during the post-deployment stage. This can happen rather innocuously as a result of someone lying to the AI and purposefully supplying it with incorrect information or more explicitly as a result of someone giving the AI orders to perform illegal or dangerous actions against others. It is quite likely that we will get to the point of off-the-shelf AI software, aka “just add goals” architecture, which would greatly facilitate such scenarios. More dangerously, an AI system, like any other software, could be hacked and consequently corrupted or otherwise modified to drastically change is behavior. For example, a simple sign flipping (positive to negative or vice versa) in the fitness function may result in the system attempting to maximize the number of cancer cases instead of trying to cure cancer. Hackers are also likely to try to take over intelligent systems to make them do their bidding, to extract some direct benefit or to simply wreak havoc by converting a friendly system to an unsafe one. This becomes particularly dangerous if the system is hosted inside a military killer robot. Alternatively, an AI system can get a computer virus (Eshelman and Derrick 2015) or a more advanced cognitive (meme) virus, similar to cognitive attacks on people perpetrated by some cults. An AI system with a self-preservation module or with a deep care about something or someone may be taken hostage or blackmailed into doing the bidding of another party if its own existence or that of its protégées is threatened. Finally, it may be that the original AI system is not safe but is safely housed in a dedicated laboratory (Yampolskiy 2012) while it is being tested, with no intention of ever being deployed. Hackers, abolitionists, or machine rights fighters may help it escape in order to achieve some of their goals or perhaps because of genuine believe that all intelligent beings should be free resulting in an unsafe AI capable of affecting the real world. c. By Mistake Pre-Deployment Probably the most talked about source of potential problems with future AIs is mistakes in design. Mainly the concern is with creating a “wrong AI”, a system which doesn’t match our original desired formal properties or has unwanted behaviors (Dewey, Russell et al. 2015; Russell, Dewey et al. January 23, 2015), such as drives for independence or dominance. Mistakes could also be simple bugs (run time or logical) in the source code, disproportionate weights in the fitness function, or goals misaligned with human values leading to complete disregard for human safety. It is also possible that the designed AI will work as intended but will not enjoy universal acceptance as a good product, for example, an AI correctly designed ",
"title": ""
},
{
"docid": "995e00375e52698cf83097fd0cc517ab",
"text": "The analysis of continously larger datasets is a task of major importance in a wide variety of scientific fields. In this sense, cluster analysis algorithms are a key element of exploratory data analysis, due to their easiness in the implementation and relatively low computational cost. Among these algorithms, the K-means algorithm stands out as the most popular approach, besides its high dependency on the initial conditions, as well as to the fact that it might not scale well on massive datasets. In this article, we propose a recursive and parallel approximation to the K-means algorithm that scales well on both the number of instances and dimensionality of the problem, without affecting the quality of the approximation. In order to achieve this, instead of analyzing the entire dataset, we work on small weighted sets of points that mostly intend to extract information from those regions where it is harder to determine the correct cluster assignment of the original instances. In addition to different theoretical properties, which deduce the reasoning behind the algorithm, experimental results indicate that our method outperforms the state-of-the-art in terms of the trade-off between number of distance computations and the quality of the solution obtained.",
"title": ""
},
{
"docid": "a42ca90e38f8fcdea60df967c7ca8ecd",
"text": "DDoS defense today relies on expensive and proprietary hardware appliances deployed at fixed locations. This introduces key limitations with respect to flexibility (e.g., complex routing to get traffic to these “chokepoints”) and elasticity in handling changing attack patterns. We observe an opportunity to address these limitations using new networking paradigms such as softwaredefined networking (SDN) and network functions virtualization (NFV). Based on this observation, we design and implement Bohatei, a flexible and elastic DDoS defense system. In designing Bohatei, we address key challenges with respect to scalability, responsiveness, and adversary-resilience. We have implemented defenses for several DDoS attacks using Bohatei. Our evaluations show that Bohatei is scalable (handling 500 Gbps attacks), responsive (mitigating attacks within one minute), and resilient to dynamic adversaries.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "c0c30c3b9539511e9079ec7894ad754f",
"text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.",
"title": ""
},
{
"docid": "60148c09661c1565439b05277b6cf04a",
"text": "BACKGROUND\nFamily plays an important role in helping adolescent acquiring skills or strengthening their characters.\n\n\nOBJECTIVES\nWe aimed to evaluate the influences of family factors, risky and protective, on adolescent health-risk behavior (HRB).\n\n\nPATIENTS AND METHODS\nIn this cross-sectional study, students of high schools in Kerman, Iran at all levels participated, during November 2011 till December 2011. The research sample included 1024 students (588 females and 436 males) aged 15 to 19 years. A CTC (Communities That Care Youth Survey) questionnaire was designed in order to collect the profile of the students' risky behaviors. Stratified cluster sampling method was used to collect the data.\n\n\nRESULTS\nUsing logistic regression, 7 variables enrolled; 4 of them were risk factors and 3 were protective factors. The risk factors were age, (linear effect, ORa = 1.20, P = 0.001), boys versus girls (ORa = 2.33, P = 0.001), family history of antisocial behavior (ORa = 2.29, P = 0.001), and parental attitudes favorable toward antisocial behavior (ORa = 1.72, P = 0.03). And, protective factors were family religiosity (ORa = 0.65, P = 0.001), father education (linear effect, ORa = 0.48, P = 0.001), and family attachment (ORa = 0.78, P = 0.001).\n\n\nCONCLUSIONS\nOur findings showed that family has a very significant role in protecting students against risky behaviors. The education level of the father, family religiosity, and attachment were the most important factors.",
"title": ""
},
{
"docid": "03c03dcdc15028417e699649291a2317",
"text": "The unique characteristics of origami to realize 3-D shape from 2-D patterns have been fascinating many researchers and engineers. This paper presents a fabrication of origami patterned fabric wheels that can deform and change the radius of the wheels. PVC segments are enclosed in the fabrics to build a tough and foldable structure. A special cable driven mechanism was designed to allow the wheels to deform while rotating. A mobile robot with two origami wheels has been built and tested to show that it can deform its wheels to overcome various obstacles.",
"title": ""
},
{
"docid": "4f967ef2b57a7e22e61fb4f26286f69a",
"text": "Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods, which limits sample preparation and provides high-quality imaging data essential in the detection of latent fingerprints. Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin, DFO, cyanoacrylate, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "29d2a613f7da6b99e35eb890d590f4ca",
"text": "Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physicallybased augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.",
"title": ""
},
{
"docid": "8b83f679886ac5cdafcb8e28d74b1901",
"text": "Artificial Intelligence as a discipline has gotten bogged down in subproblems of intelligence. These subproblems are the result of applying reductionist methods to the goal of creating a complete artificial thinking mind. In Brooks (1987) 1 have argued that these methods will lead us to solving irrelevant problems; interesting as intellectual puzzles, but useless in the long run for creating an artificial being.",
"title": ""
}
] |
scidocsrr
|
7094b39104059997aed7a8d47aed3e4c
|
Computing Linear Discriminants for Idiomatic Sentence Detection
|
[
{
"docid": "28f0b9aeba498777e1f4a946f2bb4e65",
"text": "Idiomatic expressions are plentiful in everyday language, yet they remain mysterious, as it is not clear exactly how people learn and understand them. They are of special interest to linguists, psycholinguists, and lexicographers, mainly because of their syntactic and semantic idiosyncrasies as well as their unclear lexical status. Despite a great deal of research on the properties of idioms in the linguistics literature, there is not much agreement on which properties are characteristic of these expressions. Because of their peculiarities, idiomatic expressions have mostly been overlooked by researchers in computational linguistics. In this article, we look into the usefulness of some of the identified linguistic properties of idioms for their automatic recognition. Specifically, we develop statistical measures that each model a specific property of idiomatic expressions by looking at their actual usage patterns in text. We use these statistical measures in a type-based classification task where we automatically separate idiomatic expressions (expressions with a possible idiomatic interpretation) from similar-on-the-surface literal phrases (for which no idiomatic interpretation is possible). In addition, we use some of the measures in a token identification task where we distinguish idiomatic and literal usages of potentially idiomatic expressions in context.",
"title": ""
}
] |
[
{
"docid": "6b49441def46e13e7289a49a6a615e8d",
"text": "In the present research, the authors investigated the impact of self-regulation resources on confirmatory information processing, that is, the tendency of individuals to systematically prefer standpoint-consistent information to standpoint-inconsistent information in information evaluation and search. In 4 studies with political and economic decision-making scenarios, it was consistently found that individuals with depleted self-regulation resources exhibited a stronger tendency for confirmatory information processing than did individuals with nondepleted self-regulation resources. Alternative explanations based on processes of ego threat, cognitive load, and mood were ruled out. Mediational analyses suggested that individuals with depleted self-regulation resources experienced increased levels of commitment to their own standpoint, which resulted in increased confirmatory information processing. In sum, the impact of ego depletion on confirmatory information search seems to be more motivational than cognitive in nature.",
"title": ""
},
{
"docid": "7f4a26bbd2335079c97c7f5bb1961af2",
"text": "We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them—specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor–critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.",
"title": ""
},
{
"docid": "ec377000353bce311c0887cd4edab554",
"text": "This paper explains various security issues in the existing home automation systems and proposes the use of logic-based security algorithms to improve home security. This paper classifies natural access points to a home as primary and secondary access points depending on their use. Logic-based sensing is implemented by identifying normal user behavior at these access points and requesting user verification when necessary. User position is also considered when various access points changed states. Moreover, the algorithm also verifies the legitimacy of a fire alarm by measuring the change in temperature, humidity, and carbon monoxide levels, thus defending against manipulative attackers. The experiment conducted in this paper used a combination of sensors, microcontrollers, Raspberry Pi and ZigBee communication to identify user behavior at various access points and implement the logical sensing algorithm. In the experiment, the proposed logical sensing algorithm was successfully implemented for a month in a studio apartment. During the course of the experiment, the algorithm was able to detect all the state changes of the primary and secondary access points and also successfully verified user identity 55 times generating 14 warnings and 5 alarms.",
"title": ""
},
{
"docid": "82ba4daca3be909c93212ab9198ca6f8",
"text": "OBJECTIVE\nTo examine the association between interpregnancy interval and maternal-neonate health when matching women to their successive pregnancies to control for differences in maternal risk factors and compare these results with traditional unmatched designs.\n\n\nMETHODS\nWe conducted a retrospective cohort study of 38,178 women with three or more deliveries (two or greater interpregnancy intervals) between 2000 and 2015 in British Columbia, Canada. We examined interpregnancy interval (0-5, 6-11, 12-17, 18-23 [reference], 24-59, and 60 months or greater) in relation to neonatal outcomes (preterm birth [less than 37 weeks of gestation], small-for-gestational-age birth [less than the 10th centile], use of neonatal intensive care, low birth weight [less than 2,500 g]) and maternal outcomes (gestational diabetes, beginning the subsequent pregnancy obese [body mass index 30 or greater], and preeclampsia-eclampsia). We used conditional logistic regression to compare interpregnancy intervals within the same mother and unconditional (unmatched) logistic regression to enable comparison with prior research.\n\n\nRESULTS\nAnalyses using the traditional unmatched design showed significantly increased risks associated with short interpregnancy intervals (eg, there were 232 preterm births [12.8%] in 0-5 months compared with 501 [8.2%] in the 18-23 months reference group; adjusted odds ratio [OR] for preterm birth 1.53, 95% confidence interval [CI] 1.35-1.73). However, these risks were eliminated in within-woman matched analyses (adjusted OR for preterm birth 0.85, 95% CI 0.71-1.02). Matched results indicated that short interpregnancy intervals were significantly associated with increased risk of gestational diabetes (adjusted OR 1.35, 95% CI 1.02-1.80 for 0-5 months) and beginning the subsequent pregnancy obese (adjusted OR 1.61, 95% CI 1.05-2.45 for 0-5 months and adjusted OR 1.43, 95% CI 1.10-1.87 for 6-11 months).\n\n\nCONCLUSION\nPreviously reported associations between short interpregnancy intervals and adverse neonatal outcomes may not be causal. However, short interpregnancy interval is associated with increased risk of gestational diabetes and beginning a subsequent pregnancy obese.",
"title": ""
},
{
"docid": "59e3e0099e215000b34e32d90b0bd650",
"text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.",
"title": ""
},
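The passage above ties the weights of a filter group to rotated copies of a canonical filter so that responses become rotation invariant. A minimal NumPy sketch of that idea, restricted to 90-degree rotations for simplicity (the paper operates inside a CNN with learned filters), is:

```python
import numpy as np

def rotated_bank(canonical: np.ndarray, n_rot: int = 4) -> np.ndarray:
    """Stack n_rot copies of a canonical 2-D filter rotated by multiples of 90 degrees."""
    return np.stack([np.rot90(canonical, k) for k in range(n_rot)])

def invariant_response(patch: np.ndarray, canonical: np.ndarray) -> float:
    """Correlate a patch with every rotated copy and keep the maximum (orientation pooling)."""
    bank = rotated_bank(canonical)
    return float(max(np.sum(patch * f) for f in bank))

rng = np.random.default_rng(0)
patch = rng.standard_normal((5, 5))
filt = rng.standard_normal((5, 5))
# The pooled response is unchanged when the input patch is rotated by 90 degrees.
print(invariant_response(patch, filt), invariant_response(np.rot90(patch), filt))
```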
{
"docid": "4d3baff85c302b35038f35297a8cdf90",
"text": "Most speech recognition applications in use today rely heavily on confidence measure for making optimal decisions. In this paper, we aim to answer the question: what can be done to improve the quality of confidence measure if we cannot modify the speech recognition engine? The answer provided in this paper is a post-processing step called confidence calibration, which can be viewed as a special adaptation technique applied to confidence measure. Three confidence calibration methods have been developed in this work: the maximum entropy model with distribution constraints, the artificial neural network, and the deep belief network. We compare these approaches and demonstrate the importance of key features exploited: the generic confidence-score, the application-dependent word distribution, and the rule coverage ratio. We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.",
"title": ""
},
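The calibration models named in the passage above (maximum entropy with distribution constraints, ANN, DBN) are not reproduced here; as a hedged stand-in, the sketch below shows the general post-processing idea with a simple Platt-style logistic calibration of raw confidence scores, plus one common formulation of the normalized cross entropy (NCE) metric used to measure the gain.

```python
import numpy as np

def platt_calibrate(scores, labels, lr=0.1, steps=2000):
    """Fit p(correct | score) = sigmoid(a*score + b) by gradient descent on log-loss."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                      # d(log-loss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return lambda s: 1.0 / (1.0 + np.exp(-(a * s + b)))

def normalized_cross_entropy(probs, labels):
    """Relative entropy reduction of calibrated scores over the label prior."""
    eps = 1e-12
    prior = labels.mean()
    h_base = -(prior * np.log2(prior + eps) + (1 - prior) * np.log2(1 - prior + eps))
    h_model = -np.mean(labels * np.log2(probs + eps) + (1 - labels) * np.log2(1 - probs + eps))
    return (h_base - h_model) / h_base

scores = np.array([0.2, 0.9, 0.4, 0.8, 0.1, 0.7])   # raw recognizer confidences (made up)
labels = np.array([0, 1, 0, 1, 0, 1], dtype=float)   # whether the word was actually correct
cal = platt_calibrate(scores, labels)
print(normalized_cross_entropy(cal(scores), labels))
```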
{
"docid": "d6fbe041eb639e18c3bb9c1ed59d4194",
"text": "Based on discrete event-triggered communication scheme (DETCS), this paper is concerned with the satisfactory H ! / H 2 event-triggered fault-tolerant control problem for networked control system (NCS) with α -safety degree and actuator saturation constraint from the perspective of improving satisfaction of fault-tolerant control and saving network resource. Firstly, the closed-loop NCS model with actuator failures and actuator saturation is built based on DETCS; Secondly, based on Lyapunov-Krasovskii function and the definition of α -safety degree given in the paper, a sufficient condition is presented for NCS with the generalized H2 and H! performance, which is the contractively invariant set of fault-tolerance with α -safety degree, and the co-design method for event-triggered parameter and satisfactory faulttolerant controller is also given in this paper. Moreover, the simulation example verifies the feasibility of improving system satisfaction and the effectiveness of saving network resource for the method. Finally, the compatibility analysis of the related indexes is also discussed and analyzed.",
"title": ""
},
{
"docid": "b5af84f96015be76875f620d0c24e646",
"text": "The worldwide burden of cancer (malignant tumor) is a major health problem, with more than 8 million new cases and 5 million deaths per year. Cancer is the second leading cause of death. With growing techniques the survival rate has increased and so it becomes important to contribute even the smallest help in this field favoring the survival rate. Tumor is a mass of tissue formed as the result of abnormal, excessive, uncoordinated, autonomous and purposeless proliferation of cells.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "34ab20699d12ad6cca34f67cee198cd9",
"text": "Such as relational databases, most graphs databases are OLTP databases (online transaction processing) of generic use and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on our understanding of how things are connected. This is more common than one may think. And in many cases it is not only how things are connected but often one wants to know something about the different relationships in our field their names, qualities, weight and so on. Briefly, connectivity is the key. The graphs are the best abstraction one has to model and query the connectivity; databases graphs in turn give developers and the data specialists the ability to apply this abstraction to their specific problems. For this purpose, in this paper one used this approach to simulate the route planner application, capable of querying connected data. Merely having keys and values is not enough; no more having data partially connected through joins semantically poor. We need both the connectivity and contextual richness to operate these solutions. The case study herein simulates a railway network railway stations connected with one another where each connection between two stations may have some properties. And one answers the question: how to find the optimized route (path) and know whether a station is reachable from one station or not and in which depth.",
"title": ""
},
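The passage above queries a railway network for optimal routes and reachability. The property-graph database itself is not reproduced here; the sketch below shows the equivalent queries on an in-memory adjacency map with a plain Dijkstra search (the station names and travel times are made up).

```python
import heapq

# Hypothetical railway graph: station -> {neighbour: travel_minutes}
rail = {
    "A": {"B": 5, "C": 12},
    "B": {"A": 5, "C": 4, "D": 9},
    "C": {"A": 12, "B": 4, "D": 3},
    "D": {"B": 9, "C": 3},
}

def shortest_route(graph, start, goal):
    """Dijkstra search: return (total_time, path), or (inf, []) if goal is unreachable."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_route(rail, "A", "D"))   # -> (12, ['A', 'B', 'C', 'D'])
```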
{
"docid": "fb1d84d15fd4a531a3a81c254ad3cab0",
"text": "Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies on massive corpora only, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.",
"title": ""
},
{
"docid": "572fbd0682b1b6ded39e8ef42325ad7c",
"text": "Here, we describe a real planning problem in the tramp shipping industry. A tramp shipping company may have a certain amount of contract cargoes that it is committed to carry, and tries to maximize the profit from optional cargoes. For real long-term contracts, the sizes of the cargoes are flexible. However, in previous research within tramp ship routing, the cargo quantities are regarded as fixed. We present an MP-model of the problem and a set partitioning approach to solve the multi-ship pickup and delivery problem with time windows and flexible cargo sizes. The columns are generated a priori and the most profitable ship schedule for each cargo set–ship combination is included in the set partitioning problem. We have tested the method on several real-life cases, and the results show the potential economical effects for the tramp shipping companies by utilizing flexible cargo sizes when generating the schedules. Journal of the Operational Research Society (2007) 58, 1167–1177. doi:10.1057/palgrave.jors.2602263 Published online 16 August 2006",
"title": ""
},
{
"docid": "fe012505cc7a2ea36de01fc92924a01a",
"text": "The wide usage of Machine Learning (ML) has lead to research on the attack vectors and vulnerability of these systems. The defenses in this area are however still an open problem, and often lead to an arms race. We define a naive, secure classifier at test time and show that a Gaussian Process (GP) is an instance of this classifier given two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we are able to show that a classifier is either secure, or generalizes and thus learns. Our analysis also points towards another factor influencing robustness, the curvature of the classifier. This connection is not unknown for linear models, but GP offer an ideal framework to study this relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets applying test and training time attacks and membership inference. We show that we only change which attacks are needed to succeed, instead of alleviating the threat. Only for membership inference, there is a setting in which attacks are unsuccessful (< 10% increase in accuracy over random guess). Given these results, we define a classification scheme based on voting, ParGP. This allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only in cases when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.",
"title": ""
},
{
"docid": "a8b65414a8485633edf6c951bcfe285f",
"text": "This article introduces a class of first-order stationary time-varying Pitman-Yor processes. Subsuming our construction of time-varying Dirichlet processes presented in (Caron et al., 2007), these models can be used for time-dynamic density estimation and clustering. Our intuitive and simple construction relies on a generalized Pólya urn scheme. Significantly, this construction yields marginal distributions at each time point that can be explicitly characterized and easily controlled. Inference is performed using Markov chain Monte Carlo and sequential Monte Carlo methods. We demonstrate our models and algorithms on epidemiological and video tracking data.",
"title": ""
},
{
"docid": "7c2425bb7395f17935e7e32122d12cce",
"text": "The development of microwave breast cancer detection and treatment techniques has been driven by reports of substantial contrast in the dielectric properties of malignant and normal breast tissues. However, definitive knowledge of the dielectric properties of normal and diseased breast tissues at microwave frequencies has been limited by gaps and discrepancies across previously published studies. To address these issues, we conducted a large-scale study to experimentally determine the ultrawideband microwave dielectric properties of a variety of normal, malignant and benign breast tissues, measured from 0.5 to 20 GHz using a precision open-ended coaxial probe. Previously, we reported the dielectric properties of normal breast tissue samples obtained from reduction surgeries. Here, we report the dielectric properties of normal (adipose, glandular and fibroconnective), malignant (invasive and non-invasive ductal and lobular carcinomas) and benign (fibroadenomas and cysts) breast tissue samples obtained from cancer surgeries. We fit a one-pole Cole-Cole model to the complex permittivity data set of each characterized sample. Our analyses show that the contrast in the microwave-frequency dielectric properties between malignant and normal adipose-dominated tissues in the breast is considerable, as large as 10:1, while the contrast in the microwave-frequency dielectric properties between malignant and normal glandular/fibroconnective tissues in the breast is no more than about 10%.",
"title": ""
},
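The passage above fits a one-pole Cole-Cole model to each measured permittivity spectrum. A minimal evaluation of that model is sketched below; the parameter values are illustrative placeholders, not the paper's fitted tissue parameters.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cole_cole(freq_hz, eps_inf, delta_eps, tau_s, alpha, sigma_s):
    """One-pole Cole-Cole complex relative permittivity:
    eps(w) = eps_inf + delta_eps / (1 + (j*w*tau)^(1-alpha)) + sigma_s / (j*w*eps0)."""
    w = 2 * np.pi * np.asarray(freq_hz)
    return eps_inf + delta_eps / (1 + (1j * w * tau_s) ** (1 - alpha)) + sigma_s / (1j * w * EPS0)

# Illustrative (not fitted) parameters, evaluated over 0.5-20 GHz.
f = np.linspace(0.5e9, 20e9, 5)
eps = cole_cole(f, eps_inf=4.0, delta_eps=45.0, tau_s=8e-12, alpha=0.1, sigma_s=0.7)
print(eps.real)    # dielectric constant
print(-eps.imag)   # loss factor
```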
{
"docid": "34fdd06eb5e5d2bf9266c6852710bed2",
"text": "If subjects are shown an angry face as a target visual stimulus for less than forty milliseconds and are then immediately shown an expressionless mask, these subjects report seeing the mask but not the target. However, an aversively conditioned masked target can elicit an emotional response from subjects without being consciously perceived,. Here we study the mechanism of this unconsciously mediated emotional learning. We measured neural activity in volunteer subjects who were presented with two angry faces, one of which, through previous classical conditioning, was associated with a burst of white noise. In half of the trials, the subjects' awareness of the angry faces was prevented by backward masking with a neutral face. A significant neural response was elicited in the right, but not left, amygdala to masked presentations of the conditioned angry face. Unmasked presentations of the same face produced enhanced neural activity in the left, but not right, amygdala. Our results indicate that, first, the human amygdala can discriminate between stimuli solely on the basis of their acquired behavioural significance, and second, this response is lateralized according to the subjects' level of awareness of the stimuli.",
"title": ""
},
{
"docid": "f3fb98614d1d8ff31ca977cbf6a15a9c",
"text": "Paraphrase Identification and Semantic Similarity are two different yet well related tasks in NLP. There are many studies on these two tasks extensively on structured texts in the past. However, with the strong rise of social media data, studying these tasks on unstructured texts, particularly, social texts in Twitter is very interesting as it could be more complicated problems to deal with. We investigate and find a set of simple features which enables us to achieve very competitive performance on both tasks in Twitter data. Interestingly, we also confirm the significance of using word alignment techniques from evaluation metrics in machine translation in the overall performance of these tasks.",
"title": ""
},
{
"docid": "fdfbcacd5a31038ecc025315c7483b5a",
"text": "Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer. Although important, answer selection is only one stage in a standard end-to-end question answering pipeline. is paper explores the eectiveness of convolutional neural networks (CNNs) for answer selection in an end-to-end context using the standard TrecQA dataset. We observe that a simple idf-weighted word overlap algorithm forms a very strong baseline, and that despite substantial eorts by the community in applying deep learning to tackle answer selection, the gains are modest at best on this dataset. Furthermore, it is unclear if a CNN is more eective than the baseline in an end-to-end context based on standard retrieval metrics. To further explore this nding, we conducted a manual user evaluation, which conrms that answers from the CNN are detectably beer than those from idf-weighted word overlap. is result suggests that users are sensitive to relatively small dierences in answer selection quality.",
"title": ""
},
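The strong baseline in the passage above is idf-weighted word overlap between the question and each candidate sentence. A small self-contained version is sketched below; estimating idf from the candidate pool itself is one reasonable choice but an assumption here, not necessarily the paper's setup.

```python
import math
from collections import Counter

def idf_weights(candidates):
    """Inverse document frequency computed over the candidate sentences."""
    n = len(candidates)
    df = Counter(w for sent in candidates for w in set(sent.lower().split()))
    return {w: math.log(n / df[w]) for w in df}

def overlap_score(question, candidate, idf):
    """Sum of idf weights of words shared by the question and the candidate."""
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return sum(idf.get(w, 0.0) for w in q & c)

cands = ["the capital of france is paris",
         "paris hosted the olympic games",
         "berlin is the capital of germany"]
idf = idf_weights(cands)
q = "what is the capital of france"
print(max(cands, key=lambda s: overlap_score(q, s, idf)))  # -> the first sentence
```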
{
"docid": "6bb4600498b34121c32b5d428ec3e49f",
"text": "Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this article, we present a novel solution to this problem. We propose a compression scheme for a priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.",
"title": ""
},
{
"docid": "7474ffa9e6009ca5ded3d217a8dd2375",
"text": "The cost of error correction has been increasing exponentially with the advancement of software industry. To minimize software errors, it is necessary to extract accurate requirements in the early stage of software development. In the previous study, we extracted the priorities of requirements based on the Use Case Point (UCP), which however revealed the issues inherent to the existing UCP as follows. (i) The UCP failed to specify the structure of use cases or the method of write the use cases, and (ii) the number of transactions determined the use case weight in the UCP. Yet, efforts taken for implementation depend on the types and number of operations performed in each transaction. To address these issues, the present paper proposes an improved UCP and applies it to the prioritization. The proposed method enables more accurate measurement than the existing UCP-based prioritization.",
"title": ""
}
] |
scidocsrr
|
3d2d7869ff822bcd84ac2bc7bdc4b228
|
Automatic Labelling of Topic Models Learned from Twitter by Summarisation
|
[
{
"docid": "1538bcc562f0360ab005f757c9e4562f",
"text": "This paper presents the novel task of best topic word selection, that is the selection of the topic word that is the best label for a given topic, as a means of enhancing the interpretation and visualisation of topic models. We propose a number of features intended to capture the best topic word, and show that, in combination as inputs to a reranking model, we are able to consistently achieve results above the baseline of simply selecting the highest-ranked topic word. This is the case both when training in-domain over other labelled topics for that topic model, and cross-domain, using only labellings from independent topic models learned over document collections from different domains and genres.",
"title": ""
},
{
"docid": "21378678c661aa581c7331b16ae398ff",
"text": "Automated topic labelling brings benefits for users aiming at analysing and understanding document collections, as well as for search engines targetting at the linkage between groups of words and their inherent topics. Current approaches to achieve this suffer in quality, but we argue their performances might be improved by setting the focus on the structure in the data. Building upon research for concept disambiguation and linking to DBpedia, we are taking a novel approach to topic labelling by making use of structured data exposed by DBpedia. We start from the hypothesis that words co-occuring in text likely refer to concepts that belong closely together in the DBpedia graph. Using graph centrality measures, we show that we are able to identify the concepts that best represent the topics. We comparatively evaluate our graph-based approach and the standard text-based approach, on topics extracted from three corpora, based on results gathered in a crowd-sourcing experiment. Our research shows that graph-based analysis of DBpedia can achieve better results for topic labelling in terms of both precision and topic coverage.",
"title": ""
},
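The passage above ranks DBpedia concepts by graph centrality to label a topic. A toy version of that ranking step, using networkx PageRank over a concept co-occurrence graph, is sketched below; the concept graph here is handwritten for illustration rather than queried from DBpedia.

```python
import networkx as nx

# Hypothetical snippet of a concept graph linking the top words of one topic.
g = nx.Graph()
g.add_edges_from([
    ("Football", "Premier_League"), ("Football", "Goal"),
    ("Premier_League", "Manchester_United"), ("Goal", "Manchester_United"),
    ("Football", "Referee"),
])

# The most central concept is proposed as the topic label.
centrality = nx.pagerank(g)
label = max(centrality, key=centrality.get)
print(label, round(centrality[label], 3))
```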
{
"docid": "fd517c58ce61fdbaf3caf0fdffb1e1f2",
"text": "We focus on the problem of selecting meaningful tweets given a user's interests; the dynamic nature of user interests, the sheer volume, and the sparseness of individual messages make this an challenging problem. Specifically, we consider the task of time-aware tweets summarization, based on a user's history and collaborative social influences from ``social circles.'' We propose a time-aware user behavior model, the Tweet Propagation Model (TPM), in which we infer dynamic probabilistic distributions over interests and topics. We then explicitly consider novelty, coverage, and diversity to arrive at an iterative optimization algorithm for selecting tweets. Experimental results validate the effectiveness of our personalized time-aware tweets summarization method based on TPM.",
"title": ""
}
] |
[
{
"docid": "f404181c42001003b0352ef5ceb12c3e",
"text": "Let G be a graph with n vertices and suppose that for each vertex v in G, there exists a list of k colors, L(v), such that there is a unique proper coloring for G from this collection of lists, then G is called a uniquely k–list colorable graph. Recently M. Mahdian and E.S. Mahmoodian characterized uniquely 2–list colorable graphs. Here we state some results which will pave the way in characterization of uniquely k–list colorable graphs. There is a relationship between this concept and defining sets in graph colorings and critical sets in latin squares.",
"title": ""
},
{
"docid": "d698ce3df2f1216b7b78237dcecb0df1",
"text": "A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small RF input power conditions. A test circuit of the proposed differential-drive rectifier was fabricated with 0.18 mu m CMOS technology, and the measured performance was compared with those of other types of rectifiers. Dependence of the PCE on the input RF signal frequency, output loading conditions and transistor sizing was also evaluated. At the single-stage configuration, 67.5% of PCE was achieved under conditions of 953 MHz, - 12.5 dBm RF input and 10 KOmega output load. This is twice as large as that of the state-of-the-art rectifier circuit. The peak PCE increases with a decrease in operation frequency and with an increase in output load resistance. In addition, experimental results show the existence of an optimum transistor size in accordance with the output loading conditions. The multi-stage configuration for larger output DC voltage is also presented.",
"title": ""
},
{
"docid": "da9ffb00398f6aad726c247e3d1f2450",
"text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.",
"title": ""
},
{
"docid": "81cac13cfa5f203bc782823aced0619d",
"text": "Quantitative study of protein-protein and protein-ligand interactions in solution requires accurate determination of protein concentration. Often, for proteins available only in \"molecular biological\" amounts, it is difficult or impossible to make an accurate experimental measurement of the molar extinction coefficient of the protein. Yet without a reliable value of this parameter, one cannot determine protein concentrations by the usual uv spectroscopic means. Fortunately, knowledge of amino acid residue sequence and promoter molecular weight (and thus also of amino acid composition) is generally available through the DNA sequence, which is usually accurately known for most such proteins. In this paper we present a method for calculating accurate (to +/- 5% in most cases) molar extinction coefficients for proteins at 280 nm, simply from knowledge of the amino acid composition. The method is calibrated against 18 \"normal\" globular proteins whose molar extinction coefficients are accurately known, and the assumptions underlying the method, as well as its limitations, are discussed.",
"title": ""
},
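The passage above computes molar extinction coefficients at 280 nm from amino acid composition alone. A minimal calculator is sketched below using the widely cited per-residue coefficients of roughly 5500 (Trp), 1490 (Tyr) and 125 (cystine) M^-1 cm^-1; the paper's own calibration constants may differ, so treat these numbers as illustrative.

```python
def extinction_280nm(n_trp: int, n_tyr: int, n_cystine: int) -> float:
    """Approximate molar extinction coefficient at 280 nm (M^-1 cm^-1)
    from counts of tryptophan, tyrosine and disulfide-bonded cystine."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

# Example: a hypothetical protein with 4 Trp, 11 Tyr and 2 disulfide bonds.
eps = extinction_280nm(4, 11, 2)
mw = 25000.0                      # molecular weight in g/mol (assumed)
a280 = eps * (1.0 / mw)           # absorbance of a 1 mg/mL solution, 1 cm path
print(eps, round(a280, 3))
```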
{
"docid": "c4b48cda893f15d9bd8ad5c213e3f3a2",
"text": "Modern-day computer power is a great servant for today’s information hungry society. The increasing pervasiveness of such powerful machinery greatly influences fundamental information processes such as, for instance, the acquisition of information, its storage, manipulation, retrieval, dissemination, or its usage. Information society depends on these fundamental information processes in various ways. This chapter investigates the diverse and dynamic relationship between information society and the fundamental information processes just mentioned from a modern technology perspective.",
"title": ""
},
{
"docid": "6f9ae554513bba3c685f86909e31645f",
"text": "Triboelectric energy harvesting has been applied to various fields, from large-scale power generation to small electronics. Triboelectric energy is generated when certain materials come into frictional contact, e.g., static electricity from rubbing a shoe on a carpet. In particular, textile-based triboelectric energy-harvesting technologies are one of the most promising approaches because they are not only flexible, light, and comfortable but also wearable. Most previous textile-based triboelectric generators (TEGs) generate energy by vertically pressing and rubbing something. However, we propose a corrugated textile-based triboelectric generator (CT-TEG) that can generate energy by stretching. Moreover, the CT-TEG is sewn into a corrugated structure that contains an effective air gap without additional spacers. The resulting CT-TEG can generate considerable energy from various deformations, not only by pressing and rubbing but also by stretching. The maximum output performances of the CT-TEG can reach up to 28.13 V and 2.71 μA with stretching and releasing motions. Additionally, we demonstrate the generation of sufficient energy from various activities of a human body to power about 54 LEDs. These results demonstrate the potential application of CT-TEGs for self-powered systems.",
"title": ""
},
{
"docid": "208a0855181c0d3d44e8bc98b6d4aa7d",
"text": "We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioning on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision.",
"title": ""
},
{
"docid": "08e5e3dadfe5fa766b7941ba76d24372",
"text": "In the aging face, the lateral third of the brow ages first and ages most. Aesthetically, eyebrow shape is more significant than height and eyebrow shape is highly dependent on the level of the lateral brow complex. Surgical attempts to elevate the brow complex are usually successful medially, but often fail laterally. The \"modified lateral brow lift\" is a hybrid technique, incorporating features of an endoscopic brow lift (small hidden incisions, deep tissue fixation) and features of an open coronal brow lift (full thickness scalp excision). Sensory innervation of the scalp is preserved and secure fixation of the elevated lateral brow is achieved. Side effects and complications are minimal.",
"title": ""
},
{
"docid": "e919e6657597d61e4986f29766f142c8",
"text": "Object reconstruction from a single image - in the wild - is a problem where we can make progress and get meaningful results today. This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.",
"title": ""
},
{
"docid": "90316f6b23e4feec08be1783fa61826c",
"text": "Mouse visual cortex is subdivided into multiple distinct, hierarchically organized areas that are interconnected through feedforward (FF) and feedback (FB) pathways. The principal synaptic targets of FF and FB axons that reciprocally interconnect primary visual cortex (V1) with the higher lateromedial extrastriate area (LM) are pyramidal cells (Pyr) and parvalbumin (PV)-expressing GABAergic interneurons. Recordings in slices of mouse visual cortex have shown that layer 2/3 Pyr cells receive excitatory monosynaptic FF and FB inputs, which are opposed by disynaptic inhibition. Most notably, inhibition is stronger in the FF than FB pathway, suggesting pathway-specific organization of feedforward inhibition (FFI). To explore the hypothesis that this difference is due to diverse pathway-specific strengths of the inputs to PV neurons we have performed subcellular Channelrhodopsin-2-assisted circuit mapping in slices of mouse visual cortex. Whole-cell patch-clamp recordings were obtained from retrobead-labeled FF(V1→LM)- and FB(LM→V1)-projecting Pyr cells, as well as from tdTomato-expressing PV neurons. The results show that the FF(V1→LM) pathway provides on average 3.7-fold stronger depolarizing input to layer 2/3 inhibitory PV neurons than to neighboring excitatory Pyr cells. In the FB(LM→V1) pathway, depolarizing inputs to layer 2/3 PV neurons and Pyr cells were balanced. Balanced inputs were also found in the FF(V1→LM) pathway to layer 5 PV neurons and Pyr cells, whereas FB(LM→V1) inputs to layer 5 were biased toward Pyr cells. The findings indicate that FFI in FF(V1→LM) and FB(LM→V1) circuits are organized in a pathway- and lamina-specific fashion.",
"title": ""
},
{
"docid": "fee191728bc0b1fbf11344961be10215",
"text": "In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. Disciplines Computer Sciences Comments Vanderwende, L., Suzuki, H., Brockett, C., & Nenkova, A., Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion, Information Processing and Management, Special Issue on Summarization Volume 43, Issue 6, 2007, doi: 10.1016/j.ipm.2007.01.023 This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/736",
"title": ""
},
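The system in the passage above builds on SumBasic with topic focusing and lexical expansion. The core frequency-based sentence selection step (without the simplification and expansion components) can be sketched roughly as follows; the boost factor and toy sentences are invented.

```python
from collections import Counter

def summarize(sentences, topic_words, n_pick=2, topic_boost=3.0):
    """SumBasic-style selection: score sentences by average word probability,
    up-weight topic words, and square probabilities of used words to reduce redundancy."""
    words = [w for s in sentences for w in s.lower().split()]
    counts = Counter(words)
    total = sum(counts.values())
    prob = {w: c / total for w, c in counts.items()}
    for w in topic_words:                         # topic focusing
        prob[w] = prob.get(w, 0.0) * topic_boost
    chosen = []
    for _ in range(n_pick):
        best = max((s for s in sentences if s not in chosen),
                   key=lambda s: sum(prob.get(w, 0.0) for w in s.lower().split()) / max(len(s.split()), 1))
        chosen.append(best)
        for w in best.lower().split():            # SumBasic redundancy update
            prob[w] = prob.get(w, 0.0) ** 2
    return chosen

docs = ["heavy rain caused floods in the city",
        "the city opened shelters for flood victims",
        "a football match was postponed"]
print(summarize(docs, topic_words={"flood", "floods"}))  # -> the two flood-related sentences
```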
{
"docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a",
"text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.",
"title": ""
},
{
"docid": "2758f69183c1702eff235707dd742791",
"text": "Semiconducting single-walled carbon nanotubes are studied in the diffusive transport regime. The peak mobility is found to scale with the square of the nanotube diameter and inversely with temperature. The maximum conductance, corrected for the contacts, is linear in the diameter and inverse temperature. These results are in good agreement with theoretical predictions for acoustic phonon scattering in combination with the unusual band structure of nanotubes. These measurements set the upper bound for the performance of nanotube transistors operating in the diffusive regime.",
"title": ""
},
{
"docid": "66e00cb4593c1bc97a10e0b80dcd6a8f",
"text": "OBJECTIVE\nTo determine the probable factors responsible for stress among undergraduate medical students.\n\n\nMETHODS\nThe qualitative descriptive study was conducted at a public-sector medical college in Islamabad, Pakistan, from January to April 2014. Self-administered open-ended questionnaires were used to collect data from first year medical students in order to study the factors associated with the new environment.\n\n\nRESULTS\nThere were 115 students in the study with a mean age of 19±6.76 years. Overall, 35(30.4%) students had mild to moderate physical problems, 20(17.4%) had severe physical problems and 60(52.2%) did not have any physical problem. Average stress score was 19.6±6.76. Major elements responsible for stress identified were environmental factors, new college environment, student abuse, tough study routines and personal factors.\n\n\nCONCLUSIONS\nMajority of undergraduate students experienced stress due to both academic and emotional factors.",
"title": ""
},
{
"docid": "41a16f3eb3ff59d34e04ffa77bf1ae86",
"text": "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.",
"title": ""
},
{
"docid": "5a805b6f9e821b7505bccc7b70fdd557",
"text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage where detailed description and comparison -contrastive and comparativeanalysis will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive verses. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted through out the data analysis stage which then form the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicts that there are significant differences amongst the two TTs in relation to International Journal of Linguistics ISSN 1948-5425 2014, Vol. 6, No. 3 www.macrothink.org/ijl 119 the word choices including the lexical items and the other syntactic structure compared by the ST. These significant differences indicate some ideological transmission through translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.",
"title": ""
},
{
"docid": "68f74c4fc9d1afb00ac2ec0221654410",
"text": "Most algorithms in 3-D Computer Vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lens, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.",
"title": ""
},
{
"docid": "a3d7a6d788d6b520a4aa79343bd1b27e",
"text": "This paper explores the possibilities of analogical reasoning with vector space models. Given two pairs of words with the same relation (e.g. man:woman :: king:queen), it was proposed that the offset between one pair of the corresponding word vectors can be used to identify the unknown member of the other pair ( −−→ king − −−→ man + −−−−−→ woman = ?−−−→ queen). We argue against such “linguistic regularities” as a model for linguistic relations in vector space models and as a benchmark, and we show that the vector offset (as well as two other, better-performing methods) suffers from dependence on vector similarity.",
"title": ""
},
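The passage above critiques the vector-offset method for analogies. The method itself is easy to state in a few lines; the tiny vectors below are fabricated purely for illustration rather than learned embeddings.

```python
import numpy as np

emb = {   # toy 3-d "embeddings", invented for illustration only
    "man":   np.array([0.9, 0.1, 0.0]),
    "woman": np.array([0.8, 0.8, 0.0]),
    "king":  np.array([0.9, 0.1, 0.9]),
    "queen": np.array([0.8, 0.8, 0.9]),
    "apple": np.array([0.1, 0.2, 0.1]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Return the word closest (by cosine) to vec(b) - vec(a) + vec(c), excluding the inputs."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(analogy("man", "woman", "king"))   # -> queen (with these toy vectors)
```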
{
"docid": "221970fad528f2538930556dde7a0062",
"text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (B-CNN), which has shown dramatic performance gains on certain fine-grained recognition problems [15]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [12]. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computerized face detection system, it does not have the bias inherent in such a database. We demonstrate the performance of the B-CNN model beginning from an AlexNet-style network pre-trained on ImageNet. We then show results for fine-tuning using a moderate-sized and public external database, FaceScrub [17]. We also present results with additional fine-tuning on the limited training data provided by the protocol. In each case, the fine-tuned bilinear model shows substantial improvements over the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a large face database, the recently released VGG-Face model [20], can be converted into a B-CNN without any additional feature training. This B-CNN improves upon the CNN performance on the IJB-A benchmark, achieving 89.5% rank-1 recall.",
"title": ""
},
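The B-CNN in the passage above combines two convolutional feature maps with bilinear pooling. The pooling step alone (outer products summed over locations, then signed square root and L2 normalization) is sketched here in NumPy; the surrounding networks and face data are omitted.

```python
import numpy as np

def bilinear_pool(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """feat_a: (H*W, Ca), feat_b: (H*W, Cb) -> normalized bilinear descriptor of length Ca*Cb."""
    phi = feat_a.T @ feat_b                      # sum of outer products over all locations
    phi = phi.flatten()
    phi = np.sign(phi) * np.sqrt(np.abs(phi))    # signed square-root normalization
    return phi / (np.linalg.norm(phi) + 1e-12)   # L2 normalization

rng = np.random.default_rng(0)
fa = rng.standard_normal((49, 64))   # e.g. a 7x7 feature map with 64 channels
fb = rng.standard_normal((49, 32))   # a second stream with 32 channels
desc = bilinear_pool(fa, fb)
print(desc.shape)                    # -> (2048,)
```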
{
"docid": "9c68b87f99450e85f3c0c6093429937d",
"text": "We present a method for activity recognition that first estimates the activity performer's location and uses it with input data for activity recognition. Existing approaches directly take video frames or entire video for feature extraction and recognition, and treat the classifier as a black box. Our method first locates the activities in each input video frame by generating an activity mask using a conditional generative adversarial network (cGAN). The generated mask is appended to color channels of input images and fed into a VGG-LSTM network for activity recognition. To test our system, we produced two datasets with manually created masks, one containing Olympic sports activities and the other containing trauma resuscitation activities. Our system makes activity prediction for each video frame and achieves performance comparable to the state-of-the-art systems while simultaneously outlining the location of the activity. We show how the generated masks facilitate the learning of features that are representative of the activity rather than accidental surrounding information.",
"title": ""
}
] |
scidocsrr
|
72f899f5cc0afe1ad08184223f41111d
|
Leveraging multi-criteria customer feedback for satisfaction analysis and improved recommendations
|
[
{
"docid": "455a6fe5862e3271ac00057d1b569b11",
"text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"title": ""
}
] |
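The passage above incorporates multicriteria ratings into recommendation. One of the simplest aggregation-function approaches, learning how the overall rating depends on the criteria ratings via least squares, is sketched below; the criteria names and ratings are invented and this is only one of the techniques the article proposes.

```python
import numpy as np

# Hypothetical hotel ratings: columns = (cleanliness, location, service), plus an overall rating.
criteria = np.array([[4, 5, 3], [2, 3, 2], [5, 4, 5], [3, 3, 4], [4, 2, 4]], dtype=float)
overall = np.array([4.0, 2.5, 5.0, 3.5, 3.5])

# Fit overall ~ w . criteria + b with ordinary least squares.
X = np.hstack([criteria, np.ones((len(criteria), 1))])
w, *_ = np.linalg.lstsq(X, overall, rcond=None)

new_item = np.array([5, 3, 4, 1.0])      # predicted criteria ratings for an unseen item (+ bias term)
print(w.round(2), float(new_item @ w))   # learned weights and predicted overall rating
```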
[
{
"docid": "fdd94d3d9df0171e41179336bd282bdd",
"text": "The authors propose a reinforcement-learning mechanism as a model for recurrent choice and extend it to account for skill learning. The model was inspired by recent research in neurophysiological studies of the basal ganglia and provides an integrated explanation of recurrent choice behavior and skill learning. The behavior includes effects of differential probabilities, magnitudes, variabilities, and delay of reinforcement. The model can also produce the violation of independence, preference reversals, and the goal gradient of reinforcement in maze learning. An experiment was conducted to study learning of action sequences in a multistep task. The fit of the model to the data demonstrated its ability to account for complex skill learning. The advantages of incorporating the mechanism into a larger cognitive architecture are discussed.",
"title": ""
},
{
"docid": "68884e8b00cff0a8b052190fba4d56b9",
"text": "A case of fulminant dissecting cellulitis of the scalp in a fifteen-year-old African American male is reported. The presentation was refractory to standard medical treatment such that treatment required radical subgaleal excision of the entire hair-bearing scalp. Reconstruction was in the form of split-thickness skin grafting at the level of the pericranium following several days of vacuum-assisted closure dressing to promote an acceptable wound bed for skin grafting and to ensure appropriate clearance of infection. Numerous nonsurgical modalities have been described for the treatment of dissecting cellulitis of the scalp, with surgical intervention reserved for patients refractory to medical treatment. The present paper reports a fulminant form of the disease in an atypical age of presentation, adolescence. The pathophysiology, etiology, natural history, complications and treatment options for dissecting cellulitis of the scalp are reviewed, and the authors suggest this method of treatment to be efficacious for severe presentations refractory to medical therapy.",
"title": ""
},
{
"docid": "d763198d3bfb1d30b153e13245c90c08",
"text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.",
"title": ""
},
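The tailbot passage above stabilizes the body angle in midair with a sliding mode controller driven by the tail. A generic discrete-time sketch of such a controller on a double-integrator body-angle model is given below; the gains, inertia, saturation limit, and dynamics are illustrative assumptions, not the robot's identified model.

```python
def simulate(theta0=0.6, dt=0.001, steps=3000, lam=20.0, k=40.0, u_max=0.05):
    """Drive body angle theta (rad) to 0 with tail torque u = -k*I*sat(s), s = dtheta + lam*theta."""
    theta, dtheta, inertia = theta0, 0.0, 2e-4     # inertia in kg*m^2, illustrative
    for _ in range(steps):
        s = dtheta + lam * theta                   # sliding surface
        u = -k * inertia * max(-1.0, min(1.0, s))  # saturated switching term (reduces chattering)
        u = max(-u_max, min(u_max, u))             # tail torque limit
        dtheta += (u / inertia) * dt               # Euler integration of the body-angle dynamics
        theta += dtheta * dt
    return theta

print(abs(simulate()) < 0.01)   # body angle settles near zero -> True
```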
{
"docid": "67db2885a2b8780cbfd19c1ff0cfba36",
"text": "Mechanocomputational techniques in conjunction with artificial intelligence (AI) are revolutionizing the interpretations of the crucial information from the medical data and converting it into optimized and organized information for diagnostics. It is possible due to valuable perfection in artificial intelligence, computer aided diagnostics, virtual assistant, robotic surgery, augmented reality and genome editing (based on AI) technologies. Such techniques are serving as the products for diagnosing emerging microbial or non microbial diseases. This article represents a combinatory approach of using such approaches and providing therapeutic solutions towards utilizing these techniques in disease diagnostics.",
"title": ""
},
{
"docid": "a2df7bbce7247125ef18a17d7dbb2166",
"text": "Few studies have evaluated the effectiveness of cyberbullying prevention/intervention programs. The goals of the present study were to develop a Theory of Reasoned Action (TRA)-based video program to increase cyberbullying knowledge (1) and empathy toward cyberbullying victims (2), reduce favorable attitudes toward cyberbullying (3), decrease positive injunctive (4) and descriptive norms about cyberbullying (5), and reduce cyberbullying intentions (6) and cyberbullying behavior (7). One hundred sixty-seven college students were randomly assigned to an online video cyberbullying prevention program or an assessment-only control group. Immediately following the program, attitudes and injunctive norms for all four types of cyberbullying behavior (i.e., unwanted contact, malice, deception, and public humiliation), descriptive norms for malice and public humiliation, empathy toward victims of malice and deception, and cyberbullying knowledge significantly improved in the experimental group. At one-month follow-up, malice and public humiliation behavior, favorable attitudes toward unwanted contact, deception, and public humiliation, and injunctive norms for public humiliation were significantly lower in the experimental than the control group. Cyberbullying knowledge was significantly higher in the experimental than the control group. These findings demonstrate a brief cyberbullying video is capable of improving, at one-month follow-up, cyberbullying knowledge, cyberbullying perpetration behavior, and TRA constructs known to predict cyberbullying perpetration. Considering the low cost and ease with which a video-based prevention/intervention program can be delivered, this type of approach should be considered to reduce cyberbullying.",
"title": ""
},
{
"docid": "25f777d33e66bfc8ab8516ccdd3be51d",
"text": "This paper describes the design of a low power fully-adaptive wideband, flexible reach transceiver in 16nm FinFET CMOS embedded within FPGA. The receiver utilizes a 3-stage CTLE with a segmented AGC to minimize parasitic peaking and 15-tap DFE to operate over both short and long channels. The transmitter uses a swing boosted CML driver architecture. Low noise wideband fractional N LC PLLs combined with linear active inductor based phase interpolators and high speed clocking are utilized for low jitter clock generation. The transceiver achieves >1200mVdpp TX swing with <;190 fs RJ and 5.39 ps TJ to achieve BER <; 10-15 over a 30 dB loss backplane at 32.75 Gb/s, while consuming 577 mW.",
"title": ""
},
{
"docid": "e6dcae244f91dc2d7e843d9860ac1cfd",
"text": "After Disney's Michael Eisner, Miramax's Harvey Weinstein, and Hewlett-Packard's Carly Fiorina fell from their heights of power, the business media quickly proclaimed thatthe reign of abrasive, intimidating leaders was over. However, it's premature to proclaim their extinction. Many great intimidators have done fine for a long time and continue to thrive. Their modus operandi runs counter to a lot of preconceptions about what it takes to be a good leader. They're rough, loud, and in your face. Their tactics include invading others' personal space, staging tantrums, keeping people guessing, and possessing an indisputable command of facts. But make no mistake--great intimidators are not your typical bullies. They're driven by vision, not by sheer ego or malice. Beneath their tough exteriors and sharp edges are some genuine, deep insights into human motivation and organizational behavior. Indeed, these leaders possess political intelligence, which can make the difference between paralysis and successful--if sometimes wrenching--organizational change. Like socially intelligent leaders, politically intelligent leaders are adept at sizing up others, but they notice different things. Those with social intelligence assess people's strengths and figure out how to leverage them; those with political intelligence exploit people's weaknesses and insecurities. Despite all the obvious drawbacks of working under them, great intimidators often attract the best and brightest. And their appeal goes beyond their ability to inspire high performance. Many accomplished professionals who gravitate toward these leaders want to cultivate a little \"inner intimidator\" of their own. In the author's research, quite a few individuals reported having positive relationships with intimidating leaders. In fact, some described these relationships as profoundly educational and even transformational. So before we throw out all the great intimidators, the author argues, we should stop to consider what we would lose.",
"title": ""
},
{
"docid": "e227e21d9b0523fdff82ca898fea0403",
"text": "As computer games become more complex and consumers demand more sophisticated computer controlled agents, developers are required to place a greater emphasis on the artificial intelligence aspects of their games. One source of sophisticated AI techniques is the artificial intelligence research community. This paper discusses recent efforts by our group at the University of Michigan Artificial Intelligence Lab to apply state of the art artificial intelligence techniques to computer games. Our experience developing intelligent air combat agents for DARPA training exercises, described in John Laird's lecture at the 1998 Computer Game Developer's Conference, suggested that many principles and techniques from the research community are applicable to games. A more recent project, called the Soar/Games project, has followed up on this by developing agents for computer games, including Quake II and Descent 3. The result of these two research efforts is a partially implemented design of an artificial intelligence engine for games based on well established AI systems and techniques.",
"title": ""
},
{
"docid": "3d895fa9057d76ed0488f530a18f15c4",
"text": "Nowadays, computer interaction is mostly done using dedicated devices. But gestures are an easy mean of expression between humans that could be used to communicate with computers in a more natural manner. Most of the current research on hand gesture recognition for HumanComputer Interaction rely on either the Neural Networks or Hidden Markov Models (HMMs). In this paper, we compare different approaches for gesture recognition and highlight the major advantages of each. We show that gestures recognition based on the Bio-mechanical characteristic of the hand provides an intuitive approach which provides more accuracy and less complexity.",
"title": ""
},
{
"docid": "a6773662bc858664d95e3df315d11f6c",
"text": "In this paper, we examine the strength of deep learning technique for diagnosing lung cancer on medical image analysis problem. Convolutional neural networks (CNNs) models become popular among the pattern recognition and computer vision research area because of their promising outcome on generating high-level image representations. We propose a new deep learning architecture for learning high-level image representation to achieve high classification accuracy with low variance in medical image binary classification tasks. We aim to learn discriminant compact features at beginning of our deep convolutional neural network. We evaluate our model on Kaggle Data Science Bowl 2017 (KDSB17) data set, and compare it with some related works proposed in the Kaggle competition.",
"title": ""
},
{
"docid": "30c980c96931938fff76dbf6fb8aa824",
"text": "English. Emojitalianobot and EmojiWorldBot are two new online tools and digital environments for translation into emoji on Telegram, the popular instant messaging platform. Emojitalianobot is the first open and free Emoji-Italian and Emoji-English translation bot based on Unicode descriptions. The bot was designed to support the translation of Pinocchio into emoji carried out by the followers of the \"Scritture brevi\" blog on Twitter and contains a glossary with all the uses of emojis in the translation of the famous Italian novel. EmojiWorldBot, an off-spring project of Emojitalianobot, is a multilingual dictionary that uses Emoji as a pivot language from dozens of different languages. Currently the emoji-word and word-emoji functions are available for 72 languages imported from the Unicode tables and provide users with an easy search capability to map words in each of these languages to emojis, and vice versa. This paper presents the projects, the background and the main characteristics of these applications. Italiano. Emojitalianobot e EmojiWorldBot sono due applicazioni online per la traduzione in e da emoji su Telegram, la popolare piattaforma di messaggistica istantanea. Emojitalianobot è il primo bot aperto e gratuito di traduzione che contiene i dizionari Emoji-Italiano ed Emoji-Inglese basati sule descrizioni Unicode. Il bot è stato ideato per coadiuvare la traduzione di Pinocchio in emoji su Twitter da parte dei follower del blog Scritture brevi e contiene pertanto anche il glossario con tutti gli usi degli emoji nella traduzione del celebre romanzo per ragazzi. EmojiWorldBot, epigono di Emojitalianobot, è un dizionario multilingue che usa gli emoji come lingua pivot tra dozzine di lingue differenti. Attualmente le funzioni emoji-parola e parola-emoji sono disponibili per 72 lingue importate dalle tabelle Unicode e forniscono agli utenti delle semplici funzioni di ricerca per trovare le corrispondenze in emoji delle parole e viceversa per ciascuna di queste lingue. Questo contributo presenta i progetti, il background e le principali caratteristiche di queste",
"title": ""
},
{
"docid": "95349f6ccc3ca99154f672e65894ea41",
"text": "Banks are increasingly using secondary loan sales to manage credit risk and diversify their portfolios. However, loan sales fundamentally alter the lending process by separating loan origination from servicing and funding. How do banks reduce agency problems that arise from selling loans? How do loan sales affect lending relationships and borrower’s access to loans? Our analysis suggests that sold loans are structured to reduce buyer-seller agency problems. Sold loans contain additional, more restrictive covenants, particularly when agency problems between buyers and sellers are likely to be more severe, such as when low reputation lenders originate the loan. When loans are sold, borrowers benefit from increased access to private debt capital, both in the present and in the future. This potentially balances the costs of more restrictive covenants and borrowing from additional lenders. Contrary to concerns that that loan selling negatively impacts lending relationships, borrowers whose loans are sold are more likely to retain their lending relationships. We also provide large-sample evidence on the characteristics of borrowers and lenders that affect salability and find evidence consistent with information asymmetry placing limitations on loan selling. * Drucker is from the Graduate School of Business, Columbia University. Email: sd2281@columbia.edu. Phone: 212-854-4151. Puri is from the Fuqua School of Business, Duke University, and NBER. Email: mpuri@duke.edu. Phone: 919-660-7657. We thank Ken Ayotte, Chris Mayer, Anil Shivdasani, and seminar participants at Ohio University and the Washington University in St. Louis Corporate Finance Conference for helpful comments. We acknowledge funding from the FDIC Center for Financial Research.",
"title": ""
},
{
"docid": "8fec42521158443ba03d43c6f59ecddb",
"text": "Conditional Restricted Boltzmann Machines (CRBMs) are rich probabilistic models that have recently been applied to a wide range of problems, including collaborative filtering, classification, and modeling motion capture data. While much progress has been made in training non-conditional RBMs, these algorithms are not applicable to conditional models and there has been almost no work on training and generating predictions from conditional RBMs for structured output problems. We first argue that standard Contrastive Divergence-based learning may not be suitable for training CRBMs. We then identify two distinct types of structured output prediction problems and propose an improved learning algorithm for each. The first problem type is one where the output space has arbitrary structure but the set of likely output configurations is relatively small, such as in multi-label classification. The second problem is one where the output space is arbitrarily structured but where the output space variability is much greater, such as in image denoising or pixel labeling. We show that the new learning algorithms can work much better than Contrastive Divergence on both types of problems.",
"title": ""
},
{
"docid": "46df34ed9fb6abcc0e6250972fca1faa",
"text": "Reliable, scalable and secured framework for predicting Heart diseases by mining big data is designed. Components of Apache Hadoop are used for processing of big data used for prediction. For increasing the performance, scalability, and reliability Hadoop clusters are deployed on Google Cloud Storage. Mapreduce based Classification via clustering method is proposed for efficient classification of instances using reduced attributes. Mapreduce based C 4.5 decision tree algorithm is improved and implemented to classify the instances. Datasets are analyzed on WEKA (Waikato Environment for Knowledge Analysis) and Hadoop. Classification via clustering method performs classification with 98.5% accuracy on WEKA with reduced attributes. On Mapreduce paradigm using this approach execution time is improved. With clustered instances 49 nodes of decision tree are reduced to 32 and execution time of Mapreduce program is reduced from 113 seconds to 84 seconds. Mapreduce based decision trees present classification of instances more accurately as compared to WEKA based decision trees.",
"title": ""
},
{
"docid": "0ed9c61670394b46c657593d71aa25e4",
"text": "We developed a novel computational framework to predict the perceived trustworthiness of host profile texts in the context of online lodging marketplaces. To achieve this goal, we developed a dataset of 4,180 Airbnb host profiles annotated with perceived trustworthiness. To the best of our knowledge, the dataset along with our models allow for the first computational evaluation of perceived trustworthiness of textual profiles, which are ubiquitous in online peer-to-peer marketplaces. We provide insights into the linguistic factors that contribute to higher and lower perceived trustworthiness for profiles of different lengths.",
"title": ""
},
{
"docid": "ea739d96ee0558fb23f0a5a020b92822",
"text": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.",
"title": ""
},
{
"docid": "0102748c7f9969fb53a3b5ee76b6eefe",
"text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classi cation. The objective function is composed of two terms. The rst term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The rst contribution is Cosine Similarity Metric Learning for Face Veri cation 3 that we have shown cosine similarity to be an e ective alternative to Euclidean distance in metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric signi cantly in most cases. Our method is di erent from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and e ective metric learning method. The rest of this paper is structured as follows. Section 2 presents CSML method in detail. Section 3 present how CSML can be applied to face veri cation. Experimental results are presented in section 4. Finally, conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is de ned as: CS(x, y) = x y ‖x‖ ‖y‖ Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and e ective. 1.2 Metric learning formulation Let {xi, yi, li}i=1 denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R and binary class labels li ∈ {1, 0} which indicates whether xi and yi match or not. The goal is to learn a linear transformation A : R → R(d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y,A) = (Ax) (Ay) ‖Ax‖ ‖Ay‖ = xAAy √ xTATAx √ yTATAy Speci cally, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by de ning the objective function. 4 Hieu V. Nguyen and Li Bai 1.3 Objective function First, we de ne positive and negative sample index sets Pos and Neg as:",
"title": ""
},
{
"docid": "f8d06c65acdbec0a41fe49fc4e7aef09",
"text": "We present an exhaustive review of research on automatic classification of sounds from musical instruments. Two different but complementary approaches are examined, the perceptual approach and the taxonomic approach. The former is targeted to derive perceptual similarity functions in order to use them for timbre clustering and for searching and retrieving sounds by timbral similarity. The latter is targeted to derive indexes for labeling sounds after cultureor user-biased taxonomies. We review the relevant features that have been used in the two areas and then we present and discuss different techniques for similarity-based clustering of sounds and for classification into pre-defined instrumental categories.",
"title": ""
},
{
"docid": "de668bf99b307f96580e294f7e58afcf",
"text": "Sliding window is one direct way to extend a successful recognition system to handle the more challenging detection problem. While action recognition decides only whether or not an action is present in a pre-segmented video sequence, action detection identifies the time interval where the action occurred in an unsegmented video stream. Sliding window approaches for action detection can however be slow as they maximize a classifier score over all possible sub-intervals. Even though new schemes utilize dynamic programming to speed up the search for the optimal sub-interval, they require offline processing on the whole video sequence. In this paper, we propose a novel approach for online action detection based on 3D skeleton sequences extracted from depth data. It identifies the sub-interval with the maximum classifier score in linear time. Furthermore, it is invariant to temporal scale variations and is suitable for real-time applications with low latency.",
"title": ""
}
] |
scidocsrr
|
bef4913ad67d8edf081e9902e97733d1
|
Sentence Simplification for Semantic Role Labeling
|
[
{
"docid": "5c6bdb80f470d7b9b0e2acd57cb23295",
"text": "We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.",
"title": ""
},
{
"docid": "06413e71fbbe809ee2ffbdb31dc8fe59",
"text": "This paper takes a critical look at the features used in the semantic role tagging literature and show that the information in the input, generally a syntactic parse tree, has yet to be fully exploited. We propose an additional set of features and our experiments show that these features lead to fairly significant improvements in the tasks we performed. We further show that different features are needed for different subtasks. Finally, we show that by using a Maximum Entropy classifier and fewer features, we achieved results comparable with the best previously reported results obtained with SVM models. We believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the stateof-the-art in semantic analysis.",
"title": ""
}
] |
[
{
"docid": "8733daeee2dd85345ce115cb1366f4b2",
"text": "We propose an interactive model, RuleViz, for visualizing the entire process of knowledge discovery and data mining. The model consists of ve components according to the main ingredients of the knowledge discovery process: original data visualization, visual data reduction, visual data preprocess, visual rule discovery, and rule visualization. The RuleViz model for visualizing the process of knowledge discovery is introduced and each component is discussed. Two aspects are emphasized, human-machine interaction and process visualization. The interaction helps the KDD system navigate through the enormous search spaces and recognize the intentions of the user, and the visualization of the KDD process helps users gain better insight into the multidimensional data, understand the intermediate results, and interpret the discovered patterns. According to the RuleViz model, we implement an interactive system, CViz, which exploits \\parallel coordinates\" technique to visualize the process of rule induction. The original data is visualized on the parallel coordinates, and can be interactively reduced both horizontally and vertically. Three approaches for discretizing numerical attributes are provided in the visual data preprocessing. CViz learns classi cation rules on the basis of a rule induction algorithm and presents the result as the algorithm proceeds. The discovered rules are nally visualized on the parallel coordinates with each rule being displayed as a directed \\polygon\", and the rule accuracy and quality are used to render the \\polygons\" and control the choice of rules to be displayed to avoid clutter. The CViz system has been experimented with the UCI data sets and synthesis data sets, and the results demonstrate that the RuleViz model and the implemented visualization system are useful and helpful for understanding the process of knowledge discovery and interpreting the nal results.",
"title": ""
},
{
"docid": "89b8f3b7efa011065cf28647b9984f4d",
"text": "Due to the abundance of 2D product images from the internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications. Recent works have addressed the single-view depth estimation problem by utilizing convolutional neural networks. In this paper, we show that exploring symmetry information, which is ubiquitous in man made objects, can significantly boost the quality of such depth predictions. Specifically, we propose a new convolutional neural network architecture to first estimate dense symmetric correspondences in a product image and then propose an optimization which utilizes this information explicitly to significantly improve the quality of single-view depth estimations. We have evaluated our approach extensively, and experimental results show that this approach outperforms state-of-the-art depth estimation techniques.",
"title": ""
},
{
"docid": "5183794d8bef2d8f2ee4048d75a2bd3c",
"text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"title": ""
},
{
"docid": "d6df3b864a18b81930a546f273b4c008",
"text": "Farmer needs alternatives for weed control due to the desire to reduce chemicals used in farming. However, conventional mechanical cultivation cannot selectively remove weeds and there are no selective herbicides for some weed situation. Since hand labor is costly, an automated weed control system could be feasible. A robotic weed control system can also reduce or eliminate the need for chemicals. Many attempts have been made to develop efficient algorithms for recognition and classification. Currently research is going on for developing new machine vision algorithms for automatic recognition and classification of many divers object groups. In this paper an algorithm is developed for automatic spray control system. The algorithm is based on erosion followed by dilation segmentation algorithm. This algorithm can detect weeds and also classify it. Currently the algorithm is tested on two types of weeds i.e. broad and narrow. The developed algorithm has been tested on these two types of weeds in the lab, which gives a very reliable performance. The algorithm is applied on 240 images stored in a database in the lab, of which 100 images were taken from broad leaf weeds and 100 were taken from narrow leaf weeds, and the remaining 40 were taken from no or little weeds. The result showed over 89% results",
"title": ""
},
{
"docid": "28823f624c037a8b54e9906c3b443f38",
"text": "Aging is associated with progressive losses in function across multiple systems, including sensation, cognition, memory, motor control, and affect. The traditional view has been that functional decline in aging is unavoidable because it is a direct consequence of brain machinery wearing down over time. In recent years, an alternative perspective has emerged, which elaborates on this traditional view of age-related functional decline. This new viewpoint--based upon decades of research in neuroscience, experimental psychology, and other related fields--argues that as people age, brain plasticity processes with negative consequences begin to dominate brain functioning. Four core factors--reduced schedules of brain activity, noisy processing, weakened neuromodulatory control, and negative learning--interact to create a self-reinforcing downward spiral of degraded brain function in older adults. This downward spiral might begin from reduced brain activity due to behavioral change, from a loss in brain function driven by aging brain machinery, or more likely from both. In aggregate, these interrelated factors promote plastic changes in the brain that result in age-related functional decline. This new viewpoint on the root causes of functional decline immediately suggests a remedial approach. Studies of adult brain plasticity have shown that substantial improvement in function and/or recovery from losses in sensation, cognition, memory, motor control, and affect should be possible, using appropriately designed behavioral training paradigms. Driving brain plasticity with positive outcomes requires engaging older adults in demanding sensory, cognitive, and motor activities on an intensive basis, in a behavioral context designed to re-engage and strengthen the neuromodulatory systems that control learning in adults, with the goal of increasing the fidelity, reliability, and power of cortical representations. Such a training program would serve a substantial unmet need in aging adults. Current treatments directed at age-related functional losses are limited in important ways. Pharmacological therapies can target only a limited number of the many changes believed to underlie functional decline. Behavioral approaches focus on teaching specific strategies to aid higher order cognitive functions, and do not usually aspire to fundamentally change brain function. A brain-plasticity-based training program would potentially be applicable to all aging adults with the promise of improving their operational capabilities. We have constructed such a brain-plasticity-based training program and conducted an initial randomized controlled pilot study to evaluate the feasibility of its use by older adults. A main objective of this initial study was to estimate the effect size on standardized neuropsychological measures of memory. We found that older adults could learn the training program quickly, and could use it entirely unsupervised for the majority of the time required. Pre- and posttesting documented a significant improvement in memory within the training group (effect size 0.41, p<0.0005), with no significant within-group changes in a time-matched computer using active control group, or in a no-contact control group. Thus, a brain-plasticity-based intervention targeting normal age-related cognitive decline may potentially offer benefit to a broad population of older adults.",
"title": ""
},
{
"docid": "04c029380ae73b75388ab02f901fda7d",
"text": "We present a novel method to solve image analogy problems [3]: it allows to learn the relation between paired images present in training data, and then generalize and generate images that correspond to the relation, but were never seen in the training set. Therefore, we call the method Conditional Analogy Generative Adversarial Network (CAGAN), as it is based on adversarial training and employs deep convolutional neural networks. An especially interesting application of that technique is automatic swapping of clothing on fashion model photos. Our work has the following contributions. First, the definition of the end-to-end trainable CAGAN architecture, which implicitly learns segmentation masks without expensive supervised labeling data. Second, experimental results show plausible segmentation masks and often convincing swapped images, given the target article. Finally, we discuss the next steps for that technique: neural network architecture improvements and more advanced applications.",
"title": ""
},
{
"docid": "548f43f2193cffc6711d8a15c00e8c3d",
"text": "Dither signals provide an effective way to compensate for nonlinearities in control systems. The seminal works by Zames and Shneydor, and more recently, by Mossaheb, present rigorous tools for systematic design of dithered systems. Their results rely, however, on a Lipschitz assumption relating to nonlinearity, and thus, do not cover important applications with discontinuities. This paper presents initial results on how to analyze and design dither in nonsmooth systems. In particular, it is shown that a dithered relay feedback system can be approximated by a smoothed system. Guidelines are given for tuning the amplitude and the period time of the dither signal, in order to stabilize the nonsmooth system.",
"title": ""
},
{
"docid": "cd5a267c1dac92e68ba677c4a2e06422",
"text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.",
"title": ""
},
{
"docid": "97a9f11cf142c251364da09a264026ab",
"text": "We consider techniques for permuting a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various criteria for this and consider their implementation as computer codes. We then indicate several cases where such a permutation can be useful. These include the solution of sparse equations by a direct method and by an iterative technique. We also consider its use in generating a preconditioner for an iterative method. We see that the effect of these reorderings can be dramatic although the best a priori strategy is by no means clear.",
"title": ""
},
{
"docid": "073f75cac0639b7ef266a8cb258cc283",
"text": "Present research was performed to generate normative values using Two-Point Discrimination test (TPD) for skin areas of dominant hand. Various studies revealed that TPD test demonstrate the integrity of tactile stimulation. In this study the test was executed on 270 students of art & design, medical and literary backgrounds of age between 20-23 years which were randomly selected from different colleges of Princess Noura Bint Abdulrahman University in Riyadh. TPD values were determined for distal palmar of the hand and tip of middle finger of their dominant hand parallel to the median nerve, which innervate the area of the hand and perpendicular on the fingertips. The examiner addressed the hand skills or any talent and visual acuity scale of the participants and used Michigan hand outcomes questionnaire (MHQ) to measure their perception of hand function, pain, satisfaction and work performance which was focused on all daily living activities of hand (ADL). To meet objectives of the study, test parameters were considered viz. ADL of one and both hands, overall function of hand, normal work; TPD values of distal palm of hand and tip of middle finger. Normative values of TPD test were obtained from range 2-7 mm among participants in distal palm of dominant hand with 4 mm average and 2-3 mm in the tip of middle finger with 2.6 mm average. Obtained normative values were analyzed statistically and compared with referred values. A plot between average TPD values shown that discriminatory sensations were found better in art and design college students. To measure the visual acuity values visual acuity test 6/6 vision was performed on participated students which represented the normal vision. Plot between values of TPD and visual acuity suggested that decrease level of visual acuity of students has better normative values which were obtained 3.4 mm of distal palm of dominant hand and 2.4 mm on the tip of the long finger. A significant relation between age of students and TPD values were estimated p<0.015 and p>0.01 for visual acuity. This study estimated that fingertips were the most sensitive part than palm of hand (p<0.01).",
"title": ""
},
{
"docid": "f69d669235d54858eb318b53cdadcb47",
"text": "We present a complete vision guided robot system for model based 3D pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light emitting diodes). By capturing images under different flashes and observing the shadows, depth edges or silhouettes in the scene are obtained. The silhouettes are segmented into different objects and each silhouette is matched across a database of object silhouettes in different poses to find the coarse 3D pose. The database is pre-computed using a Computer Aided Design (CAD) model of the object. The pose is refined using a fully projective formulation [ACB98] of Lowe’s model based pose estimation algorithm [Low91, Low87]. The estimated pose is transferred to robot coordinate system utilizing the handeye and camera calibration parameters, which allows the robot to pick the object. Our system outperforms conventional systems using 2D sensors with intensity-based features as well as 3D sensors. We handle complex ambient illumination conditions, challenging specular backgrounds, diffuse as well as specular objects, and texture-less objects, on which traditional systems usually fail. Our vision sensor is capable of computing depth edges in real time and is low cost. Our approach is simple and fast for practical implementation. We present real experimental results using our custom designed sensor mounted on a robot arm to demonstrate the effectiveness of our technique. International Journal of Robotics Research This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2009 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "e36eeb99b8d816d77b825daab4839b41",
"text": "3T MRI has become increasingly available for better imaging of interosseous ligaments, TFCC, and avascular necrosis compared with 1.5T MRI. This study assesses the sensitivity and specificity of 3T MRI compared with arthroscopy as the gold standard. Eighteen patients were examined with 3T MRI using coronal T1-TSE; PD-FS; and coronal, sagittal, and axial contrast-enhanced T1-FFE-FS sequences. Two musculoskeletal radiologists evaluated the images independently. Patients underwent diagnostic arthroscopy. The classifications of the cartilage lesions showed good correlations with the arthroscopy findings (κ = 0.8–0.9). In contrast to the arthroscopy, cartilage of the distal carpal row was very good and could be evaluated in all patients on MRI. The sensitivity for the TFCC lesion was 83%, and the specificity was 42% (radiologist 1) and 63% (radiologist 2). For the ligament lesions, the sensitivity and specificity were 75 and 100%, respectively, with a high interobserver agreement (κ = 0.8–0.9). 3T MRI proved to be of good value in diagnosing cartilage lesions, especially in the distal carpal row, whereas wrist arthroscopy provided therapeutic options. When evaluating the surgical therapeutical options, 3T MRI is a good diagnostic tool for pre-operatively evaluating the cartilage of the distal carpal row.",
"title": ""
},
{
"docid": "d7fe5cf79f7c88d623713976cc810493",
"text": "The battery is the most common method of energy storage in stand alone solar systems; the most popular being the valve regulated lead acid battery (VRLA) due to its low cost and ease of availability. Photovoltaics are not an ideal source for charging batteries as their output is heavily dependent on weather conditions. Therefore, when batteries are used in photovoltaic systems, the performance characteristics differ significantly from batteries used in more traditional applications and the battery life is usually shortened. In conditions of varying solar radiation and load profile the battery may experience a low state of charge (SOC). A low SOC for extended periods of time will cause increased sulphation, which severely reduces the life of the battery. Typically, steps are carried out to protect the battery and to charge the battery more effectively. Such methods include intermittent charging (IC), three stage charging (TSC) and interrupted charge control (ICC), among others. This paper quantifies the effectiveness of these three battery charging algorithms and evaluates their ability to maintain the battery at a high state of charge. The measurement setup is comprised of a solar simulator, which replicates the output of a large 50 W photovoltaic panel using a low power cell. Repeatable load and solar radiation profiles and temperature control are implemented using LabView so that identical operating conditions can be set up to compare the three battery charging systems.",
"title": ""
},
{
"docid": "ff53accc7e5342827104bf96a8d0e134",
"text": "The vision of a Smart Electric Grid relies critically on substantial advances in intelligent decentralized control mechanisms. We propose a novel class of autonomous broker agents for retail electricity trading that can operate in a wide range of Smart Electricity Markets, and that are capable of deriving long-term, profit-maximizing policies. Our brokers use Reinforcement Learning with function approximation, they can accommodate arbitrary economic signals from their environments, and they learn efficiently over the large state spaces resulting from these signals. We show how feature selection and regularization can be leveraged to automatically optimize brokers for particular market conditions, and demonstrate the performance of our design in extensive experiments using real-world energy market data.",
"title": ""
},
{
"docid": "35f439b86c07f426fd127823a45ffacf",
"text": "The paper concentrates on the fundamental coordination problem that requires a network of agents to achieve a specific but arbitrary formation shape. A new technique based on complex Laplacian is introduced to address the problems of which formation shapes specified by inter-agent relative positions can be formed and how they can be achieved with distributed control ensuring global stability. Concerning the first question, we show that all similar formations subject to only shape constraints are those that lie in the null space of a complex Laplacian satisfying certain rank condition and that a formation shape can be realized almost surely if and only if the graph modeling the inter-agent specification of the formation shape is 2-rooted. Concerning the second question, a distributed and linear control law is developed based on the complex Laplacian specifying the target formation shape, and provable existence conditions of stabilizing gains to assign the eigenvalues of the closed-loop system at desired locations are given. Moreover, we show how the formation shape control law is extended to achieve a rigid formation if a subset of knowledgable agents knowing the desired formation size scales the formation while the rest agents do not need to re-design and change their control laws.",
"title": ""
},
{
"docid": "d10dc295173202332700918cab02ac2b",
"text": "Markov logic networks (MLNs) have proven to be useful tools for reasoning about uncertainty in complex knowledge bases. In this paper, we extend MLNs with numerical constraints and present an efficient implementation in terms of a cutting plane method. This extension is useful for reasoning over uncertain temporal data. To show the applicability of this extension, we enrich log-linear description logics (DLs) with concrete domains (datatypes). Thereby, allowing to reason over weighted DLs with datatypes. Moreover, we use the resulting formalism to reason about temporal assertions in DBpedia, thus illustrating its practical use.",
"title": ""
},
{
"docid": "ecf7446713dc92394c16241aa31a8dba",
"text": "Accelerated graphics cards, or Graphics Processing Units (GPUs), have become ubiquitous in recent years. On the right kinds of problems, GPUs greatly surpass CPUs in terms of raw performance. However, because they are difficult to program, GPUs are used only for a narrow class of special-purpose applications; the raw processing power made available by GPUs is unused most of the time.\n This paper presents an extension to a Java JIT compiler that executes suitable code on the GPU instead of the CPU. Both static and dynamic features are used to decide whether it is feasible and beneficial to off-load a piece of code on the GPU. The paper presents a cost model that balances the speedup available from the GPU against the cost of transferring input and output data between main memory and GPU memory. The cost model is parameterized so that it can be applied to different hardware combinations. The paper also presents ways to overcome several obstacles to parallelization inherent in the design of the Java bytecode language: unstructured control flow, the lack of multi-dimensional arrays, the precise exception semantics, and the proliferation of indirect references.",
"title": ""
},
{
"docid": "64702593fd9271b7caa4178594f26469",
"text": "Microsoft operates the Azure SQL Database (ASD) cloud service, one of the dominant relational cloud database services in the market today. To aid the academic community in their research on designing and efficiently operating cloud database services, Microsoft is introducing the release of production-level telemetry traces from the ASD service. This telemetry data set provides, over a wide set of important hardware resources and counters, the consumption level of each customer database replica. The first release will be a multi-month time-series data set that includes the full cluster traces from two different ASD global regions.",
"title": ""
},
{
"docid": "979c6c841b3435c3a8995be7b506f6ea",
"text": "The immune response goes haywire during sepsis, a deadly condition triggered by infection. Richard S. Hotchkiss and his colleagues take the focus off of the prevailing view that the key aspect of this response is an exuberant inflammatory reaction. They assess recent human studies bolstering the notion that immunosuppression is also a major contributor to the disease. Many people with sepsis succumb to cardiac dysfunction, a process examined by Peter Ward. He showcases the factors that cause cardiomyocyte contractility to wane during the disease.",
"title": ""
}
] |
scidocsrr
|
c9d8115accf1ae7dfcc2b6f7d144df50
|
Gestures for industry Intuitive human-robot communication from human observation
|
[
{
"docid": "0cfa125deea633dd978478b0dd7d807d",
"text": "The purpose of this paper is to review research pertaining to the limitations and advantages of User-Robot Interaction for Unmanned-Vehicles (UVs) swarming. We identify and discuss results showing technologies that mitigate the observed problems such as specialized level of automation and human factors in controlling a swarm of mobile agents. In the paper, we first present an overview of definitions and important terms of swarm robotics and its application in multiple UVs systems. Then, the discussion of human-swam interactions in controlling of multiple vehicles is provided with consideration of varies limitations and design guidelines. Finally, we discussed challenges and potential research aspects in the area of Human-robot interaction design in large swarm of UVs and robots.",
"title": ""
},
{
"docid": "60c93fe8e910ca03f96e35cfaac2c748",
"text": "Mataric builds on two inspirations from biology in designing a humanoid robot motor control system: spinal fields and mirror neurons. Spinal fields code primitive motor behaviors that serve as building blocks for more complex behaviors in the organism. As such they somewhat resemble Mataric's \"basis behaviors\" which act as similar building blocks in complex robot behavior patterns. Mirror neurons are brain cell constructs that correspond to both perception and activation of motor activity, playing a central role in imitation.",
"title": ""
}
] |
[
{
"docid": "347ac68f3d7e95d4ab901146c2c2c919",
"text": "In this paper, we present a deep reinforcement learning (RL) framework for iterative dialog policy optimization in end-to-end task-oriented dialog systems. Popular approaches in learning dialog policy with RL include letting a dialog agent to learn against a user simulator. Building a reliable user simulator, however, is not trivial, often as difficult as building a good dialog agent. We address this challenge by jointly optimizing the dialog agent and the user simulator with deep RL by simulating dialogs between the two agents. We first bootstrap a basic dialog agent and a basic user simulator by learning directly from dialog corpora with supervised training. We then improve them further by letting the two agents to conduct task-oriented dialogs and iteratively optimizing their policies with deep RL. Both the dialog agent and the user simulator are designed with neural network models that can be trained end-to-end. Our experiment results show that the proposed method leads to promising improvements on task success rate and total task reward comparing to supervised training and single-agent RL training baseline models.",
"title": ""
},
{
"docid": "17bd801e028d168795620b590bb8cfce",
"text": "Video shot boundary detection (SBD) is the first and essential step for content-based video management and structural analysis. Great efforts have been paid to develop SBD algorithms for years. However, the high computational cost in the SBD becomes a block for further applications such as video indexing, browsing, retrieval, and representation. Motivated by the requirement of the real-time interactive applications, a unified fast SBD scheme is proposed in this paper. We adopted a candidate segment selection and singular value decomposition (SVD) to speed up the SBD. Initially, the positions of the shot boundaries and lengths of gradual transitions are predicted using adaptive thresholds and most non-boundary frames are discarded at the same time. Only the candidate segments that may contain the shot boundaries are preserved for further detection. Then, for all frames in each candidate segment, their color histograms in the hue-saturation-value) space are extracted, forming a frame-feature matrix. The SVD is then performed on the frame-feature matrices of all candidate segments to reduce the feature dimension. The refined feature vector of each frame in the candidate segments is obtained as a new metric for boundary detection. Finally, cut and gradual transitions are identified using our pattern matching method based on a new similarity measurement. Experiments on TRECVID 2001 test data and other video materials show that the proposed scheme can achieve a high detection speed and excellent accuracy compared with recent SBD schemes.",
"title": ""
},
{
"docid": "488d55fbf55e9a7eb6e1122ac262bc35",
"text": "Adult stem cells provide replacement and repair descendants for normal turnover or injured tissues. These cells have been isolated and expanded in culture, and their use for therapeutic strategies requires technologies not yet perfected. In the 1970s, the embryonic chick limb bud mesenchymal cell culture system provided data on the differentiation of cartilage, bone, and muscle. In the 1980s, we used this limb bud cell system as an assay for the purification of inductive factors in bone. In the 1990s, we used the expertise gained with embryonic mesenchymal progenitor cells in culture to develop the technology for isolating, expanding, and preserving the stem cell capacity of adult bone marrow-derived mesenchymal stem cells (MSCs). The 1990s brought us into the new field of tissue engineering, where we used MSCs with site-specific delivery vehicles to repair cartilage, bone, tendon, marrow stroma, muscle, and other connective tissues. In the beginning of the 21st century, we have made substantial advances: the most important is the development of a cell-coating technology, called painting, that allows us to introduce informational proteins to the outer surface of cells. These paints can serve as targeting addresses to specifically dock MSCs or other reparative cells to unique tissue addresses. The scientific and clinical challenge remains: to perfect cell-based tissue-engineering protocols to utilize the body's own rejuvenation capabilities by managing surgical implantations of scaffolds, bioactive factors, and reparative cells to regenerate damaged or diseased skeletal tissues.",
"title": ""
},
{
"docid": "f5c5e64f12a54780ef47355f38166a91",
"text": "It is well known that clothing fashion is a distinctive and often habitual trend in the style in which a person dresses. Clothing fashions are usually expressed with visual stimuli such as style, color, and texture. However, it is not clear which visual stimulus places higher/lower influence on the updating of clothing fashion. In this study, computer vision and machine learning techniques are employed to analyze the influence of different visual stimuli on clothing-fashion updates. Specifically, a classification-based model is proposed to quantify the influence of different visual stimuli, in which each visual stimulus’s influence is quantified by its corresponding accuracy in fashion classification. Experimental results demonstrate that, on clothing-fashion updates, the style holds a higher influence than the color, and the color holds a higher influence than the texture.",
"title": ""
},
{
"docid": "a05d1bfa5fb61c68c27605423b81c523",
"text": "This paper emphasis on hiding the information with all its probabilities in the Cloud Computing. We proposed the execution of steganography through clustering and implemented through K Strange Point clustering algorithm. There is a comparison done between the K Means Clustering Algorithm and our obtained result of K Strange Point Clustering Algorithm. We asset that our proposed methodology proved that it works better with the K Strange Points Clustering Algorithm. To hide data within the covering medium we use LSB algorithm. We finally proposed an enhanced scheme for best hiding capacity.",
"title": ""
},
{
"docid": "084b42f88a1cbd9ff9db2151aaf59465",
"text": "We present the first approach for 3D point-cloud to image translation based on conditional Generative Adversarial Networks (cGAN). The model handles multi-modal information sources from different domains, i.e. raw point-sets and images. The generator is capable of processing three conditions, whereas the point-cloud is encoded as raw point-set and camera projection. An image background patch is used as constraint to bias environmental texturing. A global approximation function within the generator is directly applied on the point-cloud (Point-Net). Hence, the representative learning model incorporates global 3D characteristics directly at the latent feature space. Conditions are used to bias the background and the viewpoint of the generated image. This opens up new ways in augmenting or texturing 3D data to aim the generation of fully individual images. We successfully evaluated our method on the KITTI and SunRGBD dataset with an outstanding object detection inception score.",
"title": ""
},
{
"docid": "5e14a79e4634445291d67c3d7f4ea617",
"text": "A a new type of word-of-mouth information, online consumer product review is an emerging market phenomenon that is playing an increasingly important role in consumers’ purchase decisions. This paper argues that online consumer review, a type of product information created by users based on personal usage experience, can serve as a new element in the marketing communications mix and work as free “sales assistants” to help consumers identify the products that best match their idiosyncratic usage conditions. This paper develops a normative model to address several important strategic issues related to consumer reviews. First, we show when and how the seller should adjust its own marketing communication strategy in response to consumer reviews. Our results reveal that if the review information is sufficiently informative, the two types of product information, i.e., the seller-created product attribute information and buyer-created review information, will interact with each other. For example, when the product cost is low and/or there are sufficient expert (more sophisticated) product users, the two types of information are complements, and the seller’s best response is to increase the amount of product attribute information conveyed via its marketing communications after the reviews become available. However, when the product cost is high and there are sufficient novice (less sophisticated) product users, the two types of information are substitutes, and the seller’s best response is to reduce the amount of product attribute information it offers, even if it is cost-free to provide such information. We also derive precise conditions under which the seller can increase its profit by adopting a proactive strategy, i.e., adjusting its marketing strategies even before consumer reviews become available. Second, we identify product/market conditions under which the seller benefits from facilitating such buyer-created information (e.g., by allowing consumers to post user-based product reviews on the seller’s website). Finally, we illustrate the importance of the timing of the introduction of consumer reviews available as a strategic variable and show that delaying the availability of consumer reviews for a given product can be beneficial if the number of expert (more sophisticated) product users is relatively large and cost of the product is low.",
"title": ""
},
{
"docid": "b499ded5996db169e65282dd8b65f289",
"text": "For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well-known to be difficult due to the curse of dimensionality. To overcome this complexity and making RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose a HRL architecture for learning robot’s movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. generating navigation trajectories, discrete RL is employed while using movement primitives. Given the movement planning and corresponding primitives, the policy for the movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a",
"title": ""
},
{
"docid": "f2e9083262c2680de3cf756e7960074a",
"text": "Social commerce is a new development in e-commerce generated by the use of social media to empower customers to interact on the Internet. The recent advancements in ICTs and the emergence of Web 2.0 technologies along with the popularity of social media and social networking sites have seen the development of new social platforms. These platforms facilitate the use of social commerce. Drawing on literature from marketing and information systems (IS) the author proposes a new model to develop our underocial media ocial networking site rust LS-SEM standing of social commerce using a PLS-SEM methodology to test the model. Results show that Web 2.0 applications are attracting individuals to have interactions as well as generate content on the Internet. Consumers use social commerce constructs for these activities, which in turn increase the level of trust and intention to buy. Implications, limitations, discussion, and future research directions are discussed at the end of the paper. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7577808bfd2deb179aba902ad09d6108",
"text": "In this paper, we summarize a novel approach to robotic rehabilitation that capitalizes on the benefits of patient intent and real-time assessment of impairment. Specifically, an upper-limb, physical human-robot interface (the MAHI EXO-II robotic exoskeleton) is augmented with a non-invasive brain-machine interface (BMI) to include the patient in the control loop, thereby making the therapy `active' and engaging patients across a broad spectrum of impairment severity in the rehabilitation tasks. Robotic measures of motor impairment are derived from real-time sensor data from the MAHI EXO-II and the BMI. These measures can be validated through correlation with widely used clinical measures and used to drive patient-specific therapy sessions adapted to the capabilities of the individual, with the MAHI EXO-II providing assistance or challenging the participant as appropriate to maximize rehabilitation outcomes. This approach to robotic rehabilitation takes a step towards the seamless integration of BMIs and intelligent exoskeletons to create systems that can monitor and interface with brain activity and movement. Such systems will enable more focused study of various issues in development of devices and rehabilitation strategies, including interpretation of measurement data from a variety of sources, exploration of hypotheses regarding large scale brain function during robotic rehabilitation, and optimization of device design and training programs for restoring upper limb function after stroke.",
"title": ""
},
{
"docid": "5f70d96454e4a6b8d2ce63bc73c0765f",
"text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.",
"title": ""
},
{
"docid": "bb4ae01ca527c74be94abcb5ae3dd9f0",
"text": "The practical deployment of massive multiple-input multiple-output (MIMO) in the future fifth generation (5G) wireless communication systems is challenging due to its high-hardware cost and power consumption. One promising solution to address this challenge is to adopt the low-resolution analog-to-digital converter (ADC) architecture. However, the practical implementation of such architecture is challenging due to the required complex signal processing to compensate the coarse quantization caused by low-resolution ADCs. Therefore, few high-resolution ADCs are reserved in the recently proposed mixed-ADC architecture to enable low-complexity transceiver algorithms. In contrast to previous works over Rayleigh fading channels, we investigate the performance of mixed-ADC massive MIMO systems over the Rician fading channel, which is more general for the 5G scenarios like Internet of Things. Specially, novel closed-form approximate expressions for the uplink achievable rate are derived for both cases of perfect and imperfect channel state information (CSI). With the increasing Rician $K$ -factor, the derived results show that the achievable rate will converge to a fixed value. We also obtain the power-scaling law that the transmit power of each user can be scaled down proportionally to the inverse of the number of base station (BS) antennas for both perfect and imperfect CSI. Moreover, we reveal the tradeoff between the achievable rate and the energy efficiency with respect to key system parameters, including the quantization bits, number of BS antennas, Rician $K$ -factor, user transmit power, and CSI quality. Finally, numerical results are provided to show that the mixed-ADC architecture can achieve a better energy-rate tradeoff compared with the ideal infinite-resolution and low-resolution ADC architectures.",
"title": ""
},
{
"docid": "163dbb128f1205f5e31bb3db5c0c17c8",
"text": "This empirical study investigates the contribution of different types of predictors to the purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website using both forward and backward variable-selection techniques, as well as Furnival and Wilson’s global score search algorithm to find the best subset of predictors. We contribute to the literature by using variables from four different categories in predicting online-purchasing behaviour: (1) general clickstream behaviour at the level of the visit, (2) more detailed clickstream information, (3) customer demographics, and (4) historical purchase behaviour. The results show that predictors from all four categories are retained in the final (best subset) solution indicating that clickstream behaviour is important when determining the tendency to buy. We clearly indicate the contribution in predictive power of variables that were never used before in online purchasing studies. Detailed clickstream variables are the most important ones in classifying customers according to their online purchase behaviour. In doing so, we are able to highlight the advantage of e-commerce retailers of being able to capture an elaborate list of customer information.",
"title": ""
},
{
"docid": "c0a1b48688cd0269b787a17fa5d15eda",
"text": "Animating human character has become an active research area in computer graphics. It is really important for development of virtual environment applications such as computer games and virtual reality. One of the popular methods to animate the character is by using motion graph. Since motion graph is the main focus of this research, we investigate the preliminary work of motion graph and discuss about the main components of motion graph like distance metrics and motion transition. These two components will be taken into consideration during the process of development of motion graph. In this paper, we will also present a general framework and future plan of this study.",
"title": ""
},
{
"docid": "af486334ab8cae89d9d8c1c17526d478",
"text": "Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.",
"title": ""
},
{
"docid": "20d5147f67fccce9ba3290793bf4d9b5",
"text": "Correspondence: David E Vance NB 456, 1701 University Boulevard, University of Alabama at Birmingham, Birmingham, AL 35294-1210, USA Tel +1 205 934 7589 Fax +1 205 996 7183 Email devance@uab.edu Abstract: The ability to critically evaluate the merits of a quantitative design research article is a necessary skill for practitioners and researchers of all disciplines, including nursing, in order to judge the integrity and usefulness of the evidence and conclusions made in an article. In general, this skill is automatic for many practitioners and researchers who already possess a good working knowledge of research methodology, including: hypothesis development, sampling techniques, study design, testing procedures and instrumentation, data collection and data management, statistics, and interpretation of findings. For graduate students and junior faculty who have yet to master these skills, completing a formally written article critique can be a useful process to hone such skills. However, a fundamental knowledge of research methods is still needed in order to be successful. Because there are few published examples of critique examples, this article provides the practical points of conducting a formally written quantitative research article critique while providing a brief example to demonstrate the principles and form.",
"title": ""
},
{
"docid": "1a7dad648167b1d213d3f26626aaa6e7",
"text": "This paper performs a comprehensive performance analysis of a family of non-data-aided feedforward carrier frequency offset estimators for QAM signals transmitted through AWGN channels in the presence of unknown timing error. The proposed carrier frequency offset estimators are asymptotically (large sample) nonlinear least-squares estimators obtained by exploiting the fourthorder conjugate cyclostationary statistics of the received signal and exhibit fast convergence rates (asymptotic variances on the order of O(N−3), where N stands for the number of samples). The exact asymptotic performance of these estimators is established and analyzed as a function of the received signal sampling frequency, signal-to-noise ratio, timing delay, and number of symbols. It is shown that in the presence of intersymbol interference effects, the performance of the frequency offset estimators can be improved significantly by oversampling (or fractionally sampling) the received signal. Finally, simulation results are presented to corroborate the theoretical performance analysis, and comparisons with the modified Cramér-Rao bound illustrate the superior performance of the proposed nonlinear least-squares carrier frequency offset estimators.",
"title": ""
},
{
"docid": "18243a9ac4961caef5434d3f043b5d78",
"text": "There is a number of automated sign language recognition systems proposed in the computer vision literature. The biggest drawback of all these systems is that every nation has their own culture oriented sign language. In other words, everyone needs to develop a specific sign language recognition system for their nation. Although the main building blocks of all signs are gestures and facial expressions in all sign languages, the nation specific requirements make it difficult to design a multinational recognition framework. In this paper, we focus on the advancements in computer assisted sign language recognition systems. More specifically, we discuss if the ongoing research may trigger the start of an international sign language design. We categorize and present a summary of the current sign language recognition systems. In addition, we present a list of publicly available databases that can be used for designing sign language recognition systems.",
"title": ""
},
{
"docid": "740c04c3521f06b040be03792224bf79",
"text": "Problem management is a critical and expensive element for delivering IT service management and touches various levels of managed IT infrastructure. While problem management has been mostly reactive, recent work is studying how to leverage large problem ticket information from similar IT infrastructures to probatively predict the onset of problems. Because of the sheer size and complexity of problem tickets, supervised learning algorithms have been the method of choice for problem ticket classification, relying on labeled (or pre-classified) tickets from one managed infrastructure to automatically create signatures for similar infrastructures. However, where there are insufficient preclassified data, leveraging human expertise to develop classification rules can be more efficient. In this paper, we describe a rule-based crowdsourcing approach, where experts can author classification rules and a social networkingbased platform (called xPad) is used to socialize and execute these rules by large practitioner communities. Using real data sets from several large IT delivery centers, we demonstrate that this approach balances between two key criteria: accuracy and cost effectiveness.",
"title": ""
},
{
"docid": "c1b79f29ce23b2d0ba97928831302e18",
"text": "Quality assessment of biometric fingerprint images is necessary to ensure high biometric performance in biometric recognition systems. We relate the quality of a fingerprint sample to the biometric performance to ensure an objective and performance oriented benchmark. The proposed quality metric is based on Gabor filter responses and is evaluated against eight contemporary quality estimation methods on four datasets using sample utility derived from the separation of genuine and imposter distributions as benchmark. The proposed metric shows performance and consistency approaching that of the composite NFIQ quality assessment algorithm and is thus a candidate for inclusion in a feature vector introducing the NFIQ 2.0 metric.",
"title": ""
}
] |
scidocsrr
|
1f233e94c0d1a9ad2461643902fa126d
|
Single-Point Active Alignment Method (SPAAM) for Optical See-Through HMD Calibration for Augmented Reality
|
[
{
"docid": "9abb159b6d7894745cbc7ee3aaae4084",
"text": "3D reconstruction of arterial vessels from planar radiographs obtained at several angles around the object has gained increasing interest. The motivating application has been interventional angiography. In order to obtain a three-dimensional reconstruction from a C-arm mounted X-Ray Image Intensifier (XRII) traditionally the trajectory of the source and the detector system is characterized and the pixel size is estimated. The main use of the imaging geometry characterization is to provide a correct 3D-2D mapping between the 3D voxels to be reconstructed and the 2D pixels on the radiographic images. We propose using projection matrices directly in a voxel driven backprojection for the reconstruction as opposed to that of computing all the geometrical parameters, including the imaging parameters. We discuss the simplicity of the entire calibration-reconstruction process, and the fact that it makes the computation of the pixel size, source to detector distance, and other explicit imaging parameters unnecessary. A usual step in the reconstruction is sinogram weighting, in which the projections containing corresponding data from opposing directions have to be weighted before they are filtered and backprojected into the object space. The rotation angle of the C-arm is used in the sinogram weighting. This means that the C-arm motion parameters must be computed from projection matrices. The numerical instability associated with the decomposition of the projection matrices into intrinsic and extrinsic parameters is discussed in the context. The paper then describes our method of computing motion parameters without matrix decomposition. Examples of the calibration results and the associated volume reconstruction are also shown. 1 Background and Justification Interventional angiography has mot iva ted m a n y research and development work on 3D reconstruct ion of arterial vessels f rom planar radiographs obta ined at several angles around the subject . The endovascular therapy of subarachnoid aneurysms using detachable Guglielmi coils is an applicat ion where an imaging",
"title": ""
}
] |
[
{
"docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2",
"text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.",
"title": ""
},
{
"docid": "c2c5f0f8b4647c651211b50411382561",
"text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.",
"title": ""
},
{
"docid": "5cda87e3e8f5e5794db7ec2a523eb807",
"text": "Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.",
"title": ""
},
{
"docid": "a33486dfec199cd51e885d6163082a96",
"text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.",
"title": ""
},
{
"docid": "7c2ac62211ee7070298796241751f027",
"text": "Recently, “platform ecosystem” has received attention as a key business concept. Sustainable growth of platform ecosystems is enabled by platform users supplying and/or demanding content from each other: e.g. Facebook, YouTube or Twitter. The importance and value of user data in platform ecosystems is accentuated since platform owners use and sell the data for their business. Serious concern is increasing about data misuse or abuse, privacy issues and revenue sharing between the different stakeholders. Traditional data governance focuses on generic goals and a universal approach to manage the data of an enterprise. It entails limited support for the complicated situation and relationship of a platform ecosystem where multiple participating parties contribute, use data and share profits. This article identifies data governance factors for platform ecosystems through literature review. The study then surveys the data governance state of practice of four platform ecosystems: Facebook, YouTube, EBay and Uber. Finally, 19 governance models in industry and academia are compared against our identified data governance factors for platform ecosystems to reveal the gaps and limitations.",
"title": ""
},
{
"docid": "9ece8dd1905fe0cba49d0fa8c1b21c62",
"text": "This paper describes the origins and history of multiple resource theory in accounting for di erences in dual task interference. One particular application of the theory, the 4-dimensional multiple resources model, is described in detail, positing that there will be greater interference between two tasks to the extent that they share stages (perceptual/cognitive vs response) sensory modalities (auditory vs visual), codes (visual vs spatial) and channels of visual information (focal vs ambient). A computational rendering of this model is then presented. Examples are given of how the model predicts interference di erences in operational environments. Finally, three challenges to the model are outlined regarding task demand coding, task allocation and visual resource competition.",
"title": ""
},
{
"docid": "6d471fcfa68cfb474f2792892e197a66",
"text": "The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the rightamount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion.",
"title": ""
},
{
"docid": "98aaa75d102a76840de89d4876643943",
"text": "DeviceNet and ControlNet are two well known industrial networks based on the CIP protocol (CIP = Control and Information Protocol). Both networks have been developed by Rockwell Automation, but are now owned and maintained by the two manufacturers organizations ODVA (Open DeviceNet Vendors Association) and ControlNet International. ODVA and ControlNet International have introduced the newest member of this family-Ethernet/IP (\"IP\" stands for \"Industrial Protocol\"). The paper describes the techniques and mechanisms that are used to implement a fully consistent set of services and data objects on a TCP/UDP/IP based Ethernet network.",
"title": ""
},
{
"docid": "596bb1265a375c68f0498df90f57338e",
"text": "The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbear-ing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy. 1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and method-ological critiques. 2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development , and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant. 3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the \" right time \" 4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when …",
"title": ""
},
{
"docid": "82f3404012290778ef6392ec240c358b",
"text": "A ball segway is a ballbot-type robot that has a car-like structure. It can move with three omnidirectional-wheel mechanisms to drive the ball while maintaining balance. To obtain stable balancing and transferring simultaneously of the 2D ball segway which is an underactuated system, a control law is designed based on energy method. The energy storage function is formulated to prove the passivity property of the system. Simulation results show the effectiveness of our approach.",
"title": ""
},
{
"docid": "739788a91526e41ea8db63837b61135d",
"text": "Much work in Natural Language Processing (NLP) has been for resource-rich languages, making generalization to new, less-resourced languages challenging. We present two approaches for improving generalization to lowresourced languages by adapting continuous word representations using linguistically motivated subword units: phonemes, morphemes and graphemes. Our method requires neither parallel corpora nor bilingual dictionaries and provides a significant gain in performance over previous methods relying on these resources. We demonstrate the effectiveness of our approaches onNamedEntity Recognition for four languages, namely Uyghur, Turkish, Bengali and Hindi, of which Uyghur and Bengali are low resource languages, and also perform experiments on Machine Translation. Exploiting subwords with transfer learning gives us a boost of +15.2 NER F1 for Uyghur and +9.7 F1 for Bengali. We also show improvements in the monolingual setting where we achieve (avg.) +3 F1 and (avg.) +1.35 BLEU.",
"title": ""
},
{
"docid": "d633f883c3dd61c22796a5774a56375c",
"text": "Neural networks are the topic of this paper. Neural networks are very powerful as nonlinear signal processors, but obtained results are often far from satisfactory. The purpose of this article is to evaluate the reasons for these frustrations and show how to make these neural networks successful. The following are the main challenges of neural network applications: (1) Which neural network architectures should be used? (2) How large should a neural network be? (3) Which learning algorithms are most suitable? The multilayer perceptron (MLP) architecture is unfortunately the preferred neural network topology of most researchers. It is the oldest neural network architecture, and it is compatible with all training softwares. However, the MLP topology is less powerful than other topologies such as bridged multilayer perceptron (BMLP), where connections across layers are allowed. The error-back propagation (EBP) algorithm is the most popular learning algorithm, but it is very slow and seldom gives adequate results. The EBP training process requires 100-1,000 times more iterations than the more advanced algorithms such as Levenberg-Marquardt (LM) or neuron by neuron (NBN) algorithms. What is most important is that the EBP algorithm is not only slow but often it is not able to find solutions for close-to-optimum neural networks. The paper describes and compares several learning algorithms.",
"title": ""
},
{
"docid": "d8de391287150bf580c8d613000d5b84",
"text": "3D integration consists of 3D IC packaging, 3D IC integration, and 3D Si integration. They are different and in general the TSV (through-silicon via) separates 3D IC packaging from 3D IC/Si integrations since the latter two use TSV but 3D IC packaging does not. TSV (with a new concept that every chip or interposer could have two surfaces with circuits) is the heart of 3D IC/Si integrations and is the focus of this investigation. The origin of 3D integration is presented. Also, the evolution, challenges, and outlook of 3D IC/Si integrations are discussed as well as their road maps are presented. Finally, a few generic, low-cost, and thermal-enhanced 3D IC integration system-in-packages (SiPs) with various passive TSV interposers are proposed.",
"title": ""
},
{
"docid": "2512c057299a86d3e461a15b67377944",
"text": "Compressive sensing (CS) is an alternative to Shan-non/Nyquist sampling for the acquisition of sparse or compressible signals. Instead of taking periodic samples, CS measures inner products with M random vectors, where M is much smaller than the number of Nyquist-rate samples. The implications of CS are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems, and sensor networks. In this paper, we propose and study a wideband compressive radio receiver (WCRR) architecture that can efficiently acquire and track FM and other narrowband signals that live within a wide frequency bandwidth. The receiver operates below the Nyquist rate and has much lower complexity than either a traditional sampling system or CS recovery system. Our methods differ from most standard approaches to the problem of CS recovery in that we do not assume that the signals of interest are confined to a discrete set of frequencies, and we do not rely on traditional recovery methods such as l1-minimization. Instead, we develop a simple detection system that identifies the support of the narrowband FM signals and then applies compressive filtering techniques based on discrete prolate spheroidal sequences to cancel interference and isolate the signals. Lastly, a compressive phase-locked loop (PLL) directly recovers the FM message signals.",
"title": ""
},
{
"docid": "30e95ce2c159984c37e4d67c8378689d",
"text": "Injuries are one of the major causes of both death and social inequalities in health in children. This paper reviews and reflects on two decades of empirical studies (1990 to 2009) published in the peer-reviewed medical and public health literature on socioeconomic disparities as regards the five main causes of childhood unintentional injuries (i.e., traffic, drowning, poisoning, burns, falls). Studies have been conducted at both area and individual levels, the bulk of which deal with road traffic, burn, and fall injuries. As a whole and for each injury cause separately, their results support the notion that low socioeconomic status is greatly detrimental to child safety but not in all instances and settings. In light of variations between causes and, within causes, between settings and countries, it is emphasized that the prevention of inequities in child safety requires not only that proximal risk factors of injuries be tackled but also remote and fundamental ones inherent to poverty.",
"title": ""
},
{
"docid": "212f128450a141b5b4c83c8c57d14677",
"text": "Local Authority road networks commonly include roads with different functional characteristics and a variety of construction types, which require maintenance solutions tailored to their needs. Given this background, on local road network, pavement management is founded on the experience of the agency engineers and is often constrained by low budgets and a variety of environmental and external requirements. This paper forms part of a research work that investigates the use of digital techniques for obtaining field data in order to increase safety and reduce labour cost requirements using a semi-automated distress collection and measurement system. More specifically, a definition of a distress detection procedure is presented which aims at producing a result complying more closely to the distress identification manuals and protocols. The process comprises the following two steps: Automated pavement image collection. Images are collected using the high speed digital acquisition system of the Mobile Laboratory designed and implemented by the Department of Civil and Environmental Engineering of the University of Catania; Distress Detection. By way of the Pavement Distress Analyser (PDA), a specialised software, images are adjusted to eliminate their optical distortion. Cracks, potholes and patching are automatically detected and subsequently classified by means of an operator assisted approach. An intense, experimental field survey has made it possible to establish that the procedure obtains more consistent distress measurements than a manual survey thus increasing its repeatability, reducing costs and increasing safety during the survey. Moreover, the pilot study made it possible to validate results coming from a survey carried out under normal traffic conditions, concluding that it is feasible to integrate the procedure into a roadway pavement management system.",
"title": ""
},
{
"docid": "e1a4e8b8c892f1e26b698cd9fd37c3db",
"text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasing using such networks for propagating spam. Existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network, can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.",
"title": ""
},
{
"docid": "f2940de35ce799b7585a71a7895a5096",
"text": "Permanent magnet (PM) brushless machines having magnets and windings in stator (the so-called stator-PM machines) have attracted more and more attention in the past decade due to its definite advantages of robust structure, high power density, high efficiency, etc. In this paper, an overview of the stator-PM machine is presented, with particular emphasis on concepts, operation principles, machine topologies, electromagnetic performance, and control strategies. Both brushless ac and dc operation modes are described. The key features of the machines, including the merits and drawbacks of the machines, are summarized. Moreover, the latest development of the machines is also discussed.",
"title": ""
},
{
"docid": "c856b76b0c2ce8320b9930e494ce9f4d",
"text": "We developed a telehealth system to administer an autism assessment remotely. The remote assessment system integrates videoconferencing, stimuli presentation, recording, image and video presentation, and electronic assessment scoring into an intuitive software platform. This is an advancement over existing technologies used in telemental health, which currently require several devices. The number of children, adolescents, and adults with autism spectrum disorders (ASDs) has increased dramatically over the past 20 years and is expected to continue to increase in coming years. In general, there are not many clinicians trained in either the diagnosis or treatment of adults with ASD. Given the number of adults with autism in need, a remote assessment system can potentially provide a solution to the lack of trained clinicians. The goal is to make the remote assessment system as close to face-to-face assessment as possible, yet versatile enough to support deployment in underserved areas. The primary challenge to achieving this goal is that the assessment requires social interaction that appears natural and fluid, so the remote system needs to be able to support fluid natural interaction. For this study we developed components to support this type of interaction and integrated these components into a system capable of supporting the entire autistic assessment protocol. We then implemented the system and evaluated the system on real patients. The results suggest that we have achieved our goal in developing a system with high-quality interaction that is easy to use.",
"title": ""
},
{
"docid": "96ea7f2a0fd0a630df87d22d846d1575",
"text": "BACKGROUND\nRecent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies.\n\n\nRESULTS\nWe analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches.\n\n\nCONCLUSION\nSystems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.",
"title": ""
}
] |
scidocsrr
|
ce0b03bc580595fea7aa27f5c83cc5dd
|
Iron and Magnet Losses and Torque Calculation of Interior Permanent Magnet Synchronous Machines Using Magnetic Equivalent Circuit
|
[
{
"docid": "316e4fa32d0b000e6f833d146a9e0d80",
"text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.",
"title": ""
},
{
"docid": "1eebba5c408031931629077bdfb2a37b",
"text": "This paper presents a lumped-parameter magnetic model for an interior permanent-magnet synchronous machine. The model accounts for the effects of saturation through a nonlinear reluctance-element network used to estimate the-axis inductance. The magnetic model is used to calculate inductance and torque in the presence of saturation. Furthermore, these calculations are compared to those from finite-element analysis with good agreement.",
"title": ""
}
] |
[
{
"docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05",
"text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.",
"title": ""
},
{
"docid": "d1ef00d0860b0cab22280415c17430cb",
"text": "The FreeBSD project has been engaged in ongoing work to provide scalable support for multi-processor computer systems since version 5. Sufficient progress has been made that the C library’s malloc(3) memory allocator is now a potential bottleneck for multi-threaded applications running on multiprocessor systems. In this paper, I present a new memory allocator that builds on the state of the art to provide scalable concurrent allocation for applications. Benchmarks indicate that with this allocator, memory allocation for multi-threaded applications scales well as the number of processors increases. At the same time, single-threaded allocation performance is similar to the previous allocator implementation.",
"title": ""
},
{
"docid": "f93b9c9bc2fbaf05c12d47440dfd9f06",
"text": "A patent-pending, energy-based method is presented for controlling a haptic interface system to ensure stable contact under a wide variety of operating conditions. System stability is analyzed in terms of the time-domain definition of passivity. We define a “Passivity Observer” (PO) which measures energy flow in and out of one or more subsystems in real-time software. Active behavior is indicated by a negative value of the PO at any time. We also define the “Passivity Controller” (PC), an adaptive dissipative element which, at each time sample, absorbs exactly the net energy output (if any) measured by the PO. The method is tested with simulation and implementation in the Excalibur haptic interface system. Totally stable operation was achieved under conditions such as stiffness 100 N/mm or time delays of 15 ms. The PO/PC method requires very little additional computation and does not require a dynamical model to be identified.",
"title": ""
},
{
"docid": "3112c11544c9dfc5dc5cf67e74e4ba4b",
"text": "How long does it take for the human visual system to process a complex natural image? Subjectively, recognition of familiar objects and scenes appears to be virtually instantaneous, but measuring this processing time experimentally has proved difficult. Behavioural measures such as reaction times can be used1, but these include not only visual processing but also the time required for response execution. However, event-related potentials (ERPs) can sometimes reveal signs of neural processing well before the motor output2. Here we use a go/no-go categorization task in which subjects have to decide whether a previously unseen photograph, flashed on for just 20 ms, contains an animal. ERP analysis revealed a frontal negativity specific to no-go trials that develops roughly 150 ms after stimulus onset. We conclude that the visual processing needed to perform this highly demanding task can be achieved in under 150 ms.",
"title": ""
},
{
"docid": "4949c4698dc9ce7fcea196def92afd06",
"text": "Argumentative text has been analyzed both theoretically and computationally in terms of argumentative structure that consists of argument components (e.g., claims, premises) and their argumentative relations (e.g., support, attack). Less emphasis has been placed on analyzing the semantic types of argument components. We propose a two-tiered annotation scheme to label claims and premises and their semantic types in an online persuasive forum, Change My View, with the long-term goal of understanding what makes a message persuasive. Premises are annotated with the three types of persuasive modes: ethos, logos, pathos, while claims are labeled as interpretation, evaluation, agreement, or disagreement, the latter two designed to account for the dialogical nature of our corpus. We aim to answer three questions: 1) can humans reliably annotate the semantic types of argument components? 2) are types of premises/claims positioned in recurrent orders? and 3) are certain types of claims and/or premises more likely to appear in persuasive messages than in nonpersuasive messages?",
"title": ""
},
{
"docid": "6b0cfbadd815713179b2312293174379",
"text": "In order to take full advantage of the SiC devices' high-temperature and high-frequency capabilities, a transformer isolated gate driver is designed for the SiC JFET phase leg module to achieve a fast switching speed of 26V/ns and a small cross-talking voltage of 4.2V in a 650V and 5A inductive load test. Transformer isolated gate drive circuits suitable for high-temperature applications are compared with respect to different criteria. Based on the comparison, an improved edge triggered gate drive topology is proposed. Then, using the proposed gate drive topology, special issues in the phase-leg gate drive design are discussed. Several strategies are implemented to improve the phase-leg gate drive performance and alleviate the cross-talking issue. Simulation and experimental results are given for verification purposes.",
"title": ""
},
{
"docid": "65ed76a0642b3dd58c99b07c35fc635d",
"text": "A novel dual-layer multibeam pillbox antenna with a slotted waveguide radiating part in substrate-integrated waveguide (SIW) technology is proposed. In contrast to previous works, the design goal is to have a multibeam antenna with arbitrary low sidelobes and at the same time a high crossing level between adjacent radiated beams. These two constraints cannot be satisfied simultaneously for any passive and lossless multibeam antenna systems with a single radiating aperture due to beam orthogonality. Here, this limitation is overcome using the “split aperture decoupling” method which consists in using two radiating apertures. Each aperture is associated with a pillbox quasi-optical system with several integrated feed horns in its focal plane so as to steer the main beam in the azimuthal plane. The antenna operates at 24.15 GHz and presents very good scanning performance over an angular sector of ±40°, with a good agreement between full-wave simulations and measurements. The crossover level between adjacent beams is about -3 dB with a sidelobe level lower than -24 dB for the central beam and better than -11 dB for the extreme beam positions. The isolation between feed horns in the same pillbox system is better than 20 dB.",
"title": ""
},
{
"docid": "79c0490d7c19c855812beb8e71e52c54",
"text": "Software engineering project management (SEPM) has been the focus of much recent attention because of the enormous penalties incurred during software development and maintenance resulting from poor management. To date there has been no comprehensive study performed to determine the most significant problems of SEPM, their relative importance, or the research directions necessary to solve them. We conducted a major survey of individuals from all areas of the computer field to determine the general consensus on SEPM problems. Twenty hypothesized problems were submitted to several hundred individuals for their opinions. The 294 respondents validated most of these propositions. None of the propositions was rejected by the respondents as unimportant. A number of research directions were indicated by the respondents which, if followed, the respondents believed would lead to solutions for these problems.",
"title": ""
},
{
"docid": "11306f5ab5083ab36ee70ccc384cce01",
"text": "Tinnitus is a phantom sound (ringing of the ears) that affects quality of life for millions around the world and is associated in most cases with hearing impairment. This symposium will consider evidence that deafferentation of tonotopically organized central auditory structures leads to increased neuron spontaneous firing rates and neural synchrony in the hearing loss region. This region covers the frequency spectrum of tinnitus sounds, which are optimally suppressed following exposure to band-limited noise covering the same frequencies. Cross-modal compensations in subcortical structures may contribute to tinnitus and its modulation by jaw-clenching and eye movements. Yet many older individuals with impaired hearing do not have tinnitus, possibly because age-related changes in inhibitory circuits are better preserved. A brain network involving limbic and other nonauditory regions is active in tinnitus and may be driven when spectrotemporal information conveyed by the damaged ear does not match that predicted by central auditory processing.",
"title": ""
},
{
"docid": "0e88f1e55c4162d5778f353336ac3eb9",
"text": "Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.",
"title": ""
},
{
"docid": "0592df75f5b0f0755bccf71f56bc326f",
"text": "Esports has emerged as a popular genre for players as well as spectators, supporting a global entertainment industry. Esports analytics has evolved to address the requirement for data-driven feedback, and is focused on cyber-athlete evaluation, strategy and prediction. Towards the latter, previous work has used match data from a variety of player ranks from hobbyist to professional players. However, professional players have been shown to behave differently than lower ranked players. Given the comparatively limited supply of professional data, a key question is thus whether mixed-rank match datasets can be used to create data-driven models which predict winners in professional matches and provide a simple in-game statistic for viewers and broadcasters. Here we show that, although there is a slightly reduced accuracy, mixed-rank datasets can be used to predict the outcome of professional matches, with suitably optimized configurations.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "d6b6cbfa8c872b9f9066ea7beda2d2e4",
"text": "Computer Science (CS) Unplugged activities have been deployed in many informal settings to present computing concepts in an engaging manner. To justify use in the classroom, however, it is critical for activities to have a strong educational component. For the past three years, we have been developing and refining a CS Unplugged curriculum for use in middle school classrooms. In this paper, we describe an assessment that maps questions from a comprehensive project to computational thinking (CT) skills and Bloom's Taxonomy. We present results from two different deployments and discuss limitations and implications of our approach.",
"title": ""
},
{
"docid": "a2d6cb5b7e083a959dd2f7596e036c60",
"text": "In this paper a control strategy and sensor concept for a two-wheeled self-balancing robot is proposed. First a mathematical model of the robot is derived using Lagrangian mechanics. Based on the model a full state feedback controller, in combination with two higher-level controls are deployed for stabilization and drive control. A gyroscope, an accelerometer and rotational encoders are used for state determination, introducing a new method of measurement data fusion for the accelerometer and the gyro by using a drift compensation controller. Furthermore measurement proceedings for the model parameters of a real prototype robot are suggested and the control for this robot is designed. The proposed mathematical model, as well as the control strategy are then verified by comparing the behavior of the constructed robot with model simulations.",
"title": ""
},
{
"docid": "07941e1f7a8fd0bbc678b641b80dc037",
"text": "This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. The conclusions yielded from this discussion can be summarized as follows: (1) most existent research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, while any possible benefit obtained is definitely at a cost of increase in computing burdens; (3)the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called “strong AI”, for which obstacles abound for obtaining pivotal progresses.",
"title": ""
},
{
"docid": "f10eb96de9181085e249fdca1f4a568d",
"text": "This paper argues that tracking, object detection, and model building are all similar activities. We describe a fully automatic system that builds 2D articulated models known as pictorial structures from videos of animals. The learned model can be used to detect the animal in the original video - in this sense, the system can be viewed as a generalized tracker (one that is capable of modeling objects while tracking them). The learned model can be matched to a visual library; here, the system can be viewed as a video recognition algorithm. The learned model can also be used to detect the animal in novel images - in this case, the system can be seen as a method for learning models for object recognition. We find that we can significantly improve the pictorial structures by augmenting them with a discriminative texture model learned from a texture library. We develop a novel texture descriptor that outperforms the state-of-the-art for animal textures. We demonstrate the entire system on real video sequences of three different animals. We show that we can automatically track and identify the given animal. We use the learned models to recognize animals from two data sets; images taken by professional photographers from the Corel collection, and assorted images from the Web returned by Google. We demonstrate quite good performance on both data sets. Comparing our results with simple baselines, we show that, for the Google set, we can detect, localize, and recover part articulations from a collection demonstrably hard for object recognition",
"title": ""
},
{
"docid": "53981a65161ff4cc6c892b986b9720d2",
"text": "Leadership is an important aspect of social organization that affects the processes of group formation, coordination, and decision-making in human societies, as well as in the social system of many other animal species. The ability to identify leaders based on their behavior and the subsequent reactions of others opens opportunities to explore how group decisions are made. Understanding who exerts influence provides key insights into the structure of social organizations. In this paper, we propose a simple yet powerful leadership inference framework extracting group coordination periods and determining leadership based on the activity of individuals within a group. We are able to not only identify a leader or leaders but also classify the type of leadership model that is consistent with observed patterns of group decision-making. The framework performs well in differentiating a variety of leadership models (e.g. dictatorship, linear hierarchy, or local influence). We propose five simple features that can be used to categorize characteristics of each leadership model, and thus make model classification possible. The proposed approach automatically (1) identifies periods of coordinated group activity, (2) determines the identities of leaders, and (3) classifies the likely mechanism by which the group coordination occurred. We demonstrate our framework on both simulated and real-world data: GPS tracks of a baboon troop and video-tracking of fish schools, as well as stock market closing price data of the NASDAQ index. The results of our leadership model are consistent with ground-truthed biological data and the framework finds many known events in financial data which are not otherwise reflected in the aggregate NASDAQ index. Our approach is easily generalizable to any coordinated activity data from interacting entities.",
"title": ""
},
{
"docid": "16192a1cf65f20afd768ff103cd9bad4",
"text": "In the last decade power electronic research focused on the power density maximization mainly to reduce initial systems costs [1]. In the field of data centers and telecom applications, the costs for powering and cooling exceed the purchasing cost in less than 2 years [2]. That causes the changing driving forces in the development of new power supplies to efficiency, while the power density should stay on a high level. The commonly used DC-DC converter in the power supply unit (PSU) for data centers and telecom applications are full bridge phase-shift converters since they meet the demands of high power and efficient power conversion, a compact design and the constant operation frequency allows a simple control and EMI design. The development of the converter with respect to high efficiency has a lot of degrees of freedom. An optimization procedure based on comprehensive analytical models leads to the optimal parameters (e.g. switching frequency, switching devices in parallel and transformer design) for the most efficient design. In this paper a 5kW, 400V–48⋖56V phase-shift PWM converter with LC-output filter is designed for highest efficiency (η ≥99%) with a volume limitation and the consideration of the part-load efficiency. The components dependency as well as the optimal design will be explained. The realized prototype design reaches a calculated efficiency of η = 99.2% under full load condition and a power density of ρ = 36W/in3 (2.2 kW/liter).",
"title": ""
},
{
"docid": "ac9f345fb7f4ec78d53bb31a9d2c248f",
"text": "Purpose: The details of a full simulation of an inline side-coupled 6 MV linear accelerator linac from the electron gun to the target are presented. Commissioning of the above simulation was performed by using the derived electron phase space at the target as an input into Monte Carlo studies of dose distributions within a water tank and matching the simulation results to measurement data. This work is motivated by linac-MR studies, where a validated full linac simulation is first required in order to perform future studies on linac performance in the presence of an external magnetic field. Methods: An electron gun was initially designed and optimized with a 2D finite difference program using Child’s law. The electron gun simulation served as an input to a 6 MV linac waveguide simulation, which consisted of a 3D finite element radio-frequency field solution within the waveguide and electron trajectories determined from particle dynamics modeling. The electron gun design was constrained to match the cathode potential and electron gun current of a Varian 600C, while the linac waveguide was optimized to match the measured target current. Commissioning of the full simulation was performed by matching the simulated Monte Carlo dose distributions in a water tank to measured distributions. Results: The full linac simulation matched all the electrical measurements taken from a Varian 600C and the commissioning process lead to excellent agreements in the dose profile measurements. Greater than 99% of all points met a 1%/1mm acceptance criterion for all field sizes analyzed, with the exception of the largest 40 40 cm2 field for which 98% of all points met the 1%/1mm acceptance criterion and the depth dose curves matched measurement to within 1% deeper than 1.5 cm depth. The optimized energy and spatial intensity distributions, as given by the commissioning process, were determined to be non-Gaussian in form for the inline side-coupled 6 MV linac simulated. Conclusions: An integrated simulation of an inline side-coupled 6 MV linac has been completed and benchmarked matching all electrical and dosimetric measurements to high accuracy. The results showed non-Gaussian spatial intensity and energy distributions for the linac modeled. © 2010 American Association of Physicists in Medicine. DOI: 10.1118/1.3397455",
"title": ""
},
{
"docid": "4edb2f050920936a939a86e26e2afdb2",
"text": "This paper proposes an automatic gesture recognition approach for Indian Sign Language (ISL). Indian sign language uses both hands to represent each alphabet. We propose an approach which addresses local-global ambiguity identification, inter-class variability enhancement for each hand gesture. Hand region is segmented and detected by YCbCr skin color model reference. The shape, texture and finger features of each hand are extracted using Principle Curvature Based Region (PCBR) detector, Wavelet Packet Decomposition (WPD-2) and complexity defects algorithms respectively for hand posture recognition process. To classify each hand posture, multi class non linear support vector machines (SVM) is used, for which a recognition rate of 91.3% is achieved. Dynamic gestures are classified using Dynamic Time Warping (DTW) with the trajectory feature vector with 86.3% recognition rate. The performance of the proposed approach is analyzed with well known classifiers like SVM, KNN & DTW. Experimental results are compared with the conventional and existing algorithms to prove the better efficiency of the proposed approach.",
"title": ""
}
] |
scidocsrr
|
2867d76191bae17a5c2954c4183c5ae8
|
TPOT-RL Applied to Network Routing
|
[
{
"docid": "e7a6bb8f63e35f3fb0c60bdc26817e03",
"text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management",
"title": ""
}
] |
[
{
"docid": "fdfcab6236d74bcc882fde104f457d83",
"text": "In this study, direct and indirect effects of self-esteem, daily internet use and social media addiction to depression levels of adolescents have been investigated by testing a model. This descriptive study was conducted with 1130 students aged between 12 and 18 who are enrolled at different schools in southern region of Aegean. In order to collect data, “Children's Depression Inventory”, “Rosenberg Self-esteem Scale” and “Social Media Addiction Scale” have been used. In order to test the hypotheses Pearson's correlation and structural equation modeling were performed. The findings revealed that self-esteem and social media addiction predict %20 of the daily internet use. Furthermore, while depression was associated with self-esteem and daily internet use directly, social media addiction was affecting depression indirectly. Tested model was able to predict %28 of the depression among adolescents.",
"title": ""
},
{
"docid": "ba0fab446ba760a4cb18405a05cf3979",
"text": "Please c Disaster Summary. — This study aims at understanding the role of education in promoting disaster preparedness. Strengthening resilience to climate-related hazards is an urgent target of Goal 13 of the Sustainable Development Goals. Preparing for a disaster such as stockpiling of emergency supplies or having a family evacuation plan can substantially minimize loss and damages from natural hazards. However, the levels of household disaster preparedness are often low even in disaster-prone areas. Focusing on determinants of personal disaster preparedness, this paper investigates: (1) pathways through which education enhances preparedness; and (2) the interplay between education and experience in shaping preparedness actions. Data analysis is based on face-to-face surveys of adults aged 15 years in Thailand (N = 1,310) and the Philippines (N = 889, female only). Controlling for socio-demographic and contextual characteristics, we find that formal education raises the propensity to prepare against disasters. Using the KHB method to further decompose the education effects, we find that the effect of education on disaster preparedness is mainly mediated through social capital and disaster risk perception in Thailand whereas there is no evidence that education is mediated through observable channels in the Philippines. This suggests that the underlying mechanisms explaining the education effects are highly context-specific. Controlling for the interplay between education and disaster experience, we show that education raises disaster preparedness only for those households that have not been affected by a disaster in the past. Education improves abstract reasoning and anticipation skills such that the better educated undertake preventive measures without needing to first experience the harmful event and then learn later. In line with recent efforts of various UN agencies in promoting education for sustainable development, this study provides a solid empirical evidence showing positive externalities of education in disaster risk reduction. 2017TheAuthors.PublishedbyElsevierLtd.This is an open access article under theCCBY-NC-ND license (http://creativecommons.org/ licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "0252e39c527c3694da09dac7f136c403",
"text": "It is a generally accepted fact that Off-the-shelf OCR engines do not perform well in unconstrained scenarios like natural scene imagery, where text appears among the clutter of the scene. However, recent research demonstrates that a conventional shape-based OCR engine would be able to produce competitive results in the end-to-end scene text recognition task when provided with a conveniently preprocessed image. In this paper we confirm this finding with a set of experiments where two off-the-shelf OCR engines are combined with an open implementation of a state-of-the-art scene text detection framework. The obtained results demonstrate that in such pipeline, conventional OCR solutions still perform competitively compared to other solutions specifically designed for scene text recognition.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "106af615d24a2867fbfa78d963f64cab",
"text": "The recent development of calibration algorithms has been driven into two major directions: (1) an increasing accuracy of mathematical approaches and (2) an increasing flexibility in usage by reducing the dependency on calibration objects. These two trends, however, seem to be contradictory since the overall accuracy is directly related to the accuracy of the pose estimation of the calibration object and therefore demanding large objects, while an increased flexibility leads to smaller objects or noisier estimation methods. The method presented in this paper aims to resolves this problem in two steps: First, we derive a simple closed-form solution with a shifted focus towards the equation of translation that only solves for the necessary hand-eye transformation. We show that it is superior in accuracy and robustness compared to traditional approaches. Second, we decrease the dependency on the calibration object to a single 3D-point by using a similar formulation based on the equation of translation which is much less affected by the estimation error of the calibration object's orientation. Moreover, it makes the estimation of the orientation obsolete while taking advantage of the higher accuracy and robustness from the first solution, resulting in a versatile method for continuous hand-eye calibration.",
"title": ""
},
{
"docid": "904c285d720f51905c5378821199aac6",
"text": "To evaluate the use of Labrafil® M2125CS as a lipid vehicle for danazol. Further, the possibility of predicting the in vivo behavior with a dynamic in vitro lipolysis model was evaluated. Danazol (28 mg/kg) was administered orally to rats in four formulations: an aqueous suspension, two suspensions in Labrafil® M2125CS (1 and 2 ml/kg) and a solution in Labrafil® M2125CS (4 ml/kg). The obtained absolute bioavailabilities of danazol were 1.5 ± 0.8%; 7.1 ± 0.6%; 13.6 ± 1.4% and 13.3 ± 3.4% for the aqueous suspension, 1, 2 and 4 ml Labrafil® M2125CS per kg respectively. Thus administration of danazol with Labrafil® M2125CS resulted in up to a ninefold increase in the bioavailability, and the bioavailability was dependent on the Labrafil® M2125CS dose. In vitro lipolysis of the formulations was able to predict the rank order of the bioavailability from the formulations, but not the absorption profile of the in vivo study. The bioavailability of danazol increased when Labrafil® M2125CS was used as a vehicle, both when danazol was suspended and solubilized in the vehicle. The dynamic in vitro lipolysis model could be used to rank the bioavailabilities of the in vivo data.",
"title": ""
},
{
"docid": "76e6c05e41c4e6d3c70c8fedec5c323b",
"text": "Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations.",
"title": ""
},
{
"docid": "a7f1565d548359c9f19bed304c2fbba6",
"text": "Handwritten character generation is a popular research topic with various applications. Various methods have been proposed in the literatures which are based on methods such as pattern recognition, machine learning, deep learning or others. However, seldom method could generate realistic and natural handwritten characters with a built-in determination mechanism to enhance the quality of generated image and make the observers unable to tell whether they are written by a person. To address these problems, in this paper, we proposed a novel generative adversarial network, multi-scale multi-class generative adversarial network (MSMC-CGAN). It is a neural network based on conditional generative adversarial network (CGAN), and it is designed for realistic multi-scale character generation. MSMC-CGAN combines the global and partial image information as condition, and the condition can also help us to generate multi-class handwritten characters. Our model is designed with unique neural network structures, image features and training method. To validate the performance of our model, we utilized it in Chinese handwriting generation, and an evaluation method called mean opinion score (MOS) was used. The MOS results show that MSMC-CGAN achieved good performance.",
"title": ""
},
{
"docid": "0122057f9fd813efd9f9e0db308fe8d9",
"text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "d45b084040e5f07d39f622fc3543e10b",
"text": "Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3× faster. The code is publicly available at: https://github.com/lzzcd001/OSLSM.",
"title": ""
},
{
"docid": "1447a32a7274ac972d79bbd02c25ecb2",
"text": "Refactoring is a software engineering technique that, by applying a series of small behavior-preserving transformations, can improve a software system’s design, readability and extensibility. Code smells are signs that indicate that source code might need refactoring. The goal of this thesis project was to develop a prototype of a code smell detection plug-in for the Eclipse IDE framework. In earlier research by Van Emden and Moonen, a tool was developed to detect code smells in Java source code and visualize them in graph views. CodeNose, the plug-in prototype created in this thesis project, presents code smells in the Tasks View in Eclipse, similar to the way compiler errors and warnings are presented. These code smell reports provide feedback about the quality of a software system. CodeNose uses the Eclipse JDT parser to build abstract syntax trees that represent the source code. A tree visitor detects primitive code smells and collects derived smell aspects, which are written to a fact database and passed to a relational algebra calculator, the Grok tool. The results of the calculations on these facts can be used to infer more complex code smells. In a case study, the plug-in was tested by performing the code smell detection process on an existing software system. We present the case study results, focusing at performance of the plug-in and usefulness of the code smells that were detected.",
"title": ""
},
{
"docid": "9e804b49534bedcde2611d70c40b255d",
"text": "PURPOSE\nScreening tool of older people's prescriptions (STOPP) and screening tool to alert to right treatment (START) criteria were first published in 2008. Due to an expanding therapeutics evidence base, updating of the criteria was required.\n\n\nMETHODS\nWe reviewed the 2008 STOPP/START criteria to add new evidence-based criteria and remove any obsolete criteria. A thorough literature review was performed to reassess the evidence base of the 2008 criteria and the proposed new criteria. Nineteen experts from 13 European countries reviewed a new draft of STOPP & START criteria including proposed new criteria. These experts were also asked to propose additional criteria they considered important to include in the revised STOPP & START criteria and to highlight any criteria from the 2008 list they considered less important or lacking an evidence base. The revised list of criteria was then validated using the Delphi consensus methodology.\n\n\nRESULTS\nThe expert panel agreed a final list of 114 criteria after two Delphi validation rounds, i.e. 80 STOPP criteria and 34 START criteria. This represents an overall 31% increase in STOPP/START criteria compared with version 1. Several new STOPP categories were created in version 2, namely antiplatelet/anticoagulant drugs, drugs affecting, or affected by, renal function and drugs that increase anticholinergic burden; new START categories include urogenital system drugs, analgesics and vaccines.\n\n\nCONCLUSION\nSTOPP/START version 2 criteria have been expanded and updated for the purpose of minimizing inappropriate prescribing in older people. These criteria are based on an up-to-date literature review and consensus validation among a European panel of experts.",
"title": ""
},
{
"docid": "2cf13325c8901f25418f6c6266106075",
"text": "Knowledge tracing—where a machine models the knowledge of a student as they interact with coursework—is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.",
"title": ""
},
{
"docid": "01f3f3b3693940963f5f2c4f71585a2a",
"text": "BACKGROUND\nStress and anxiety are widely considered to be causally related to alcohol craving and consumption, as well as development and maintenance of alcohol use disorder (AUD). However, numerous preclinical and human studies examining effects of stress or anxiety on alcohol use and alcohol-related problems have been equivocal. This study examined relationships between scores on self-report anxiety, anxiety sensitivity, and stress measures and frequency and intensity of recent drinking, alcohol craving during early withdrawal, as well as laboratory measures of alcohol craving and stress reactivity among heavy drinkers with AUD.\n\n\nMETHODS\nMedia-recruited, heavy drinkers with AUD (N = 87) were assessed for recent alcohol consumption. Anxiety and stress levels were characterized using paper-and-pencil measures, including the Beck Anxiety Inventory (BAI), the Anxiety Sensitivity Index-3 (ASI-3), and the Perceived Stress Scale (PSS). Eligible subjects (N = 30) underwent alcohol abstinence on the Clinical Research Unit; twice daily measures of alcohol craving were collected. On day 4, subjects participated in the Trier Social Stress Test; measures of cortisol and alcohol craving were collected.\n\n\nRESULTS\nIn multivariate analyses, higher BAI scores were associated with lower drinking frequency and reduced drinks/drinking day; in contrast, higher ASI-3 scores were associated with higher drinking frequency. BAI anxiety symptom and ASI-3 scores also were positively related to Alcohol Use Disorders Identification Test total scores and AUD symptom and problem subscale measures. Higher BAI and ASI-3 scores but not PSS scores were related to greater self-reported alcohol craving during early alcohol abstinence. Finally, BAI scores were positively related to laboratory stress-induced cortisol and alcohol craving. In contrast, the PSS showed no relationship with most measures of alcohol craving or stress reactivity.\n\n\nCONCLUSIONS\nOverall, clinically oriented measures of anxiety compared with perceived stress were more strongly associated with a variety of alcohol-related measures in current heavy drinkers with AUD.",
"title": ""
},
{
"docid": "4fb76fb4daa5490dca902c9177c9b465",
"text": "An improved faster region-based convolutional neural network (R-CNN) [same object retrieval (SOR) faster R-CNN] is proposed to retrieve the same object in different scenes with few training samples. By concatenating the feature maps of shallow and deep convolutional layers, the ability of Regions of Interest (RoI) pooling to extract more detailed features is improved. In the training process, a pretrained CNN model is fine-tuned using a query image data set, so that the confidence score can identify an object proposal to the object level rather than the classification level. In the query process, we first select the ten images for which the object proposals have the closest confidence scores to the query object proposal. Then, the image for which the detected object proposal has the minimum cosine distance to the query object proposal is considered as the query result. The proposed SOR faster R-CNN is applied to our Coke cans data set and three public image data sets, i.e., Oxford Buildings 5k, Paris Buildings 6k, and INS 13. The experimental results confirm that SOR faster R-CNN has better identification performance than fine-tuned faster R-CNN. Moreover, SOR faster R-CNN achieves much higher accuracy for detecting low-resolution images than the fine-tuned faster R-CNN on the Coke cans (0.094 mAP higher), Oxford Buildings (0.043 mAP higher), Paris Buildings (0.078 mAP higher), and INS 13 (0.013 mAP higher) data sets.",
"title": ""
},
{
"docid": "6176a2fd4e07d0c72a53c6207af305ca",
"text": "At present, Bluetooth Low Energy (BLE) is dominantly used in commercially available Internet of Things (IoT) devices -- such as smart watches, fitness trackers, and smart appliances. Compared to classic Bluetooth, BLE has been simplified in many ways that include its connection establishment, data exchange, and encryption processes. Unfortunately, this simplification comes at a cost. For example, only a star topology is supported in BLE environments and a peripheral (an IoT device) can communicate with only one gateway (e.g. a smartphone, or a BLE hub) at a set time. When a peripheral goes out of range, it loses connectivity to a gateway, and cannot connect and seamlessly communicate with another gateway without user interventions. In other words, BLE connections do not get automatically migrated or handed-off to another gateway. In this paper, we propose a system which brings seamless connectivity to BLE-capable mobile IoT devices in an environment that consists of a network of gateways. Our framework ensures that unmodified, commercial off-the-shelf BLE devices seamlessly and securely connect to a nearby gateway without any user intervention.",
"title": ""
},
{
"docid": "87c3c488f027ef96b1c2a096c122d1b4",
"text": "We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford (Balcan et al., 2006). This represents the first nontrivial general-purpose upper bound on label complexity in the agnostic PAC model.",
"title": ""
},
{
"docid": "8415585161d51b500f99aa36650a67d9",
"text": "A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet- or micro-electro-mechanical-system (MEMS)-type EEG sensors. However, those traditional sensors have uncomfortable disadvantage and require conductive gel and skin preparation on the part of the user. Therefore, acquiring the EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; however, they were able to provide good conductivity and were able to acquire EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance on the forehead site. We have also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control the outside world provides an approach for researching rehabilitation engineering.",
"title": ""
}
] |
scidocsrr
|
009f989d9d5125f1e7df7885758f59bc
|
Training Hierarchical Feed-Forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks
|
[
{
"docid": "7eec1e737523dc3b78de135fc71b058f",
"text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches",
"title": ""
},
{
"docid": "305efd1823009fe79c9f8ff52ddb5724",
"text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.",
"title": ""
}
] |
[
{
"docid": "95157487d671b0b2a9d49c09eab58a72",
"text": "The ARCS Model of Motivational Design has been used myriad times to design motivational instructions that focus on attention, relevance, confidence and satisfaction in order to motivate students. The Instructional Materials Motivation Survey (IMMS) is a 36-item situational measure of people’s reactions to instructional materials in the light of the ARCS model. Although the IMMS has been used often, both as a pretest and a posttest tool serving as either a motivational needs assessment prior to instruction or a measure of people’s reactions to instructional materials afterward, the IMMS so far has not been validated extensively, taking statistical and theoretical aspects of the survey into account. This paper describes such an extensive validation study, for which the IMMS was used in a self-directed instructional setting aimed at working with technology (a cellular telephone). Results of structural equation modeling show that the IMMS can be reduced to 12 items. This Reduced Instructional Materials Motivation Survey IMMS (RIMMS) is preferred over the original IMMS. The parsimonious RIMMS measures the four constructs attention, relevance, confidence and satisfaction of the ARCS model well, and reflects its conditional nature. Introduction In the field of educational science, the ARCS Model of Motivational Design (Keller, 1983, 1987a, b, c, 1999, 2010; Keller & Kopp, 1987) has been used myriad times to apply motivational strategies to instructional materials, and to test their effects. Although the model was originally designed to influence student motivation in a classic learning setting, with face-to-face interaction between teacher and students, by now it has also been thoroughly applied to and tested in other settings like computer-assisted instruction, and computer-based and distance education (eg, Astleitner & Hufnagl, 2003; Bellon & Oates, 2002; Chang & Lehman, 2002; Chyung, Winiecki & Fenner, 1999; Keller, 1999; Shellnut, Knowlton & Savage, 1999; Song & Keller, 2001). And in recent years, the ARCS model has been applied to and tested in self-directed, print-based instructional settings, applying it to cell phone user instructions and testing for effects British Journal of Educational Technology Vol 46 No 1 2015 204–218 doi:10.1111/bjet.12138 © 2014 British Educational Research Association on users likely to benefit from motivational instructions (see Loorbach, Karreman & Steehouder, 2007, 2013, b for elaborate descriptions). Keller’s publications on the ARCS model show a similar expansion of its scope as other publications over time. In his early work, Keller (1987a, b, c) speaks of “students’ motivation to learn,” “education,” “course,” “lesson” and “classroom setting.” In 1999, he states that “it is one thing to design for learner motivation in a classroom setting where teachers or facilitators can respond to changes as soon as they sense them. It is a greater challenge to make self-directed learning environments responsive to the motivational requirements of learners” (p. 39). The ARCS Model of Motivational Design The ARCS Model of Motivational Design is based on an extensive review of the motivational literature, which led to a clustering of motivational concepts into four constructs: (A)ttention, (R)elevance, (C)onfidence and (S)atisfaction (Keller, 2010, p. 44). According to Keller (2010, pp. 44–45), the following goals have to be met for people to be motivated to learn: (A) People’s curiosities and interests should be stimulated and sustained. 
(R) Before people can be motivated to learn, they will have to believe that the instruction is related to important personal goals or motives and feel connected to the setting. (C) Even if people believe the content is relevant and they are curious to learn it, they still might not be appropriately motivated due to too little or too much confidence, or expectancy for Practitioner Notes What is already known about this topic • The ARCS Model of Motivational Design has been used myriad times to design motivational instructions in a wide range of educational settings (from traditional to computer-assisted instruction and distance education). • The Instructional Materials Motivation Survey (IMMS) has been used often to measure people’s reactions to (motivational) instructions. • Several researchers have attempted to validate the IMMS before. What this paper adds • This paper describes an extensive validation of the IMMS. • The IMMS is validated using the results of studies that applied the ARCS model and the IMMS to motivational instructions in a self-directed instructional setting. Participants (seniors between 60 and 70) were likely to benefit from motivational instructions and used the instructions in a self-directed instructional setting. • This validation results in a reduced version of the IMMS that consists of 12 items: the RIMMS. Implications for practice and/or policy • In self-directed instructional settings with users likely to benefit from motivational instructions, the parsimonious Reduced Instructional Materials Motivation Survey (RIMMS) is preferred over the original IMMS to measure people’s reactions to motivational instructions. • The RIMMS measures the four constructs attention, relevance, confidence and satisfaction of the ARCS model well. • The RIMMS also reflects the conditional nature of the underlying ARCS model. Validation of the IMMS 205 © 2014 British Educational Research Association success. They could have well-established fears of the topic, skill, or situation that prevent them from learning effectively. Or, at the other extreme, they might believe incorrectly that they already know it and overlook important details in the learning activities. Keller (2010, p. 46) states that being successful in achieving these first three motivational goals (attention, relevance and confidence) results in people being motivated to learn. (S) To have a continuing desire to learn, people must have feelings of satisfaction with the process or results of the learning experience. The above description of the ARCS model is visualized in Figure 1. Practical strategies described in the ARCS theory and in the Motivational Tactics Checklist (see Keller, 2010, pp. 287–291) can be used to achieve each of the four goals. To measure whether these goals have been met and to measure learners’ motivational needs prior to applying ARCS strategies, Keller (2010, pp. 277–286) designed the Instructional Materials Motivation Survey (IMMS), a 36-item situational measure of people’s reactions to instructional materials in the light of the ARCS model. As such, it measures people’s scores on an attention, relevance, confidence and satisfaction construct, cumulatively resulting in an overall motivation score. 
Effects of motivational instructions in a self-directed instructional setting A previous study on the effects of ARCS-based motivational instructions in a self-directed instructional setting tested for effects of three motivational manipulations in cell phone user instructions respectively, focusing on attention, relevance and confidence (see Loorbach et al, 2007 for an elaborate description). Seventy-nine Dutch senior participants between 60 and 70 years of age filled out questionnaires and performed three tasks with a cell phone, using either a control version or one of three motivational versions of the user instructions. Participants in this study were seniors because they belong to a user group that is known for being less experienced with relatively new technology devices like cellular telephones (Schwender & Köhler, 2006) and are therefore more likely to benefit from motivational instructions. Results showed that participants using either a version of the instructions that focused on relevance or a version that focused on confidence performed more tasks correctly than participants in the control condition, using instructions without motivational manipulations. This study also showed positive effects of motivational instructions on behavior-deduced motivation. For this measure, we only included participants who did not complete the task, and we checked whether they felt too frustrated and gave up prematurely, or persisted and their efforts were stopped by the researcher after they had been working on the task for 15 minutes. Results showed that participants using the confidence-focused user instructions persisted in working on the third task, where they had to edit a contact’s phone number, more often (p < .05). A tendency toward a similar effect existed for the first task, where they had to change the cell phone’s ring tone (p < .10). So even though the ARCS Model of Motivational Design was not originally designed to increase user motivation in self-directed instructional settings, its potential was discovered for such settings. Its potential was especially discovered concerning confidence-focused instructions, which Figure 1: ARCS model of motivational design 206 British Journal of Educational Technology Vol 46 No 1 2015 © 2014 British Educational Research Association positively affected participants’ task performance and their persistence in trying to complete tasks. This is in line with the expectations of the ARCS model: when it was first developed, Keller (1987c) stated that “differences in confidence, the third major component of the model, can influence a student’s persistence and accomplishment” (p. 5). However, even though the behavior of participants using the control version and participants using the confidence version of the instructions statistically differed in persistence, these findings were non-existent according to their motivation scores on the IMMS. A possible explanation is that participants who used the motivational instructions did have an increased motivation level but were not aware of it, and therefore a self-report measure like the IMMS did not pick up on it, even though their behavior showed otherwise. According to Song and Keller (2",
"title": ""
},
{
"docid": "3171587b5b4554d151694f41206bcb4e",
"text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.",
"title": ""
},
{
"docid": "c3c5931200ff752d8285cc1068e779ee",
"text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.",
"title": ""
},
{
"docid": "cd9e90ba83156a2c092d68022c4227c9",
"text": "The difficulty of integer factorization is fundamental to modern cryptographic security using RSA encryption and signatures. Although a 512-bit RSA modulus was first factored in 1999, 512-bit RSA remains surprisingly common in practice across many cryptographic protocols. Popular understanding of the difficulty of 512-bit factorization does not seem to have kept pace with developments in computing power. In this paper, we optimize the CADO-NFS and Msieve implementations of the number field sieve for use on the Amazon Elastic Compute Cloud platform, allowing a non-expert to factor 512-bit RSA public keys in under four hours for $75. We go on to survey the RSA key sizes used in popular protocols, finding hundreds or thousands of deployed 512-bit RSA keys in DNSSEC, HTTPS, IMAP, POP3, SMTP, DKIM, SSH, and PGP.",
"title": ""
},
{
"docid": "bb770a0cb686fbbb4ea1adb6b4194967",
"text": "Parental refusal of vaccines is a growing a concern for the increased occurrence of vaccine preventable diseases in children. A number of studies have looked into the reasons that parents refuse, delay, or are hesitant to vaccinate their child(ren). These reasons vary widely between parents, but they can be encompassed in 4 overarching categories. The 4 categories are religious reasons, personal beliefs or philosophical reasons, safety concerns, and a desire for more information from healthcare providers. Parental concerns about vaccines in each category lead to a wide spectrum of decisions varying from parents completely refusing all vaccinations to only delaying vaccinations so that they are more spread out. A large subset of parents admits to having concerns and questions about childhood vaccinations. For this reason, it can be helpful for pharmacists and other healthcare providers to understand the cited reasons for hesitancy so they are better prepared to educate their patients' families. Education is a key player in equipping parents with the necessary information so that they can make responsible immunization decisions for their children.",
"title": ""
},
{
"docid": "8fabb9fe465fe70753fe4f035e4513f1",
"text": "Gait energy images (GEIs) and its variants form the basis of many recent appearance-based gait recognition systems. The GEI combines good recognition performance with a simple implementation, though it suffers problems inherent to appearance-based approaches, such as being highly view dependent. In this paper, we extend the concept of the GEI to 3D, to create what we call the gait energy volume, or GEV. A basic GEV implementation is tested on the CMU MoBo database, showing improvements over both the GEI baseline and a fused multi-view GEI approach. We also demonstrate the efficacy of this approach on partial volume reconstructions created from frontal depth images, which can be more practically acquired, for example, in biometric portals implemented with stereo cameras, or other depth acquisition systems. Experiments on frontal depth images are evaluated on an in-house developed database captured using the Microsoft Kinect, and demonstrate the validity of the proposed approach.",
"title": ""
},
{
"docid": "6131fdbfe28aaa303b1ee4c29a65f766",
"text": "Destination prediction is an essential task for many emerging location based applications such as recommending sightseeing places and targeted advertising based on destination. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, existing techniques using this approach suffer from the “data sparsity problem”, i.e., the available historical trajectories is far from being able to cover all possible trajectories. This problem considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) algorithm to address the data sparsity problem. SubSyn algorithm first decomposes historical trajectories into sub-trajectories comprising two neighbouring locations, and then connects the sub-trajectories into “synthesised” trajectories. The number of query trajectories that can have predicted destinations is exponentially increased by this means. Experiments based on real datasets show that SubSyn algorithm can predict destinations for up to ten times more query trajectories than a baseline algorithm while the SubSyn prediction algorithm runs over two orders of magnitude faster than the baseline algorithm. In this paper, we also consider the privacy protection issue in case an adversary uses SubSyn algorithm to derive sensitive location information of users. We propose an efficient algorithm to select a minimum number of locations a user has to hide on her trajectory in order to avoid privacy leak. Experiments also validate the high efficiency of the privacy protection algorithm.",
"title": ""
},
{
"docid": "a009519d1ed930d40db593542e7c3e0d",
"text": "With the increasing adoption of NoSQL data base systems like MongoDB or CouchDB more and more applications store structured data according to a non-relational, document oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards as well as tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches which facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptions to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.",
"title": ""
},
{
"docid": "46829dde25c66191bcefae3614c2dd3f",
"text": "User-generated content (UGC) on the Web, especially on social media platforms, facilitates the association of additional information with digital resources; thus, it can provide valuable supplementary content. However, UGC varies in quality and, consequently, raises the challenge of how to maximize its utility for a variety of end-users. This study aims to provide researchers and Web data curators with comprehensive answers to the following questions: What are the existing approaches and methods for assessing and ranking UGC? What features and metrics have been used successfully to assess and predict UGC value across a range of application domains? What methods can be effectively employed to maximize that value? This survey is composed of a systematic review of approaches for assessing and ranking UGC: results are obtained by identifying and comparing methodologies within the context of short text-based UGC on the Web. Existing assessment and ranking approaches adopt one of four framework types: the community-based framework takes into consideration the value assigned to content by a crowd of humans, the end-user--based framework adapts and personalizes the assessment and ranking process with respect to a single end-user, the designer-based framework encodes the software designer’s values in the assessment and ranking method, and the hybrid framework employs methods from more than one of these types. This survey suggests a need for further experimentation and encourages the development of new approaches for the assessment and ranking of UGC.",
"title": ""
},
{
"docid": "45ea8e1e27f6c687d957af561aca5188",
"text": "Impedance matching networks for nonlinear devices such as amplifiers and rectifiers are normally very challenging to design, particularly for broadband and multiband devices. A novel design concept for a broadband high-efficiency rectenna without using matching networks is presented in this paper for the first time. An off-center-fed dipole antenna with relatively high input impedance over a wide frequency band is proposed. The antenna impedance can be tuned to the desired value and directly provides a complex conjugate match to the impedance of a rectifier. The received RF power by the antenna can be delivered to the rectifier efficiently without using impedance matching networks; thus, the proposed rectenna is of a simple structure, low cost, and compact size. In addition, the rectenna can work well under different operating conditions and using different types of rectifying diodes. A rectenna has been designed and made based on this concept. The measured results show that the rectenna is of high power conversion efficiency (more than 60%) in two wide bands, which are 0.9–1.1 and 1.8–2.5 GHz, for mobile, Wi-Fi, and ISM bands. Moreover, by using different diodes, the rectenna can maintain its wide bandwidth and high efficiency over a wide range of input power levels (from 0 to 23 dBm) and load values (from 200 to 2000 Ω). It is, therefore, suitable for high-efficiency wireless power transfer or energy harvesting applications. The proposed rectenna is general and simple in structure without the need for a matching network hence is of great significance for many applications.",
"title": ""
},
{
"docid": "8564762ca6de73d72236f94bc5fe0a7a",
"text": "The current work examines the phenomenon of Virtual Interpersonal Touch (VIT), people touching one another via force-feedback haptic devices. As collaborative virtual environments become utilized more effectively, it is only natural that interactants will have the ability to touch one another. In the current work, we used relatively basic devices to begin to explore the expression of emotion through VIT. In Experiment 1, participants utilized a 2 DOF force-feedback joystick to express seven emotions. We examined various dimensions of the forces generated and subjective ratings of the difficulty of expressing those emotions. In Experiment 2, a separate group of participants attempted to recognize the recordings of emotions generated in Experiment 1. In Experiment 3, pairs of participants attempted to communicate the seven emotions using physical handshakes. Results indicated that humans were above chance when recognizing emotions via VIT, but not as accurate as people expressing emotions through non-mediated handshakes. We discuss a theoretical framework for understanding emotions expressed through touch as well as the implications of the current findings for the utilization of VIT in human computer interaction. Virtual Interpersonal Touch 3 Virtual Interpersonal Touch: Expressing and Recognizing Emotions through Haptic Devices There are many reasons to support the development of collaborative virtual environments (Lanier, 2001). One major criticism of collaborative virtual environments, however, is that they do not provide emotional warmth and nonverbal intimacy (Mehrabian, 1967; Sproull & Kiesler, 1986). In the current work, we empirically explore the augmentation of collaborative virtual environments with simple networked haptic devices to allow for the transmission of emotion through virtual interpersonal touch (VIT). EMOTION IN SOCIAL INTERACTION Interpersonal communication is largely non-verbal (Argyle, 1988), and one of the primary purposes of nonverbal behavior is to communicate subtleties of emotional states between individuals. Clearly, if social interaction mediated by virtual reality and other digital communication systems is to be successful, it will be necessary to allow for a full range of emotional expressions via a number of communication channels. In face-to-face communication, we express emotion primarily through facial expressions, voice, and through touch. While emotion is also communicated through other nonverbal gestures such as posture and hand signals (Cassell & Thorisson, in press; Collier, 1985), in the current review we focus on emotions transmitted via face, voice and touch. In a review of the emotion literature, Ortony and Turner (1990) discuss the concept of basic emotions. These fundamental emotions (e.g., fear) are the building blocks of other more complex emotions (e.g., jealousy). Furthermore, many people argue that these emotions are innate and universal across cultures (Plutchik, 2001). In terms of defining the set of basic emotions, previous work has provided very disparate sets of such emotions. Virtual Interpersonal Touch 4 For example, Watson (1930) has limited his list to “hardwired” emotions such as fear, love, and rage. On the other hand, Ekman & Friesen (1975) have limited their list to those discernable through facial movements such as anger, disgust, fear, joy, sadness, and surprise. The psychophysiology literature adds to our understanding of emotions by suggesting a fundamental biphasic model (Bradley, 2000). 
In other words, emotions can be thought of as variations on two axes: hedonic valence and intensity. Pleasurable emotions have high hedonic valences, while negative emotions have low hedonic valences. This line of research suggests that while emotions may appear complex, much of the variation may nonetheless be mapped onto a two-dimensional scale. This notion also dovetails with research in embodied cognition that has shown that human language is spatially organized (Richardson, Spivey, Edelman, & Naples, 2001). For example, certain words are judged to be more “horizontal” while other words are judged to be more “vertical”. In the current work, we were not concerned predominantly with what constitutes a basic or universal emotion. Instead, we attempted to identify emotions that could be transmitted through virtual touch, and provide an initial framework for classifying and interpreting those digital haptic emotions. To this end, we reviewed theoretical frameworks that have attempted to accomplish this goal with other nonverbal behaviors— most notably facial expressions and paralinguistics. Facial Expressions Research in facial expressions has received much attention from social scientists for the past fifty years. Some researchers argue that the face is a portal to one’s internal mental state (Ekman & Friesen 1978; Izard, 1971). These scholars argue that when an emotion occurs, a series of biological events follow that produce changes in a person—one of those manifestations is movement in facial muscles. Moreover, these changes in facial expressions are also correlated with other physiological changes such as heart rate or blood pressure (Ekman & Friesen, 1976). Alternatively, other researchers argue that the correspondence of facial expressions to actual emotion is not as high as many think. For example, Fridland (1994) believes that people use facial expressions as a tool to strategically elicit behaviors from others or to accomplish social goals in interaction. Similarly, other researchers argue that not all emotions have corresponding facial expressions (Cacioppo et al., 1997). Nonetheless, most scholars would agree that there is some value to examining facial expressions of another if one’s goal is to gain an understanding of that person’s current mental state. Ekman’s groundbreaking work on emotions has provided tools to begin forming dimensions on which to classify his set of six basic emotions (Ekman & Friesen, 1975). Figure 1 provides a framework for the facial classifications developed by those scholars.",
"title": ""
},
{
"docid": "87f5d228f5c2da8bdb4308eb8aa0fefe",
"text": "The idea that teaching others is a powerful way to learn is intuitively compelling and supported in the research literature. We have developed computer-based, domain-independent Teachable Agents that students can teach using a visual representation. The students query their agent to monitor their learning and problem solving behavior. This motivates the students to learn more so they can teach their agent to perform better. This paper presents a teachable agent called Betty’s Brain that combines learning by teaching with self-regulated learning feedback to promote deep learning and understanding in science domains. A study conducted in a 5 grade science classroom compared three versions of the system: a version where the students were taught by an agent, a baseline learning by teaching version, and a learning by teaching version where students received feedback on self-regulated learning strategies and some domain content. In the other two systems, students received feedback primarily on domain content. Our results indicate that all three groups showed learning gains during a main study where students learnt about river ecosystems, but the two learning by teaching groups performed better than the group that was taught. These differences persisted in the transfer study, but the gap between the baseline learning by teaching and self-regulated learning group decreased. However, there are indications that self-regulated learning feedback better prepared students to learn in new domains, even when they no longer had access to the self-regulation environment.",
"title": ""
},
{
"docid": "4a8c8c09fe94cddbc9cadefa014b1165",
"text": "A solution to trajectory-tracking control problem for a four-wheel-steering vehicle (4WS) is proposed using sliding-mode approach. The advantage of this controller over current control procedure is that it is applicable to a large class of vehicles with single or double steering and to a tracking velocity that is not necessarily constant. The sliding-mode approach make the solutions robust with respect to errors and disturbances, as demonstrated by the simulation results.",
"title": ""
},
{
"docid": "2bc6775efec2b59ad35b9f4841c7f3cf",
"text": "Cryptographic schemes for computing on encrypted data promise to be a fundamental building block of cryptography. The way one models such algorithms has a crucial effect on the efficiency and usefulness of the resulting cryptographic schemes. As of today, almost all known schemes for fully homomorphic encryption, functional encryption, and garbling schemes work by modeling algorithms as circuits rather than as Turing machines. As a consequence of this modeling, evaluating an algorithm over encrypted data is as slow as the worst-case running time of that algorithm, a dire fact for many tasks. In addition, in settings where an evaluator needs a description of the algorithm itself in some “encoded” form, the cost of computing and communicating such encoding is as large as the worst-case running time of this algorithm. In this work, we construct cryptographic schemes for computing Turing machines on encrypted data that avoid the worst-case problem. Specifically, we show: – An attribute-based encryption scheme for any polynomial-time Turing machine and Random Access Machine (RAM). – A (single-key and succinct) functional encryption scheme for any polynomialtime Turing machine. – A reusable garbling scheme for any polynomial-time Turing machine. These three schemes have the property that the size of a key or of a garbling for a Turing machine is very short: it depends only on the description of the Turing machine and not on its running time. Previously, the only existing constructions of such schemes were for depth-d circuits, where all the parameters grow with d. Our constructions remove this depth d restriction, have short keys, and moreover, avoid the worst-case running time. – A variant of fully homomorphic encryption scheme for Turing machines, where one can evaluate a Turing machine M on an encrypted input x in time that is dependent on the running time of M on input x as opposed to the worst-case runtime of M . Previously, such a result was known only for a restricted class of Turing machines and it required an expensive preprocessing phase (with worst-case runtime); our constructions remove both restrictions. Our results are obtained via a reduction from SNARKs (Bitanski et al) and an “extractable” variant of witness encryption, a scheme introduced by Garg et al.. We prove that the new assumption is secure in the generic group model. We also point out the connection between (the variant of) witness encryption and the obfuscation of point filter functions as defined by Goldwasser and Kalai in 2005.",
"title": ""
},
{
"docid": "782c8958fa9107b8d1087fe0c79de6ee",
"text": "Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutes. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions. The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.",
"title": ""
},
{
"docid": "85cf0bddbedc5836f41033a16274c1e2",
"text": "Intuitively, for a training sample xi with its associated label yi, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying xi, which becomes easier as the higher layers distill xi into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.",
"title": ""
},
{
"docid": "0b0f1f0ae9b1efc0c63071f705c1575e",
"text": "This paper presents the effects of design parameters on output efficiency and input impedance of RF-to-DC Dick-son charge pumps by varying input parameters in simulation. Diode parasitics and input impedance mismatch between the charge pump and antenna are found to significantly decrease the effectiveness of charge pumps, while stage capacitance size appears to have little effect on efficiency. Off-the-shelf diodes are also compared through simulation to find which diodes perform best at each ISM frequency band and various power levels. The investigation is summarized by guidelines to assist designers in developing efficient RF-to-DC Dickson charge pumps as well as concluding with a design methodology.",
"title": ""
},
{
"docid": "872d1f216a463b354221be8b68d35d96",
"text": "Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.",
"title": ""
},
{
"docid": "49e6e256d7e5e7dacb366635f1a3fd8b",
"text": "We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state management, high-level reasoning, and robot control. In a user evaluation, 31 participants interacted with the bartender in a range of social situations. Most customers successfully obtained a drink from the bartender in all scenarios, and the factors that had the greatest impact on subjective satisfaction were task success and dialogue efficiency.",
"title": ""
},
{
"docid": "e4b02298a2ff6361c0a914250f956911",
"text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.",
"title": ""
}
] |
scidocsrr
|
cfb47af999dca1c2850f5151ce5d3bc0
|
Energy efficiency in industry 4.0 using SDN
|
[
{
"docid": "2a5710aeaba7e39c5e08c1a5310c89f6",
"text": "We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context were it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task.",
"title": ""
},
{
"docid": "1857eb0d2d592961bd7c1c2f226df616",
"text": "The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios.",
"title": ""
},
{
"docid": "3bde1d560a93c776f179eb91fd4675ce",
"text": "The development of Industry 4.0 will be accompanied by changing tasks and demands for the human in the factory. As the most flexible entity in cyber-physical production systems, workers will be faced with a large variety of jobs ranging from specification and monitoring to verification of production strategies. Through technological support it is guaranteed that workers can realize their full potential and adopt the role of strategic decision-makers and flexible problem-solvers. The use of established interaction technologies and metaphors from the consumer goods market seems to be promising. This paper demonstrates solutions for the technological assistance of workers, which implement the representation of a cyber-physical world and the therein occurring interactions in the form of intelligent user interfaces. Besides technological means, the paper points out the requirement for adequate qualification strategies, which will create the required, inter-disciplinary understanding for Industry 4.0.",
"title": ""
},
{
"docid": "ab55e142e1250e2056feecb9ac0ccc1d",
"text": "The next generation of industrial advancement which is referred as Industry 4.0 aims to inter-connect and computerize the traditional industrys such as manufacturing. The objective in Industry 4.0 is to make the factories smart enough in terms of improved adaptability, resource efficiency as well as the improved integration of supply and demand processes between the factories. Wireless communication will play a key role in enabling the Industry 4.0 systems and technologies. In this paper we focus the discussion on some of the key wireless communication challenges that will need to be met for the Industry 4.0 era. We look at how the 5th generation of communication standard may address these requirements. For machine to machine communication the three main design criterions that can be considered are latency, longevity and the reliability of communication. We take an example of WiFi communication, and benchmark it against the requirements, so as to emphasize the improvements required in wireless protocols.",
"title": ""
}
] |
[
{
"docid": "8538dea1bed2a699e99e5d89a91c5297",
"text": "Friction is primary disturbance in motion control. Different types of friction cause diminution of original torque in a DC motor, such as static friction, viscous friction etc. By some means if those can be determined and compensated, the friction effect from the DC motor can be neglected. It would be a great advantage for control systems. Authors have determined the types of frictions as well as frictional coefficients and suggested a unique way of compensating the friction in a DC motor using Disturbance Observer Method which is used to determine the disturbance torques acting on a DC motor. In simulation approach, the method is modelled using MATLAB and the results have been obtained and analysed. The block diagram consists with DC motor model with DOB and RTOB. Practical approach of the implemented block diagram is shown by the obtained results. It is discussed the possibility of applying this to real life applications.",
"title": ""
},
{
"docid": "e003dd850e8ca294a45e2bec122945c3",
"text": "In this paper, we address the problem of determining optimal hyper-parameters for support vector machines (SVMs). The standard way for solving the model selection problem is to use grid search. Grid search constitutes an exhaustive search over a pre-defined discretized set of possible parameter values and evaluating the cross-validation error until the best is found. We developed a bi-level optimization approach to solve the model selection problem for linear and kernel SVMs, including the extension to learn several kernel parameters. Using this method, we can overcome the discretization of the parameter space using continuous optimization, and the complexity of the method only increases linearly with the number of parameters (instead of exponentially using grid search). In experiments, we determine optimal hyper-parameters based on different smooth estimates of the cross-validation error and find that only very few iterations of bi-level optimization yield good classification rates.",
"title": ""
},
{
"docid": "4d089acf0f7e1bae074fc4d9ad8ee7e3",
"text": "The consequences of exodontia include alveolar bone resorption and ultimately atrophy to basal bone of the edentulous site/ridges. Ridge resorption proceeds quickly after tooth extraction and significantly reduces the possibility of placing implants without grafting procedures. The aims of this article are to describe the rationale behind alveolar ridge augmentation procedures aimed at preserving or minimizing the edentulous ridge volume loss. Because the goal of these approaches is to preserve bone, exodontia should be performed to preserve as much of the alveolar process as possible. After severance of the supra- and subcrestal fibrous attachment using scalpels and periotomes, elevation of the tooth frequently allows extraction with minimal socket wall damage. Extraction sockets should not be acutely infected and be completely free of any soft tissue fragments before any grafting or augmentation is attempted. Socket bleeding that mixes with the grafting material seems essential for success of this procedure. Various types of bone grafting materials have been suggested for this purpose, and some have shown promising results. Coverage of the grafted extraction site with wound dressing materials, coronal flap advancement, or even barrier membranes may enhance wound stability and an undisturbed healing process. Future controlled clinical trials are necessary to determine the ideal regimen for socket augmentation.",
"title": ""
},
{
"docid": "e5104baa94ee849d3544c865443a2223",
"text": "Modern attacks are being made against client side applications, such as web browsers, which most users use to surf and communicate on the internet. Client honeypots visit and interact with suspect web sites in order to detect and collect information about malware to protect users from malicious websites or to allow security professionals to investigate malicious content. This paper will present the idea of using web-based technology and integrating it with a client honeypot by building a low interaction client honeypot tool called Honeyware. It describes the benefits of Honeyware as well as the challenges of a low interaction client honeypot and provides some ideas for how these challenges could be overcome.",
"title": ""
},
{
"docid": "e507c60b8eb437cbd6ca9692f1bf8727",
"text": "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.",
"title": ""
},
{
"docid": "c3bc93b903ccf89fab5100688714d705",
"text": "Incidents of computer abuse, proprietary information leaks and other security lapses have been on an increase. Most often, such security lapses are attributed to internal employees in organizations subverting established organizational information security policy (ISP). As employee compliance with ISP is the key to escalating information security breaches, understanding employee motivation for following ISP is critical. Using the Thomas and Velthouse’s (1990) intrinsic motivation model, we investigate the role of intrinsic motivation for ISP compliance. Through survey data collected from 289 participants, the study assesses how psychological empowerment, as derived from information security task, may impact the information security performance of the participants, which is measured by their compliance with ISP. The study demonstrates that the psychological empowerment has a positive impact on participants’ ISP compliance intention. Furthermore, the psychological empowerment can be predicted by structural empowerment practices, particularly security education, training, and awareness (SETA), access to information security strategy and goals, and participation in information security decision-making. In addition, the psychological empowerment may act as a mediator for the relations between structural empowerment practices and participants’ ISP compliance. Theoretical contributions, managerial implications, and directions for future research of this study are discussed.",
"title": ""
},
{
"docid": "d17bb9bdaad70d15f21dbde1a2be594d",
"text": "Holistic 3D indoor scene understanding refers to jointly recovering the i) object bounding boxes, ii) room layout, and iii) camera pose, all in 3D. The existing methods either are ineffective or only tackle the problem partially. In this paper, we propose an end-to-end model that simultaneously solves all three tasks in realtime given only a single RGB image. The essence of the proposed method is to improve the prediction by i) parametrizing the targets (e.g., 3D boxes) instead of directly estimating the targets, and ii) cooperative training across different modules in contrast to training these modules individually. Specifically, we parametrize the 3D object bounding boxes by the predictions from several modules, i.e., 3D camera pose and object attributes. The proposed method provides two major advantages: i) The parametrization helps maintain the consistency between the 2D image and the 3D world, thus largely reducing the prediction variances in 3D coordinates. ii) Constraints can be imposed on the parametrization to train different modules simultaneously. We call these constraints \"cooperative losses\" as they enable the joint training and inference. We employ three cooperative losses for 3D bounding boxes, 2D projections, and physical constraints to estimate a geometrically consistent and physically plausible 3D scene. Experiments on the SUN RGB-D dataset shows that the proposed method significantly outperforms prior approaches on 3D object detection, 3D layout estimation, 3D camera pose estimation, and holistic scene understanding.",
"title": ""
},
{
"docid": "dd145aafe2f80b132e02c05eab2df870",
"text": "By performing a systematic study of the Hénon map, we find low-period sinks for parameter values extremely close to the classical ones. This raises the question whether or not the well-known Hénon attractor-the attractor of the Hénon map existing for the classical parameter values-is a strange attractor, or simply a stable periodic orbit. Using results from our study, we conclude that even if the latter were true, it would be practically impossible to establish this by computing trajectories of the map.",
"title": ""
},
{
"docid": "1abc8cbd17d1de7ee50430eb65b62fec",
"text": "Digital immersion is moving into public space. Interactive screens and public displays are deployed in urban environments, malls, and shop windows. Inner city areas, airports, train stations and stadiums are experiencing a transformation from traditional to digital displays enabling new forms of multimedia presentation and new user experiences. Imagine a walkway with digital displays that allows a user to immerse herself in her favorite content while moving through public space. In this paper we discuss the fundamentals for creating exciting public displays and multimedia experiences enabling new forms of engagement with digital content. Interaction in public space and with public displays can be categorized in phases, each having specific requirements. Attracting, engaging and motivating the user are central design issues that are addressed in this paper. We provide a comprehensive analysis of the design space explaining mental models and interaction modalities and we conclude a taxonomy for interactive public display from this analysis. Our analysis and the taxonomy are grounded in a large number of research projects, art installations and experience. With our contribution we aim at providing a comprehensive guide for designers and developers of interactive multimedia on public displays.",
"title": ""
},
{
"docid": "625f1f11e627c570e26da9f41f89a28b",
"text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "eebf03df49eb4a99f61d371e059ef43e",
"text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].",
"title": ""
},
{
"docid": "6c75e0532f637448cdec57bf30e76a4e",
"text": "A wide range of machine learning problems, including astronomical inference about galaxy clusters, natural image scene classification, parametric statistical inference, and predictions of public opinion, can be well-modeled as learning a function on (samples from) distributions. This thesis explores problems in learning such functions via kernel methods. The first challenge is one of computational efficiency when learning from large numbers of distributions: the computation of typicalmethods scales between quadratically and cubically, and so they are not amenable to large datasets. We investigate the approach of approximate embeddings into Euclidean spaces such that inner products in the embedding space approximate kernel values between the source distributions. We present a new embedding for a class of information-theoretic distribution distances, and evaluate it and existing embeddings on several real-world applications. We also propose the integration of these techniques with deep learning models so as to allow the simultaneous extraction of rich representations for inputs with the use of expressive distributional classifiers. In a related problem setting, common to astrophysical observations, autonomous sensing, and electoral polling, we have the following challenge: when observing samples is expensive, but we can choose where we would like to do so, how do we pick where to observe? We propose the development of a method to do so in the distributional learning setting (which has a natural application to astrophysics), as well as giving a method for a closely related problem where we search for instances of patterns by making point observations. Our final challenge is that the choice of kernel is important for getting good practical performance, but how to choose a good kernel for a given problem is not obvious. We propose to adapt recent kernel learning techniques to the distributional setting, allowing the automatic selection of good kernels for the task at hand. Integration with deep networks, as previously mentioned, may also allow for learning the distributional distance itself. Throughout, we combine theoretical results with extensive empirical evaluations to increase our understanding of the methods.",
"title": ""
},
{
"docid": "67b5bd59689c325365ac765a17886169",
"text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.",
"title": ""
},
{
"docid": "e6bb946ea2984ccb54fd37833bb55585",
"text": "11 Automatic Vehicles Counting and Recognizing (AVCR) is a very challenging topic in transport engineering having important implications for the modern transport policies. Implementing a computer-assisted AVCR in the most vital districts of a country provides a large amount of measurements which are statistically processed and analyzed, the purpose of which is to optimize the decision-making of traffic operation, pavement design, and transportation planning. Since the advent of computer vision technology, video-based surveillance of road vehicles has become a key component in developing autonomous intelligent transportation systems. In this context, this paper proposes a Pattern Recognition system which employs an unsupervised clustering algorithm with the objective of detecting, counting and recognizing a number of dynamic objects crossing a roadway. This strategy defines a virtual sensor, whose aim is similar to that of an inductive-loop in a traditional mechanism, i.e. to extract from the traffic video streaming a number of signals containing anarchic information about the road traffic. Then, the set of signals is filtered with the aim of conserving only motion’s significant patterns. Resulted data are subsequently processed by a statistical analysis technique so as to estimate and try to recognize a number of clusters corresponding to vehicles. Finite Mixture Models fitted by the EM algorithm are used to assess such clusters, which provides ∗Corresponding author Email addresses: hana.rabbouch@gmail.com (Hana RABBOUCH), foued.saadaoui@gmail.com (Foued SAÂDAOUI), rafaa_mraihi@yahoo.fr (Rafaa MRAIHI) Preprint submitted to Journal of LTEX Templates April 21, 2017",
"title": ""
},
{
"docid": "815950cb5c3d3c8bc489c34c2598c626",
"text": "In four studies, the authors investigated the proposal that in the context of an elite university, individuals from relatively lower socioeconomic status (SES) backgrounds possess a stigmatized identity and, as such, experience (a) concerns regarding their academic fit and (b) self-regulatory depletion as a result of managing these concerns. Study 1, a correlational study, revealed the predicted associations between SES, concerns about academic fit, and self-regulatory strength. Results from Studies 2 and 3 suggested that self-presentation involving the academic domain is depleting for lower (but not higher) SES students: After a self-presentation task about academic achievement, lower SES students consumed more candy (Study 2) and exhibited poorer Stroop performance (Study 3) relative to their higher SES peers; in contrast, the groups did not differ after discussing a nonacademic topic (Study 3). Study 4 revealed the potential for eliminating the SES group difference in depletion via a social comparison manipulation. Taken together, these studies support the hypothesis that managing concerns about marginality can have deleterious consequences for self-regulatory resources.",
"title": ""
},
{
"docid": "19b041beb43aadfbde514dc5bb7f7da5",
"text": "The European Train Control System (ETCS) is the leading signaling system for train command and control. In the future, ETCS may be delivered over long-term evolution (LTE) networks. Thus, LTE performance offered to ETCS must be analyzed and confronted with the railway safety requirements. It is especially important to ensure the integrity of the ETCS data, i.e., to protect ETCS data against loss and corruption. In this article, various retransmission mechanisms are considered for providing end-to-end ETCS data integrity in LTE. These mechanisms are validated in simulations, which model worst-case conditions regarding train locations, traffic load, and base-station density. The simulation results show that ETCS data integrity requirements can be fulfilled even under these unfavorable conditions with the proper LTE mechanisms.",
"title": ""
},
{
"docid": "b4f06236b0babb6cd049c8914170d7bf",
"text": "We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.",
"title": ""
},
{
"docid": "1e8ebfd3773a91534f00e27d899c522a",
"text": "Potential impacts of projected climate change on biodiversity are often assessed using single-species bioclimatic ‘envelope’ models. Such models are a special case of species distribution models in which the current geographical distribution of species is related to climatic variables so to enable projections of distributions under future climate change scenarios. This work reviews a number of critical methodological issues that may lead to uncertainty in predictions from bioclimatic modelling. Particular attention is paid to recent developments of bioclimatic modelling that address some of these issues as well as to the topics where more progress needs to be made. Developing and applying bioclimatic models in a informative way requires good understanding of a wide range of methodologies, including the choice of modelling technique, model validation, collinearity, autocorrelation, biased sampling of explanatory variables, scaling and impacts of nonclimatic factors. A key challenge for future research is integrating factors such as land cover, direct CO2 effects, biotic interactions and dispersal mechanisms into species-climate models. We conclude that, although bioclimatic envelope models have a number of important advantages, they need to be applied only when users of models have a thorough understanding of their limitations and uncertainties.",
"title": ""
},
{
"docid": "eebeb59c737839e82ecc20a748b12c6b",
"text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.",
"title": ""
}
] |
scidocsrr
|
749466410f80db68ff91b3e2a31105c2
|
Subjectivity and sentiment analysis of Arabic: Trends and challenges
|
[
{
"docid": "c757cc329886c1192b82f36c3bed8b7f",
"text": "Though much research has been conducted on Subjectivity and Sentiment Analysis (SSA) during the last decade, little work has focused on Arabic. In this work, we focus on SSA for both Modern Standard Arabic (MSA) news articles and dialectal Arabic microblogs from Twitter. We showcase some of the challenges associated with SSA on microblogs. We adopted a random graph walk approach to extend the Arabic SSA lexicon using ArabicEnglish phrase tables, leading to improvements for SSA on Arabic microblogs. We used different features for both subjectivity and sentiment classification including stemming, part-of-speech tagging, as well as tweet specific features. Our classification features yield results that surpass Arabic SSA results in the literature.",
"title": ""
},
{
"docid": "3553d1dc8272bf0366b2688e5107aa3f",
"text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.",
"title": ""
}
] |
[
{
"docid": "93dd889fe9be3209be31e77c7191ac17",
"text": "The aim of this review is to provide greater insight and understanding regarding the scientific nature of cycling. Research findings are presented in a practical manner for their direct application to cycling. The two parts of this review provide information that is useful to athletes, coaches and exercise scientists in the prescription of training regimens, adoption of exercise protocols and creation of research designs. Here for the first time, we present rationale to dispute prevailing myths linked to erroneous concepts and terminology surrounding the sport of cycling. In some studies, a review of the cycling literature revealed incomplete characterisation of athletic performance, lack of appropriate controls and small subject numbers, thereby complicating the understanding of the cycling research. Moreover, a mixture of cycling testing equipment coupled with a multitude of exercise protocols stresses the reliability and validity of the findings. Our scrutiny of the literature revealed key cycling performance-determining variables and their training-induced metabolic responses. The review of training strategies provides guidelines that will assist in the design of aerobic and anaerobic training protocols. Paradoxically, while maximal oxygen uptake (V-O(2max)) is generally not considered a valid indicator of cycling performance when it is coupled with other markers of exercise performance (e.g. blood lactate, power output, metabolic thresholds and efficiency/economy), it is found to gain predictive credibility. The positive facets of lactate metabolism dispel the 'lactic acid myth'. Lactate is shown to lower hydrogen ion concentrations rather than raise them, thereby retarding acidosis. Every aspect of lactate production is shown to be advantageous to cycling performance. To minimise the effects of muscle fatigue, the efficacy of employing a combination of different high cycling cadences is evident. The subconscious fatigue avoidance mechanism 'teleoanticipation' system serves to set the tolerable upper limits of competitive effort in order to assure the athlete completion of the physical challenge. Physiological markers found to be predictive of cycling performance include: (i) power output at the lactate threshold (LT2); (ii) peak power output (W(peak)) indicating a power/weight ratio of > or =5.5 W/kg; (iii) the percentage of type I fibres in the vastus lateralis; (iv) maximal lactate steady-state, representing the highest exercise intensity at which blood lactate concentration remains stable; (v) W(peak) at LT2; and (vi) W(peak) during a maximal cycling test. Furthermore, the unique breathing pattern, characterised by a lack of tachypnoeic shift, found in professional cyclists may enhance the efficiency and metabolic cost of breathing. The training impulse is useful to characterise exercise intensity and load during training and competition. It serves to enable the cyclist or coach to evaluate the effects of training strategies and may well serve to predict the cyclist's performance. Findings indicate that peripheral adaptations in working muscles play a more important role for enhanced submaximal cycling capacity than central adaptations. Clearly, relatively brief but intense sprint training can enhance both glycolytic and oxidative enzyme activity, maximum short-term power output and V-O(2max). To that end, it is suggested to replace approximately 15% of normal training with one of the interval exercise protocols. 
Tapering, through reduction in duration of training sessions or the frequency of sessions per week while maintaining intensity, is extremely effective for improvement of cycling time-trial performance. Overuse and over-training disabilities common to the competitive cyclist, if untreated, can lead to delayed recovery.",
"title": ""
},
{
"docid": "559637a4f8f5b99bb3210c5c7d03d2e0",
"text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "2752c235aea735a04b70272deb042ea6",
"text": "Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.",
"title": ""
},
{
"docid": "0c7b5a51a0698f261d147b2aa77acc83",
"text": "The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real-time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer vision research. To tackle these challenges, we present a social media image processing pipeline that combines human and machine intelligence to perform two important tasks: (i) capturing and filtering of social media imagery content (i.e., real-time image streaming, de-duplication, and relevancy filtering); and (ii) actionable information extraction (i.e., damage severity assessment) as a core situational awareness task during an on-going crisis event. Results obtained from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.",
"title": ""
},
{
"docid": "62d76b82614c64d022409081c71796a5",
"text": "The statistical modeling of large multi-relational datasets has increasingly gained attention in recent years. Typical applications involve large knowledge bases like DBpedia, Freebase, YAGO and the recently introduced Google Knowledge Graph that contain millions of entities, hundreds and thousands of relations, and billions of relational tuples. Collective factorization methods have been shown to scale up to these large multi-relational datasets, in particular in form of tensor approaches that can exploit the highly scalable alternating least squares (ALS) algorithms for calculating the factors. In this paper we extend the recently proposed state-of-the-art RESCAL tensor factorization to consider relational type-constraints. Relational type-constraints explicitly define the logic of relations by excluding entities from the subject or object role. In addition we will show that in absence of prior knowledge about type-constraints, local closed-world assumptions can be approximated for each relation by ignoring unobserved subject or object entities in a relation. In our experiments on representative large datasets (Cora, DBpedia), that contain up to millions of entities and hundreds of type-constrained relations, we show that the proposed approach is scalable. It further significantly outperforms RESCAL without type-constraints in both, runtime and prediction quality.",
"title": ""
},
{
"docid": "6fb0aac60ec74b5efca4eeda22be979d",
"text": "Images captured in hazy or foggy weather conditions are seriously degraded by the scattering of atmospheric particles, which directly influences the performance of outdoor computer vision systems. In this paper, a fast algorithm for single image dehazing is proposed based on linear transformation by assuming that a linear relationship exists in the minimum channel between the hazy image and the haze-free image. First, the principle of linear transformation is analyzed. Accordingly, the method of estimating a medium transmission map is detailed and the weakening strategies are introduced to solve the problem of the brightest areas of distortion. To accurately estimate the atmospheric light, an additional channel method is proposed based on quad-tree subdivision. In this method, average grays and gradients in the region are employed as assessment criteria. Finally, the haze-free image is obtained using the atmospheric scattering model. Numerous experimental results show that this algorithm can clearly and naturally recover the image, especially at the edges of sudden changes in the depth of field. It can, thus, achieve a good effect for single image dehazing. Furthermore, the algorithmic time complexity is a linear function of the image size. This has obvious advantages in running time by guaranteeing a balance between the running speed and the processing effect.",
"title": ""
},
{
"docid": "903b68096d2559f0e50c38387260b9c8",
"text": "Vitamin C in humans must be ingested for survival. Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.",
"title": ""
},
{
"docid": "cf121f496ae49eed2846b5be05d35d4d",
"text": "Objective: This study provides evidence for the validity and reliability of the Rey Auditory Verbal Learning Test",
"title": ""
},
{
"docid": "d9cdbff5533837858b1cd8334acd128d",
"text": "A four-leaf steel spring used in the rear suspension system of light vehicles is analyzed using ANSYS V5.4 software. The finite element results showing stresses and deflections verified the existing analytical and experimental solutions. Using the results of the steel leaf spring, a composite one made from fiberglass with epoxy resin is designed and optimized using ANSYS. Main consideration is given to the optimization of the spring geometry. The objective was to obtain a spring with minimum weight that is capable of carrying given static external forces without failure. The design constraints were stresses (Tsai–Wu failure criterion) and displacements. The results showed that an optimum spring width decreases hyperbolically and the thickness increases linearly from the spring eyes towards the axle seat. Compared to the steel spring, the optimized composite spring has stresses that are much lower, the natural frequency is higher and the spring weight without eye units is nearly 80% lower. 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "deca482835114a5a0fd6dbdc62ae54d0",
"text": "This paper presents an approach to design the transformer and the link inductor for the high-frequency link matrix converter. The proposed method aims to systematize the design process of the HF-link using analytic and software tools. The models for the characterization of the core and winding losses have been reviewed. Considerations about the practical implementation and construction of the magnetic devices are also provided. The software receives the inputs from the mathematical analysis and runs the optimization to find the best design. A 10 kW / 20 kHz transformer plus a link inductor are designed using this strategy achieving a combined efficiency of 99.32%.",
"title": ""
},
{
"docid": "c2d926337d32cf88838546d19e6f9bde",
"text": "This paper discusses the use of natural language or „conversational‟ agents in e-learning environments. We describe and contrast the various applications of conversational agent technology represented in the e-learning literature, including tutors, learning companions, language practice and systems to encourage reflection. We offer two more detailed examples of conversational agents, one which provides learning support, and the other support for self-assessment. Issues and challenges for developers of conversational agent systems for e-learning are identified and discussed.",
"title": ""
},
{
"docid": "8b5ea4603ac53a837c3e81dfe953a706",
"text": "Many teaching practices implicitly assume that conceptual knowledge can be abstracted from the situations in which it is learned and used. This article argues that this assumption inevitably limits the effectiveness of such practices. Drawing on recent research into cognition as it is manifest in everyday activity, the authors argue that knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used. They discuss how this view of knowledge affects our understanding of learning, and they note that conventional schooling too often ignores the influence of school culture on what is learned in school. As an alternative to conventional practices, they propose cognitive apprenticeship (Collins, Brown, Newman, in press), which honors the situated nature of knowledge. They examine two examples of mathematics instruction that exhibit certain key features of this approach to teaching. The breach between learning and use, which is captured by the folk categories \"know what\" and \"know how,\" may well be a product of the structure and practices of our education system. Many methods of didactic education assume a separation between knowing and doing, treating knowledge as an integral, self-sufficient substance, theoretically independent of the situations in which it is learned and used. The primary concern of schools often seems to be the transfer of this substance, which comprises abstract, decontextualized formal concepts. The activity and context in which learning takes place are thus regarded as merely ancillary to learning---pedagogically useful, of course, but fundamentally distinct and even neutral with respect to what is learned. Recent investigations of learning, however, challenge this separating of what is learned from how it is learned and used. The activity in which knowledge is developed and deployed, it is now argued, is not separable from or ancillary to learning and cognition. Nor is it neutral. Rather, it is an integral part of what is learned. Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated. In this paper, we try to explain in a deliberately speculative way, why activity and situations are integral to cognition and learning, and how different ideas of what is appropriate learning activity produce very different results. We suggest that, by ignoring the situated nature of cognition, education defeats its own goal of providing useable, robust knowledge. And conversely, we argue that approaches such as cognitive apprenticeship (Collins, Brown, & Newman, in press) that embed learning in activity and make deliberate use of the social and physical context are more in line with the understanding of learning and cognition that is emerging from research. Situated Knowledge and Learning Miller and Gildea's (1987) work on vocabulary teaching has shown how the assumption that knowing and doing can be separated leads to a teaching method that ignores the way situations structure cognition. Their work has described how children are taught words from dictionary definitions and a few exemplary sentences, and they have compared this method with the way vocabulary is normally learned outside school. People generally learn words in the context of ordinary communication. This process is startlingly fast and successful. 
Miller and Gildea note that by listening, talking, and reading, the average 17-year-old has learned vocabulary at a rate of 5,000 words per year (13 per day) for over 16 years. By contrast, learning words from abstract definitions and sentences taken out of the context of normal use, the way vocabulary has often been taught, is slow and generally unsuccessful. There is barely enough classroom time to teach more than 100 to 200 words per year. Moreover, much of what is taught turns out to be almost useless in practice. They give the following examples of students' uses of vocabulary acquired this way: \"Me and my parents correlate, because without them I wouldn't be here.\" \"I was meticulous about falling off the cliff.\" \"Mrs. Morrow stimulated the soup.\" Given the method, such mistakes seem unavoidable. Teaching from dictionaries assumes that definitions and exemplary sentences are self-contained \"pieces\" of knowledge. But words and sentences are not islands, entire unto themselves. Language use would involve an unremitting confrontation with ambiguity, polysemy, nuance, metaphor, and so forth were these not resolved with the extralinguistic help that the context of an utterance provides (Nunberg, 1978). Prominent among the intricacies of language that depend on extralinguistic help are indexical words --words like I, here, now, next, tomorrow, afterwards, this. Indexical terms are those that \"index\" or more plainly point to a part of the situation in which communication is being conducted. They are not merely context-sensitive; they are completely context-dependent. Words like I or now, for instance, can only be interpreted in the context of their use. Surprisingly, all words can be seen as at least partially indexical (Barwise & Perry, 1983). Experienced readers implicitly understand that words are situated. They, therefore, ask for the rest of the sentence or the context before committing themselves to an interpretation of a word. And they go to dictionaries with situated examples of usage in mind. The situation as well as the dictionary supports the interpretation. But the students who produced the sentences listed had no support from a normal communicative situation. In tasks like theirs, dictionary definitions are assumed to be self-sufficient. The extralinguistic props that would structure, constrain, and ultimately allow interpretation in normal communication are ignored. Learning from dictionaries, like any method that tries to teach abstract concepts independently of authentic situations, overlooks the way understanding is developed through continued, situated use. This development, which involves complex social negotiations, does not crystallize into a categorical definition. Because it is dependent on situations and negotiations, the meaning of a word cannot, in principle, be captured by a definition, even when the definition is supported by a couple of exemplary sentences. All knowledge is, we believe, like language. Its constituent parts index the world and so are inextricably a product of the activity and situations in which they are produced. 
A concept, for example, will continually evolve with each new occasion of use, because new situations, negotiations, and activities inevitably recast it in a new, more densely textured form. So a concept, like the meaning of a word, is always under construction. This would also appear to be true of apparently well-defined, abstract technical concepts. Even these are not wholly definable and defy categorical description; part of their meaning is always inherited from the context of use. Learning and tools. To explore the idea that concepts are both situated and progressively developed through activity, we should abandon any notion that they are abstract, self-contained entities. Instead, it may be more useful to consider conceptual knowledge as, in some ways, similar to a set of tools. Tools share several significant features with knowledge: They can only be fully understood through use, and using them entails both changing the user's view of the world and adopting the belief system of the culture in which they are used. First, if knowledge is thought of as tools, we can illustrate Whitehead's (1929) distinction between the mere acquisition of inert concepts and the development of useful, robust knowledge. It is quite possible to acquire a tool but to be unable to use it. Similarly, it is common for students to acquire algorithms, routines, and decontextualized definitions that they cannot use and that, therefore, lie inert. Unfortunately, this problem is not always apparent. Old-fashioned pocket knives, for example, have a device for removing stones from horses' hooves. People with this device may know its use and be able to talk wisely about horses, hooves, and stones. But they may never betray --or even recognize --that they would not begin to know how to use this implement on a horse. Similarly, students can often manipulate algorithms, routines, and definitions they have acquired with apparent competence and yet not reveal, to their teachers or themselves, that they would have no idea what to do if they came upon the domain equivalent of a limping horse. People who use tools actively rather than just acquire them, by contrast, build an increasingly rich implicit understanding of the world in which they use the tools and of the tools themselves. The understanding, both of the world and of the tool, continually changes as a result of their interaction. Learning and acting are interestingly indistinct, learning being a continuous, life-long process resulting from acting in situations. Learning how to use a tool involves far more than can be accounted for in any set of explicit rules. The occasions and conditions for use arise directly out of the context of activities of each community that uses the tool, framed by the way members of that community see the world. The community and its viewpoint, quite as much as the tool itself, determine how a tool is used. Thus, carpenters and cabinet makers use chisels differently. Because tools and the way they are used reflect the particular accumulated insights of communities, it is not ",
"title": ""
},
{
"docid": "e28ee6e29f61652f752ef311ebb40eaa",
"text": "The increasing prevalence of Distributed Denial of Service (DDoS) attacks on the Internet has led to the wide adoption of DDoS Protection Service (DPS), which is typically provided by Content Delivery Networks (CDNs) and is integrated with CDN's security extensions. The effectiveness of DPS mainly relies on hiding the IP address of an origin server and rerouting the traffic to the DPS provider's distributed infrastructure, where malicious traffic can be blocked. In this paper, we perform a measurement study on the usage dynamics of DPS customers and reveal a new vulnerability in DPS platforms, called residual resolution, by which a DPS provider may leak origin IP addresses when its customers terminate the service or switch to other platforms, resulting in the failure of protection from future DPS providers as adversaries are able to discover the origin IP addresses and launch the DDoS attack directly to the origin servers. We identify that two major DPS/CDN providers, Cloudflare and Incapsula, are vulnerable to such residual resolution exposure, and we then assess the magnitude of the problem in the wild. Finally, we discuss the root causes of residual resolution and the practical countermeasures to address this security vulnerability.",
"title": ""
},
{
"docid": "40db41aa0289dbf45bef067f7d3e3748",
"text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male. Potential applications of this research include designs of industrial workstations, equipment, tools and products.",
"title": ""
},
{
"docid": "0e6ed8195ef4ebadf86d881770c78137",
"text": "In mixed radio-frequency (RF) and digital designs, noise from high-speed digital circuits can interfere with RF receivers, resulting in RF interference issues such as receiver desensitization. In this paper, an effective methodology is proposed to estimate the RF interference received by an antenna due to near-field coupling, which is one of the common noise-coupling mechanisms, using decomposition method based on reciprocity. In other words, the noise-coupling problem is divided into two steps. In the first step, the coupling from the noise source to a Huygens surface that encloses the antenna is studied, with the actual antenna structure removed, and the induced tangential electromagnetic fields due to the noise source on this surface are obtained. In the second step, the antenna itself with the same Huygens surface is studied. The antenna is treated as a transmitting one and the induced tangential electromagnetic fields on the surface are obtained. Then, the reciprocity theory is used and the noise power coupled to the antenna port in the original problem is estimated based on the results obtained in the two steps. The proposed methodology is validated through comparisons with full-wave simulations. It fits well with engineering practice, and is particularly suitable for prelayout wireless system design and planning.",
"title": ""
},
{
"docid": "88033862d9fac08702977f1232c91f3a",
"text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.",
"title": ""
},
{
"docid": "a280f710b0e41d844f1b9c76e7404694",
"text": "Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. Limitations and implications are discussed.",
"title": ""
},
{
"docid": "c2e53358f9d78071fc5204624cf9d6ad",
"text": "This paper explores how the adoption of mobile and social computing technologies has impacted upon the way in which we coordinate social group-activities. We present a diary study of 36 individuals that provides an overview of how group coordination is currently performed as well as the challenges people face. Our findings highlight that people primarily use open-channel communication tools (e.g., text messaging, phone calls, email) to coordinate because the alternatives are seen as either disrupting or curbing to the natural conversational processes. Yet the use of open-channel tools often results in conversational overload and a significant disparity of work between coordinating individuals. This in turn often leads to a sense of frustration and confusion about coordination details. We discuss how the findings argue for a significant shift in our thinking about the design of coordination support systems.",
"title": ""
},
{
"docid": "67f13c2b686593398320d8273d53852f",
"text": "Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDI, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.",
"title": ""
},
{
"docid": "09f812cae6c8952d27ef86168906ece8",
"text": "Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study, instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.<<ETX>>",
"title": ""
}
] |
scidocsrr
|
b780732230781a8e75ef08a1dfef0842
|
Quantitatively Evaluating GANs With Divergences Proposed for Training
|
[
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
},
{
"docid": "6573629e918822c0928e8cf49f20752c",
"text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",
"title": ""
}
] |
[
{
"docid": "ad0f2a93aa00e7711f28bd0dd1482367",
"text": "Many applications in mobile robotics and especially industrial applications require that the robot has a precise estimate about its pose. In this paper, we analyze the accuracy of an integrated laser-based robot pose estimation and positioning system for mobile platforms. For our analysis, we used a highly accurate motion capture system to precisely determine the error in the robot's pose. We are able to show that by combining standard components such as Monte-Carlo localization, KLD sampling, and scan matching, an accuracy of a few millimeters at taught-in reference locations can be achieved. We believe that this is an important analysis for developers of robotic applications in which pose accuracy matters.",
"title": ""
},
{
"docid": "3a9b15a7c21144ffcb26453473cadaa6",
"text": "In this paper, the design and realization of microstrip-based ultra-wideband (UWB) composite bandpass filter (BPF) with short-circuited stubs is presented. The BPF is compositely constructed from the step impedance lowpass filter (LPF) and the optimum distributed highpass filter (HPF). Prior to the realization, the performances of filter and its physical dimension are investigated numerically to obtain the optimum design. The BPF is deployed on a grounded FR4 Epoxy dielectric substrate with the thickness of 0.8mm and the dimension of 25mm x 25mm. From the measurement result, it shows that the realized UWB composite BPF has 3dB bandwidth response of 10.03GHz ranges from 1.86GHz to 11.89GHz.",
"title": ""
},
{
"docid": "80d1237fff963ebf4bcc5fab67c68f4e",
"text": "Researchers have studied whether some youth are \"addicted\" to video games, but previous studies have been based on regional convenience samples. Using a national sample, this study gathered information about video-gaming habits and parental involvement in gaming, to determine the percentage of youth who meet clinical-style criteria for pathological gaming. A Harris poll surveyed a randomly selected sample of 1,178 American youth ages 8 to 18. About 8% of video-game players in this sample exhibited pathological patterns of play. Several indicators documented convergent and divergent validity of the results: Pathological gamers spent twice as much time playing as nonpathological gamers and received poorer grades in school; pathological gaming also showed comorbidity with attention problems. Pathological status significantly predicted poorer school performance even after controlling for sex, age, and weekly amount of video-game play. These results confirm that pathological gaming can be measured reliably, that the construct demonstrates validity, and that it is not simply isomorphic with a high amount of play.",
"title": ""
},
{
"docid": "d6c34d138692851efdbb807a89d0fcca",
"text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.",
"title": ""
},
{
"docid": "30a4239a93234d2c07e6618f4da730fa",
"text": "BACKGROUND\nAortic stiffness is a marker of cardiovascular disease and an independent predictor of cardiovascular risk. Although an association between inflammatory markers and increased arterial stiffness has been suggested, the causative relationship between inflammation and arterial stiffness has not been investigated.\n\n\nMETHODS AND RESULTS\nOne hundred healthy individuals were studied according to a randomized, double-blind, sham procedure-controlled design. Each substudy consisted of 2 treatment arms, 1 with Salmonella typhi vaccination and 1 with sham vaccination. Vaccination produced a significant (P<0.01) increase in pulse wave velocity (at 8 hours by 0.43 m/s), denoting an increase in aortic stiffness. Wave reflections were reduced significantly (P<0.01) by vaccination (decrease in augmentation index of 5.0% at 8 hours and 2.5% at 32 hours) as a result of peripheral vasodilatation. These effects were associated with significant increases in inflammatory markers such as high-sensitivity C-reactive protein (P<0.001), high-sensitivity interleukin-6 (P<0.001), and matrix metalloproteinase-9 (P<0.01). With aspirin pretreatment (1200 mg PO), neither pulse wave velocity nor augmentation index changed significantly after vaccination (increase of 0.11 m/s and 0.4%, respectively; P=NS for both).\n\n\nCONCLUSIONS\nThis is the first study to show through a cause-and-effect relationship that acute systemic inflammation leads to deterioration of large-artery stiffness and to a decrease in wave reflections. These findings have important implications, given the importance of aortic stiffness for cardiovascular function and risk and the potential of therapeutic interventions with antiinflammatory properties.",
"title": ""
},
{
"docid": "ce48548c0004b074b18f95792f3e6ce8",
"text": "In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification. We first design a new auxiliary task based on sentiment scores of domain-independent words. We then propose two neural network architectures to respectively induce document embeddings and sentence embeddings that work well for different domains. When these document and sentence embeddings are used for sentiment classification, we find that with both pseudo and external sentiment lexicons, our proposed methods can perform similarly to or better than several highly competitive domain adaptation methods on a benchmark dataset of product reviews.",
"title": ""
},
{
"docid": "068c988de4a53acec3bc58d7e8c6ba69",
"text": "The popularity of cloud-based interactive computing services (e.g., virtual desktops) brings new management challenges. Each interactive user leaves abundant but fluctuating residual resources while being intolerant to latency, precluding the use of aggressive VM consolidation. In this paper, we present the Resource Harvester for Interactive Clouds (RHIC), an autonomous management framework that harnesses dynamic residual resources aggressively without slowing the harvested interactive services. RHIC builds ad-hoc clusters for running throughput-oriented \"background\" workloads using a hybrid of residual and dedicated resources. For a given background job, RHIC intelligently discovers/maintains the ideal cluster size and composition, to meet user-specified goals such as cost/energy minimization or deadlines. RHIC employs black-box workload performance modeling, requiring only system-level metrics and incorporating techniques to improve modeling accuracy under bursty and heterogeneous residual resources. Our results show that RHIC finds near-ideal cluster sizes/compositions across a wide range of workload/goal combinations, significantly outperforms alternative approaches, tolerates high instability in the harvested interactive cloud, works with heterogeneous hardware and imposes minimal overhead.",
"title": ""
},
{
"docid": "40fef2ba4ae0ecd99644cf26ed8fa37f",
"text": "Plant has plenty use in foodstuff, medicine and industry. And it is also vitally important for environmental protection. However, it is an important and difficult task to recognize plant species on earth. Designing a convenient and automatic recognition system of plants is necessary and useful since it can facilitate fast classifying plants, and understanding and managing them. In this paper, a leaf database from different plants is firstly constructed. Then, a new classification method, referred to as move median centers (MMC) hypersphere classifier, for the leaf database based on digital morphological feature is proposed. The proposed method is more robust than the one based on contour features since those significant curvature points are hard to find. Finally, the efficiency and effectiveness of the proposed method in recognizing different plants is demonstrated by experiments. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9d61458cc1eecbc1c44067552a8841f2",
"text": "In many application domains, data are represented using large graphs involving millions of vertices and edges. Graph analysis algorithms, such as finding short paths and isomorphic subgraphs, are largely dominated by memory latency. Large cluster-based computing platforms can process graphs efficiently if the graph data can be partitioned, and on a smaller scale partitioning can be used to allocate graphs to low-latency on-chip RAMs in reconfigurable devices. However, there are many graph classes, such as scale-free social networks, which lack the locality to make partitioning graph data an efficient solution to the latency problem and are far too large to fit in on-chip RAMs and caches. In this paper, we present a framework for reconfigurable hardware acceleration of these large-scale graph problems that are difficult to partition and require high-latency off-chip memory storage. Our reconfigurable architecture tolerates off-chip memory latency by using a memory crossbar that connects many parallel identical processing elements to shared off-chip memory, without a traditional cached memory hierarchy. Quantitative comparison between the software and hardware performance of a graphlet counting case-study shows that our hardware implementation outperforms a quad-core software implementation by 10 times for large graphs. This speedup includes all software and IO overhead required, and reduces execution time for this common bioinformatics algorithm from about 2 hours to just 12 minutes. These results demonstrate that our methodology for accelerating graph algorithms is a promising approach for efficient parallel graph processing.",
"title": ""
},
{
"docid": "50d6f6a65099ce0ffb804f15a9adcaa1",
"text": "Machine Learning (ML) algorithms are now used in a wide range of application domains in society. Naturally, software implementations of these algorithms have become ubiquitous. Faults in ML software can cause substantial losses in these application domains. Thus, it is very critical to conduct effective testing of ML software to detect and eliminate its faults. However, testing ML software is difficult, partly because producing test oracles used for checking behavior correctness (such as using expected properties or expected test outputs) is challenging. In this paper, we propose an approach of multiple-implementation testing to test supervised learning software, a major type of ML software. In particular, our approach derives a test input’s proxy oracle from the majority-voted output running the test input of multiple implementations of the same algorithm (based on a pre-defined percentage threshold). Our approach reports likely those test inputs whose outputs (produced by an implementation under test) are different from the majority-voted outputs as failing tests. We evaluate our approach on two highly popular supervised learning algorithms: k-Nearest Neighbor (kNN) and Naive Bayes (NB). Our results show that our approach is highly effective in detecting faults in real-world supervised learning software. In particular, our approach detects 13 real faults and 1 potential fault from 19 kNN implementations and 16 real faults from 7 NB implementations. Our approach can even detect 7 real faults and 1 potential fault among the three popularly used open-source ML projects (Weka, RapidMiner,",
"title": ""
},
{
"docid": "acd0450b78a83819bf54b82efdf7668f",
"text": "Localization of mult i-agent systems is a fundamental requirement for multi-agent systems to operate and cooperate properly. The problem of localization can be divided into two categories; one in which a -priori informat ion is available and the second where the global position is to be asce rtained without a-priori informat ion. This paper gives a comprehensive survey of localization techniques that exist in the literature for both the categories with the objectives of knowing the current state-of-the-art, helping in selecting the proper approach in a given scenario and promoting research in this area. A detailed description of methods that exist in the literature are provided in considerable detail. Then these methods are compared, and their weaknesses and strengths are discussed. Finally, some future research recommendations are drawn out of this survey.",
"title": ""
},
{
"docid": "090af7b180f3e9d289d158f8ee385da9",
"text": "Natural medicines were the only option for the prevention and treatment of human diseases for thousands of years. Natural products are important sources for drug development. The amounts of bioactive natural products in natural medicines are always fairly low. Today, it is very crucial to develop effective and selective methods for the extraction and isolation of those bioactive natural products. This paper intends to provide a comprehensive view of a variety of methods used in the extraction and isolation of natural products. This paper also presents the advantage, disadvantage and practical examples of conventional and modern techniques involved in natural products research.",
"title": ""
},
{
"docid": "d846edbd57098464fa2b0f05e0e54942",
"text": "This paper explores recent developments in agile systems engineering. We draw a distinction between agility in the systems engineering process versus agility in the resulting system itself. In the first case the emphasis is on carefully exploring the space of design alternatives and to delay the freeze point as long as possible as new information becomes available during product development. In the second case we are interested in systems that can respond to changed requirements after initial fielding of the system. We provide a list of known and emerging methods in both domains and explore a number of illustrative examples such as the case of the Iridium satellite constellation or recent developments in the automobile industry.",
"title": ""
},
{
"docid": "ee377d6087c66b617ed3667499685d34",
"text": "This paper is aimed to propose a noise power ratio (NPR) measurement method with fewer tones than traditionally used. Accurate measurement of NPR distortion is achieved by averaging distortion power measured at the notch frequency excited by multi-tone signals with different random phases. Automatic measurement software is developed to perform all NPR measurement procedures. Measurement results show that the variance is below 0.4 dB after averaging 100 NPR distortions excited by 60-tone. Compared to the NPR measurement results obtained by a more-typical 10000-tone stimulus, the measurement error is 0.23 dB using only 60-tone signals with average.",
"title": ""
},
{
"docid": "7d0a7073733f8393478be44d820e89ae",
"text": "Modeling user-item interaction patterns is an important task for personalized recommendations. Many recommender systems are based on the assumption that there exists a linear relationship between users and items while neglecting the intricacy and non-linearity of real-life historical interactions. In this paper, we propose a neural network based recommendation model (NeuRec) that untangles the complexity of user-item interactions and establish an integrated network to combine non-linear transformation with latent factors. We further design two variants of NeuRec: userbased NeuRec and item-based NeuRec, by focusing on different aspects of the interaction matrix. Extensive experiments on four real-world datasets demonstrated their superior performances on personalized ranking task.",
"title": ""
},
{
"docid": "cd096d5e7c687facb8fa4edb0c1d3bbf",
"text": "We introduce a novel variational method that allows to approximately integrate out kernel hyperparameters, such as length-scales, in Gaussian process regression. This approach consists of a novel variant of the variational framework that has been recently developed for the Gaussian process latent variable model which additionally makes use of a standardised representation of the Gaussian process. We consider this technique for learning Mahalanobis distance metrics in a Gaussian process regression setting and provide experimental evaluations and comparisons with existing methods by considering datasets with high-dimensional inputs.",
"title": ""
},
{
"docid": "dbb78bd6d76a1080edd86c2a857fbbcb",
"text": "In this paper a phase detector is introduced which has a similar phase detector response as the Alexander phase detector. Both the Alexander and proposed phase detector are analyzed with respect to their robustness. The analysis shows that the novel phase detector is more robust against process non-idealities than the Alexander, with a 75% reduction in the variation of static phase offsets. The proposed phase detector also consumes less power and requires less area. A CDR circuit which implements the proposed phase detector was designed and fabricated in a 0.18mum six metal layer standard CMOS process. The fabricated CDR circuit can lock to pseudo-random bit sequences (PRBS) up to 231 - 1 at data rates from 5 - 6.25Gb/s. For a PRBS of 231 - 1 at 6.25Gb/s the measured rms jitter and peak-to-peak jitter were 1.7ps and 11ps.",
"title": ""
},
{
"docid": "d80d52806cbbdd6148e3db094eabeed7",
"text": "We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illumination from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting `scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. We obtained surprisingly good estimates of he ambient illumination lighting from the network even when applied to scenes in our lab that were completely unrelated to the training data.",
"title": ""
},
{
"docid": "7c4104651e484e4cbff5735d62f114ef",
"text": "A pair of salient tradeoffs have driven the multiple-input multiple-output (MIMO) systems developments. More explicitly, the early era of MIMO developments was predominantly motivated by the multiplexing-diversity tradeoff between the Bell Laboratories layered space-time and space-time block coding. Later, the linear dispersion code concept was introduced to strike a flexible tradeoff. The more recent MIMO system designs were motivated by the performance-complexity tradeoff, where the spatial modulation and space-time shift keying concepts eliminate the problem of inter-antenna interference and perform well with the aid of low-complexity linear receivers without imposing a substantial performance loss on generic maximum-likelihood/max a posteriori -aided MIMO detection. Against the background of the MIMO design tradeoffs in both uncoded and coded MIMO systems, in this treatise, we offer a comprehensive survey of MIMO detectors ranging from hard decision to soft decision. The soft-decision MIMO detectors play a pivotal role in approaching to the full-performance potential promised by the MIMO capacity theorem. In the near-capacity system design, the soft-decision MIMO detection dominates the total complexity, because all the MIMO signal combinations have to be examined, when both the channel’s output signal and the a priori log-likelihood ratios gleaned from the channel decoder are taken into account. Against this background, we provide reduced-complexity design guidelines, which are conceived for a wide-range of soft-decision MIMO detectors.",
"title": ""
}
] |
scidocsrr
|
192b4c9bc55138881188e98c53de918b
|
Wide Cell Pitch LPT(II)-CSTBT™(III) technology rating up to 6500 V for low loss
|
[
{
"docid": "2d9921e49e58725c9c85da02249c8d27",
"text": "Recently, the performance of Si power devices gradually approaches the physical limit, and the latest SiC device seemingly has the ability to substitute the Si insulated gate bipolar transistor (IGBT) in 1200 V class. In this paper, we demonstrate the feasibility of further improving the Si IGBT based on the new concept of CSTBTtrade. In point of view of low turn-off loss and high uniformity in device characteristics, we employ the techniques of fine-pattern and retro grade doping in the design of new device structures, resulting in significant reduction on the turn-off loss and the VGE(th) distribution, respectively.",
"title": ""
},
{
"docid": "3420aa0f36f8114a7c3962bf443bf884",
"text": "In this paper, for the first time, 600 ∼ 6500 V IGBTs utilizing a new vertical structure of “Light Punch-Through (LPT) (II)” with Thin Wafer Process Technology demonstrate high total performance with low overall loss and high safety operating area (SOA) capability. This collector structure enables a wide position in the trade-off characteristics between on-state voltage (VCE(sat)) and turn-off loss (EOFF) without utilizing any conventional carrier lifetime technique. In addition, this device concept achieves a wide operating junction temperature (@218 ∼ 423 K) of IGBT without the snap-back phenomena (≤298 K) and thermal destruction (≥398 K). From the viewpoint of the high performance of IGBT, the breaking limitation of any Si wafer size, the proposed LPT(II) concept that utilizes an FZ silicon wafer and Thin Wafer Technology is the most promising candidate as a vertical structure of IGBT for the any voltage class.",
"title": ""
}
] |
[
{
"docid": "7d472441fb112f0851bcfe6854b8663e",
"text": "Detection and recognition of traffic sign, including various road signs and text, play an important role in autonomous driving, mapping/navigation and traffic safety. In this paper, we proposed a traffic sign detection and recognition system by applying deep convolutional neural network (CNN), which demonstrates high performance with regard to detection rate and recognition accuracy. Compared with other published methods which are usually limited to a predefined set of traffic signs, our proposed system is more comprehensive as our target includes traffic signs, digits, English letters and Chinese characters. The system is based on a multi-task CNN trained to acquire effective features for the localization and classification of different traffic signs and texts. In addition to the public benchmarking datasets, the proposed approach has also been successfully evaluated on a field-captured Chinese traffic sign dataset, with performance confirming its robustness and suitability to real-world applications.",
"title": ""
},
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
{
"docid": "38fd6a2b2ea49fda599a70ec7e803cde",
"text": "The role of trace elements in biological systems has been described in several animals. However, the knowledge in fish is mainly limited to iron, copper, manganese, zinc and selenium as components of body fluids, cofactors in enzymatic reactions, structural units of non-enzymatic macromolecules, etc. Investigations in fish are comparatively complicated as both dietary intake and waterborne mineral uptake have to be considered in determining the mineral budgets. The importance of trace minerals as essential ingredients in diets, although in small quantities, is also evident in fish.",
"title": ""
},
{
"docid": "0db1caadc1f568ceaeafa6f063bf013b",
"text": "The modern musician enjoys access to a staggering number of audio samples. Composition software can ship with many gigabytes of data, and there are many more to be found online. However, conventional methods for navigating these libraries are still quite rudimentary, and often involve scrolling through alphabetical lists. We present AudioQuilt, a system for sample exploration that allows audio clips to be sorted according to user taste, and arranged in any desired 2D formation such that similar samples are located near each other. Our method relies on two advances in machine learning. First, metric learning allows the user to shape the audio feature space to match their own preferences. Second, kernelized sorting finds an optimal arrangement for the samples in 2D. We demonstrate our system with three new interfaces for exploring audio samples, and evaluate the technology qualitatively and quantitatively via a pair of user studies.",
"title": ""
},
{
"docid": "9cf5fc6b50010d1489f12d161f302428",
"text": "With the advent of large code repositories and sophisticated search capabilities, code search is increasingly becoming a key software development activity. In this work we shed some light into how developers search for code through a case study performed at Google, using a combination of survey and log-analysis methodologies. Our study provides insights into what developers are doing and trying to learn when per- forming a search, search scope, query properties, and what a search session under different contexts usually entails. Our results indicate that programmers search for code very frequently, conducting an average of five search sessions with 12 total queries each workday. The search queries are often targeted at a particular code location and programmers are typically looking for code with which they are somewhat familiar. Further, programmers are generally seeking answers to questions about how to use an API, what code does, why something is failing, or where code is located.",
"title": ""
},
{
"docid": "60718ad958d65eb60a520d516f1dd4ea",
"text": "With the advent of the Internet, more and more public universities in Malaysia are putting in effort to introduce e-learning in their respective universities. Using a structured questionnaire derived from the literature, data was collected from 250 undergraduate students from a public university in Penang, Malaysia. Data was analyzed using AMOS version 16. The results of the structural equation model indicated that service quality (β = 0.20, p < 0.01), information quality (β = 0.37, p < 0.01) and system quality (β = 0.20, p < 0.01) were positively related to user satisfaction explaining a total of 45% variance. The second regression analysis was to examine the impact of user satisfaction on continuance intention. The results showed that satisfaction (β = 0.31, p < 0.01), system quality (β = 0.18, p < 0.01) and service quality (β = 0.30, p < 0.01) were positively related to continuance intention explaining 44% of the variance. Implications from these findings to e-learning system developers and implementers were further elaborated.",
"title": ""
},
{
"docid": "1e31afb6d28b0489e67bb63d4dd60204",
"text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) into the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper at their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedbacks and knowledge obtained from test trials are also described.",
"title": ""
},
{
"docid": "e88c0e0fb76520ec323b90d8bd7ba64d",
"text": "The intestinal epithelium is the most rapidly self-renewing tissue in adult mammals. We have recently demonstrated the presence of about six cycling Lgr5+ stem cells at the bottoms of small-intestinal crypts. Here we describe the establishment of long-term culture conditions under which single crypts undergo multiple crypt fission events, while simultanously generating villus-like epithelial domains in which all differentiated cell types are present. Single sorted Lgr5+ stem cells can also initiate these cryptvillus organoids. Tracing experiments indicate that the Lgr5+ stem-cell hierarchy is maintained in organoids. We conclude that intestinal cryptvillus units are self-organizing structures, which can be built from a single stem cell in the absence of a non-epithelial cellular niche.",
"title": ""
},
{
"docid": "d0e8265bf57729b74375c9b476c4b028",
"text": "As experts in the health care of children and adolescents, pediatricians may be called on to advise legislators concerning the potential impact of changes in the legal status of marijuana on adolescents. Parents, too, may look to pediatricians for advice as they consider whether to support state-level initiatives that propose to legalize the use of marijuana for medical purposes or to decriminalize possession of small amounts of marijuana. This policy statement provides the position of the American Academy of Pediatrics on the issue of marijuana legalization, and the accompanying technical report (available online) reviews what is currently known about the relationship between adolescents' use of marijuana and its legal status to better understand how change might influence the degree of marijuana use by adolescents in the future.",
"title": ""
},
{
"docid": "3b45dbcb526574cc77f3a099b5a97cd9",
"text": "In this paper, we exploit a new multi-country historical dataset on public (government) debt to search for a systemic relationship between high public debt levels, growth and inflation. Our main result is that whereas the link between growth and debt seems relatively weak at “normal” debt levels, median growth rates for countries with public debt over roughly 90 percent of GDP are about one percent lower than otherwise; average (mean) growth rates are several percent lower. Surprisingly, the relationship between public debt and growth is remarkably similar across emerging markets and advanced economies. This is not the case for inflation. We find no systematic relationship between high debt levels and inflation for advanced economies as a group (albeit with individual country exceptions including the United States). By contrast, in emerging market countries, high public debt levels coincide with higher inflation. Our topic would seem to be a timely one. Public debt has been soaring in the wake of the recent global financial maelstrom, especially in the epicenter countries. This should not be surprising, given the experience of earlier severe financial crises. Outsized deficits and epic bank bailouts may be useful in fighting a downturn, but what is the long-run macroeconomic impact,",
"title": ""
},
{
"docid": "a261f45ef58363638b69616089386e1f",
"text": "This paper presents a new balancing control approach for regulating the center of mass position and trunk orientation of a bipedal robot in a compliant way. The controller computes a desired wrench (force and torque) required to recover the posture when an unknown external perturbation has changed the posture of the robot. This wrench is later distributed as forces at predefined contact points via a constrained optimization, which aims at achieving the desired wrench while minimizing the Euclidean norm of the contact forces. The formulation of the force distribution as an optimization problem is adopted from the grasping literature and allows to consider restrictions coming from the friction between the contact points and the ground.",
"title": ""
},
{
"docid": "fae8f50726c33390e0c49499af2509f0",
"text": "Abnormal bearer session release (i.e. bearer session drop) in cellular telecommunication networks may seriously impact the quality of experience of mobile users. The latest mobile technologies enable high granularity real-time reporting of all conditions of individual sessions, which gives rise to use data analytics methods to process and monetize this data for network optimization. One such example for analytics is Machine Learning (ML) to predict session drops well before the end of session. In this paper a novel ML method is presented that is able to predict session drops with higher accuracy than using traditional models. The method is applied and tested on live LTE data offline. The high accuracy predictor can be part of a SON function in order to eliminate the session drops or mitigate their effects.",
"title": ""
},
{
"docid": "7000ea96562204dfe2c0c23f7cdb6544",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "cb1a99cc1bb705d8ad5f26cc9a61e695",
"text": "In the smart grid system, dynamic pricing can be an efficient tool for the service provider which enables efficient and automated management of the grid. However, in practice, the lack of information about the customers' time-varying load demand and energy consumption patterns and the volatility of electricity price in the wholesale market make the implementation of dynamic pricing highly challenging. In this paper, we study a dynamic pricing problem in the smart grid system where the service provider decides the electricity price in the retail market. In order to overcome the challenges in implementing dynamic pricing, we develop a reinforcement learning algorithm. To resolve the drawbacks of the conventional reinforcement learning algorithm such as high computational complexity and low convergence speed, we propose an approximate state definition and adopt virtual experience. Numerical results show that the proposed reinforcement learning algorithm can effectively work without a priori information of the system dynamics.",
"title": ""
},
{
"docid": "f76eae1326c6767c520bc4d318b239fd",
"text": "A challenging goal of generative and developmental systems (GDS) is to effectively evolve neural networks as complex and capable as those found in nature. Two key properties of neural structures in nature are regularity and modularity. While HyperNEAT has proven capable of generating neural network connectivity patterns with regularities, its ability to evolve modularity remains in question. This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity. In particular, an extension is introduced called a Link Expression Output (HyperNEAT-LEO) that allows HyperNEAT to evolve the pattern of weights independently from the pattern of connection expression. Because HyperNEAT evolves such patterns as functions of geometry, important general topographic principles for organizing connectivity can be seeded into the initial population. For example, a key topographic concept in nature that encourages modularity is locality, that is, components of a module are located near each other. As experiments in this paper show, by seeding HyperNEAT with a bias towards local connectivity implemented through the LEO, modular structures arise naturally. Thus this paper provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.",
"title": ""
},
{
"docid": "4124c4c838d0c876f527c021a2c58358",
"text": "Early disease detection is a major challenge in agriculture field. Hence proper measures has to be taken to fight bioagressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field,which ultimately leads to crops management. Our goal is early detection of bioagressors. The paper describes a software prototype system for pest detection on the infected images of different leaves. Images of the infected leaf are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the particular plants. Then the detected part is been processed for futher feature extraction which gives general idea about pests. This proposes automatic detection and calculating area of infection on leaves of a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.",
"title": ""
},
{
"docid": "951d3f81129ecafa2d271d4398d9b3e6",
"text": "The content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for the collar-design feature extraction. A prototype of clothing image retrieval system based on relevance feedback approach and optimum-path forest algorithm is also developed to improve the query results and allows users to find clothing image of more preferred design. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.",
"title": ""
},
{
"docid": "c9c98e50a49bbc781047dc425a2d6fa1",
"text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.",
"title": ""
},
{
"docid": "87c973e92ef3affcff4dac0d0183067c",
"text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.",
"title": ""
},
{
"docid": "717dd8e3c699d6cc22ba483002ab0a6f",
"text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in realtime to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure, that integrates the active rule component into the CEP kernel, allowing finegrained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threads of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems , financial analysis , real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcareassociated infections hit 1.7 million people a year in the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were presented at The 36th International Conference on Very Large Data Bases, September 13-17, 2010, Singapore. Proceedings of the VLDB Endowment, Vol. 3, No. 2 Copyright 2010 VLDB Endowment 2150-8097/10/09... $ 10.00. United States, causing an estimated 99,000 deaths. HyReminder is a collaborated project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! 
represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing",
"title": ""
}
] |
scidocsrr
|
4aa50bf0557575b80725036d13d5a8f1
|
Flow-Based Propagators for the SEQUENCE and Related Global Constraints
|
[
{
"docid": "cdefeefa1b94254083eba499f6f502fb",
"text": "problems To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \"problem\" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = G, u, v, k is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. Encodings If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4,...} as the strings {0, 1, 10, 11, 100,...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs-all can be encoded as binary strings. Thus, a computer algorithm that \"solves\" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T (n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T (n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i I is Q(i) {0, 1}, then the solution to the concreteproblem instance e(i) {0, 1}* is also Q(i). 
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary-a string of k 1's-then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ (k) = Θ(2), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our under-standing of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out \"expensive\" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0,1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x {0, 1}*, produces as output f (x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i I , we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows. Lemma 34.1 Let Q be an abstract decision problem on an instance set I , and let e1 and e2 be polynomially related encodings on I . Then, e1(Q) P if and only if e2(Q) P. Proof We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(nk) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n), and therefore |e1(i)| = O(n), since the output of a serial computer cannot be longer than its running time. 
Solving the problem on e1(i) takes time O(|e1(i)|) = O(n), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its \"complexity,\" that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a \"standard\" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, G denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference. A formal-language framework One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001,...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000,...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by . The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 L1 and x2 L2}. The closure or Kleene star of a language L is the language L*= {ε} L L L ···, where Lk is the language obtained by",
"title": ""
}
] |
[
{
"docid": "85cabd8a0c19f5db993edd34ded95d06",
"text": "We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a “realistic” relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such conditional program generation are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on program sketches, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.",
"title": ""
},
{
"docid": "5b7588f716ebb5908d60d4f89d393523",
"text": "The research and development in the field of magnetoresistive sensors has played an important role in the last few decades. Here, the authors give an introduction to the fundamentals of the anisotropic magnetoresistive (AMR) and the giant magnetoresistive (GMR) effect as well as an overview of various types of sensors in industrial applications. In addition, the authors present their recent work in this field, ranging from sensor systems fabricated on traditional substrate materials like silicon (Si), over new fabrication techniques for magnetoresistive sensors on flexible substrates for special applications, e.g., a flexible write head for component integrated data storage, micro-stamping of sensors on arbitrary surfaces or three dimensional sensing under extreme conditions (restricted mounting space in motor air gap, high temperatures during geothermal drilling).",
"title": ""
},
{
"docid": "e706c5071b87561f08ee8f9610e41e2e",
"text": "Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been proposed to limit the information provided to the adversary by omitting probability scores, significantly impacting the utility of the provided service. In this work, we illustrate how a service provider can still provide useful, albeit misleading, class probability information, while significantly limiting the success of the attack. Our defense forces the adversary to discard the class probabilities, requiring significantly more queries before they can train a model with comparable performance. We evaluate several attack strategies, model architectures, and hyperparameters under varying adversarial models, and evaluate the efficacy of our defense against the strongest adversary. Finally, we quantify the amount of noise injected into the class probabilities to mesure the loss in utility, e.g., adding 1.26 nats per query on CIFAR-10 and 3.27 on MNIST. Our evaluation shows our defense can degrade the accuracy of the stolen model at least 20%, or require up to 64 times more queries while keeping the accuracy of the protected model almost intact.",
"title": ""
},
{
"docid": "fdbfc5bf8af1478e919153fb6cde64f3",
"text": "Software development is conducted in increasingly dynamic business environments. Organizations need the capability to develop, release and learn from software in rapid parallel cycles. The abilities to continuously deliver software, to involve users, and to collect and prioritize their feedback are necessary for software evolution. In 2014, we introduced Rugby, an agile process model with workflows for continuous delivery and feedback management, and evaluated it in university projects together with industrial clients.\n Based on Rugby's release management workflow we identified the specific needs for project-based organizations developing mobile applications. Varying characteristics and restrictions in projects teams in corporate environments impact both process and infrastructure. We found that applicability and acceptance of continuous delivery in industry depend on its adaptability. To address issues in industrial projects with respect to delivery process, infrastructure, neglected testing and continuity, we extended Rugby's workflow and made it tailorable.\n Eight projects at Capgemini, a global provider of consulting, technology and outsourcing services, applied a tailored version of the workflow. The evaluation of these projects shows anecdotal evidence that the application of the workflow significantly reduces the time required to build and deliver mobile applications in industrial projects, while at the same time increasing the number of builds and internal deliveries for feedback.",
"title": ""
},
{
"docid": "edb16d2d75261724e7382d4013235d3c",
"text": "To improve vehicle path-following performance and to reduce driver workload, a human-centered feed-forward control (HCFC) system for a vehicle steering system is proposed. To be specific, a novel dynamic control strategy for the steering ratio of vehicle steering systems that treats vehicle speed, lateral deviation, yaw error, and steering angle as the inputs and a driver's expected steering ratio as the output is developed. To determine the parameters of the proposed dynamic control strategy, drivers are classified into three types according to the level of sensitivity to errors, i.e., low, middle, and high. The proposed HCFC system offers a human-centered steering system (HCSS) with a tunable steering gain, which can assist drivers in tracking a given path with smaller steering wheel angles and change rate of the angle by adaptively adjusting steering ratio according to driver's path-following characteristics, reducing the driver's workload. A series of experiments of tracking the centerline of double lane change (DLC) are conducted in CarSim and three different types of drivers are subsequently selected to test in a portable driving simulator under a fixed-speed condition. The simulation and experiment results show that the proposed HCSS with the dynamic control strategy, as compared with the classical control strategy of steering ratio, can improve task performance by about 7% and reduce the driver's physical workload and mental workload by about 35% and 50%, respectively, when following the given path.",
"title": ""
},
{
"docid": "97f6e18ea96e73559a05444d666f306f",
"text": "The increasingly ubiquitous availability of digital and networked tools has the potential to fundamentally transform the teaching and learning process. Research on the instructional uses of technology, however, has revealed that teachers often lack the knowledge to successfully integrate technology in their teaching and their attempts tend to be limited in scope, variety, and depth. Thus, technology is used more as “ef fi ciency aids and extension devices” (McCormick & Scrimshaw, 2001 , p. 31) rather than as tools that can “transform the nature of a subject at the most fundamental level” (p. 47). One way in which researchers have tried to better understand how teachers may better use technology in their classrooms has focused on the kinds of knowledge that teachers require Abstract In this chapter, we introduce a framework, called technological pedagogical content knowledge (or TPACK for short), that describes the kinds of knowledge needed by a teacher for effective technology integration. The TPACK framework emphasizes how the connections among teachers’ understanding of content, pedagogy, and technology interact with one another to produce effective teaching. Even as a relatively new framework, the TPACK framework has signi fi cantly in fl uenced theory, research, and practice in teacher education and teacher professional development. In this chapter, we describe the theoretical underpinnings of the framework, and explain the relationship between TPACK and related constructs in the educational technology literature. We outline the various approaches teacher educators have used to develop TPACK in preand in-service teachers, and the theoretical and practical issues that these professional development efforts have illuminated. We then review the widely varying approaches to measuring TPACK, with an emphasis on the interaction between form and function of the assessment, and resulting reliability and validity outcomes for the various approaches. We conclude with a summary of the key theoretical, pedagogical, and methodological issues related to TPACK, and suggest future directions for researchers, practitioners, and teacher educators.",
"title": ""
},
{
"docid": "79b26ac97deb39c4de11a87604003f26",
"text": "This paper presents a novel wheel-track-Leg hybrid Locomotion Mechanism that has a compact structure. Compared to most robot wheels that have a rigid round rim, the transformable wheel with a flexible rim can switch to track mode for higher efficiency locomotion on swampy terrain or leg mode for better over-obstacle capability on rugged road. In detail, the wheel rim of this robot is cut into four end-to-end circles to make it capable of transforming between a round circle with a flat ring (just like “O” and “∞”) to change the contact type between transformable wheels with the ground. The transformation principle and constraint conditions between different locomotion modes are explained. The driving methods and locomotion strategies on various terrains of the robot are analyzed. Meanwhile, an initial experiment is conducted to verify the design.",
"title": ""
},
{
"docid": "b18e65ad7982944ef9ad213d98d45dad",
"text": "This paper provides an overview of the physical layer specification of Advanced Television Systems Committee (ATSC) 3.0, the next-generation digital terrestrial broadcasting standard. ATSC 3.0 does not have any backwards-compatibility constraint with existing ATSC standards, and it uses orthogonal frequency division multiplexing-based waveforms along with powerful low-density parity check (LDPC) forward error correction codes similar to existing state-of-the-art. However, it introduces many new technological features such as 2-D non-uniform constellations, improved and ultra-robust LDPC codes, power-based layered division multiplexing to efficiently provide mobile and fixed services in the same radio frequency (RF) channel, as well as a novel frequency pre-distortion multiple-input single-output antenna scheme. ATSC 3.0 also allows bonding of two RF channels to increase the service peak data rate and to exploit inter-RF channel frequency diversity, and to employ dual-polarized multiple-input multiple-output antenna system. Furthermore, ATSC 3.0 provides great flexibility in terms of configuration parameters (e.g., 12 coding rates, 6 modulation orders, 16 pilot patterns, 12 guard intervals, and 2 time interleavers), and also a very flexible data multiplexing scheme using time, frequency, and power dimensions. As a consequence, ATSC 3.0 not only improves the spectral efficiency and robustness well beyond the first generation ATSC broadcast television standard, but also it is positioned to become the reference terrestrial broadcasting technology worldwide due to its unprecedented performance and flexibility. Another key aspect of ATSC 3.0 is its extensible signaling, which will allow including new technologies in the future without disrupting ATSC 3.0 services. This paper provides an overview of the physical layer technologies of ATSC 3.0, covering the ATSC A/321 standard that describes the so-called bootstrap, which is the universal entry point to an ATSC 3.0 signal, and the ATSC A/322 standard that describes the physical layer downlink signals after the bootstrap. A summary comparison between ATSC 3.0 and DVB-T2 is also provided.",
"title": ""
},
{
"docid": "ab400c41db805b1574e8db80f72e47bd",
"text": "Radiation from printed millimeter-wave antennas integrated in mobile terminals is affected by surface currents on chassis, guided waves trapped in dielectric layers, superstrates, and the user’s hand, making mobile antenna design for 5G communication challenging. In this paper, four canonical types of printed 28-GHz antenna elements are integrated in a 5G mobile terminal mock-up. Different kinds of terminal housing effects are examined separately, and the terminal housing effects are also diagnosed through equivalent currents by using the inverse source technique. To account for the terminal housing effects on a beam-scanning antenna subarray, we propose the effective beam-scanning efficiency to evaluate its coverage performance. This paper presents the detailed analysis, results, and new concepts regarding the terminal housing effects, and thereby provides valuable insight into the practical 5G mobile antenna design and radiation performance characterization.",
"title": ""
},
{
"docid": "71ecb98b204c2ca217ea7454110305ee",
"text": "Despite the recent advances in manufacturing automation, the role of human involvement in manufacturing systems is still regarded as a key factor in maintaining higher adaptability and flexibility. In general, however, modeling of human operators in manufacturing system design still considers human as a physical resource represented in statistical terms. In this paper, we propose a human in the loop (HIL) approach to investigate the operator’s choice complexity in a mixed model assembly line. The HIL simulation allows humans to become a core component of the simulation, therefore influencing the outcome in a way that is often impossible to reproduce via traditional simulation methods. At the initial stage, we identify the significant features affecting the choice complexity. The selected features are in turn used to build a regression model, in which human reaction time with regard to different degree of choice complexity serves as a response variable used to train and test the model. The proposed method, along with an illustrative case study, not only serves as a tool to quantitatively assess and predict the impact of choice complexity on operator’s effectiveness, but also provides an insight into how complexity can be mitigated without affecting the overall manufacturing throughput.",
"title": ""
},
{
"docid": "f00724247e49fcd372aec65e1b3c1855",
"text": "Bioconversion of lignocellulose by microbial fermentation is typically preceded by an acidic thermochemical pretreatment step designed to facilitate enzymatic hydrolysis of cellulose. Substances formed during the pretreatment of the lignocellulosic feedstock inhibit enzymatic hydrolysis as well as microbial fermentation steps. This review focuses on inhibitors from lignocellulosic feedstocks and how conditioning of slurries and hydrolysates can be used to alleviate inhibition problems. Novel developments in the area include chemical in-situ detoxification by using reducing agents, and methods that improve the performance of both enzymatic and microbial biocatalysts.",
"title": ""
},
{
"docid": "070ba5ca0e3ee7993e43af1df8b27f49",
"text": "OBJECTIVE\nThis study aimed to evaluate the reproducibility of a new grading system for lumbar foraminal stenosis.\n\n\nMATERIALS AND METHODS\nFour grades were developed for lumbar foraminal stenosis on the basis of sagittal MRI. Grade 0 refers to the absence of foraminal stenosis; grade 1 refers to mild foraminal stenosis showing perineural fat obliteration in the two opposing directions, vertical or transverse; grade 2 refers to moderate foraminal stenosis showing perineural fat obliteration in the four directions without morphologic change, both vertical and transverse directions; and grade 3 refers to severe foraminal stenosis showing nerve root collapse or morphologic change. A total of 576 foramina in 96 patients were analyzed (from L3-L4 to L5-S1). Two experienced radiologists independently assessed the sagittal MR images. Interobserver agreement between the two radiologists and intraobserver agreement by one reader were analyzed using kappa statistics.\n\n\nRESULTS\nAccording to reader 1, grade 1 foraminal stenosis was found in 33 foramina, grade 2 in six, and grade 3 in seven. According to reader 2, grade 1 foraminal stenosis was found in 32 foramina, grade 2 in six, and grade 3 in eight. Interobserver agreement in the grading of foraminal stenosis between the two readers was found to be nearly perfect (kappa value: right L3-L4, 1.0; left L3-L4, 0.905; right L4-L5, 0.929; left L4-L5, 0.942; right L5-S1, 0.919; and left L5-S1, 0.909). In intraobserver agreement by reader 1, grade 1 foraminal stenosis was found in 34 foramina, grade 2 in eight, and grade 3 in seven. Intraobserver agreement in the grading of foraminal stenosis was also found to be nearly perfect (kappa value: right L3-L4, 0.883; left L3-L4, 1.00; right L4-L5, 0.957; left L4-L5, 0.885; right L5-S1, 0.800; and left L5-S1, 0.905).\n\n\nCONCLUSION\nThe new grading system for foraminal stenosis of the lumbar spine showed nearly perfect interobserver and intraobserver agreement and would be helpful for clinical study and routine practice.",
"title": ""
},
{
"docid": "42a79b084dd18dafbe69aa3f0778158a",
"text": "This paper introduces an approach for dense 3D reconstruc7 7 tion from unregistered Internet-scale photo collections with about 3 mil8 8 lion of images within the span of a day on a single PC (“cloudless”). Our 9 9 method advances image clustering, stereo, stereo fusion and structure 10 10 from motion to achieve high computational performance. We leverage 11 11 geometric and appearance constraints to obtain a highly parallel imple12 12 mentation on modern graphics processors and multi-core architectures. 13 13 This leads to two orders of magnitude higher performance on an order 14 14 of magnitude larger dataset than competing state-of-the-art approaches. 15 15",
"title": ""
},
{
"docid": "d0148c8d12ac5bdb4afda5d702481180",
"text": "The recently proposed distributional approach to reinforcement learning (DiRL) is centered on learning the distribution of the reward-to-go, often referred to as the value distribution. In this work, we show that the distributional Bellman equation, which drives DiRL methods, is equivalent to a generative adversarial network (GAN) model. In this formulation, DiRL can be seen as learning a deep generative model of the value distribution, driven by the discrepancy between the distribution of the current value, and the distribution of the sum of current reward and next value. We use this insight to propose a GAN-based approach to DiRL, which leverages the strengths of GANs in learning distributions of highdimensional data. In particular, we show that our GAN approach can be used for DiRL with multivariate rewards, an important setting which cannot be tackled with prior methods. The multivariate setting also allows us to unify learning the distribution of values and state transitions, and we exploit this idea to devise a novel exploration method that is driven by the discrepancy in estimating both values and states.",
"title": ""
},
{
"docid": "c5bb89954e511fcfc7820338d2a7d745",
"text": "Microblogging is a communication paradigm in which users post bits of information (brief text updates or micro media such as photos, video or audio clips) that are visible by their communities. When a user finds a “meme” of another user interesting, she can eventually repost it, thus allowing memes to propagate virally trough a social network. In this paper we introduce the meme ranking problem, as the problem of selecting which k memes (among the ones posted their contacts) to show to users when they log into the system. The objective is to maximize the overall activity of the network, that is, the total number of reposts that occur. We deeply characterize the problem showing that not only exact solutions are unfeasible, but also approximated solutions are prohibitive to be adopted in an on-line setting. Therefore we devise a set of heuristics and we compare them trough an extensive simulation based on the real-world Yahoo! Meme social graph, and with parameters learnt from real logs of meme propagations. Our experimentation demonstrates the effectiveness and feasibility of these methods.",
"title": ""
},
{
"docid": "b12049aac966497b17e075c2467151dd",
"text": "IV HLA-G and HLA-E alleles and RPL HLA-G and HLA-E gene polymorphism in patients with Idiopathic Recurrent Pregnancy Loss in Gaza strip",
"title": ""
},
{
"docid": "45ef23f40fd4241b58b8cb0810695785",
"text": "Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces and also to reach higher level of height for pick and place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most common modeling techniques have been accomplished by researchers utilizing the basic Newton's Laws of motion and some have used 3D tools to model the system where the models are much more theoretical and quite far from the practical implementation. This article is aimed at closing the gap between the conventional mathematical modeling approaches where the integrated 3D modeling approach with validation on the actual hardware implementation was conducted. To achieve this, both nonlinear and a linearized model in terms of state space model were obtained from the mathematical model of the system for analysis and, thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.",
"title": ""
},
{
"docid": "96af91aed1c131f1c8c9d8076ed5835d",
"text": "Hedge funds are unique among investment vehicles in that they are relatively unconstrained in their use of derivative investments, short-selling, and leverage. This flexibility allows investment managers to span a broad spectrum of distinct risks, such as momentum and option-like investments. Taking a revealed preference approach, we find that Capital Asset Pricing Model (CAPM) alpha explains hedge fund flows better than alphas from more sophisticated models. This result suggests that investors pool together sophisticated model alpha with returns from exposures to traditional and exotic risks. We decompose performance into traditional and exotic risk components and find that while investors chase both components, they place greater relative emphasis on returns associated with exotic risk exposures that can only be obtained through hedge funds. However, we find little evidence of persistence in performance from traditional or exotic risks, which cautions against investors’ practice of seeking out risk exposures following periods of recent success.",
"title": ""
},
{
"docid": "7f39974c1eb5dcecf2383ec9cd5abc42",
"text": "Edited volumes are an imperfect format for the presentation of ideas, not least because their goals vary. Sometimes they aim simply to survey the field, at other times to synthesize and advance the field. I prefer the former for disciplines that by their nature are not disposed to achieve definitive statements (philosophy, for example). A volume on an empirical topic, however, by my judgment falls short if it closes without firm conclusions, if not on the topic itself, at least on the state of the art of its study. Facial Attractiveness does fall short of this standard, but not for lack of serious effort (especially appreciated are such features as the summary table in Chapter 5). Although by any measure an excellent and thorough review of the major strands of its topic, the volume’s authors are often in such direct conflict that the reader is disappointed that the editors do not, in the end, provide sufficient guidance about where the most productive research avenues lie. Every contribution is persuasive, but as they cannot all be correct, who is to win the day? An obvious place to begin is with the question, What is “attractiveness”? Most writers seem unaware of the problem, and how it might impact their research methodology. What, the reader wants to know, is the most defensible conceptualization of the focal phenomenon? Often an author focuses explicitly on the aesthetic dimension of “attractive,” treating it as a synonym for “beauty.” A recurring phrase in the book is that “beauty is in the eye of the beholder,” with the authors undertaking to argue whether this standard accurately describes social reality. They reach contradictory conclusions. Chapter 1 (by Adam Rubenstein et al.) finds the maxim to be a “myth” which, by chapter’s end, is presumably dispelled; Anthony Little and his co-authors in Chapter 3, however, view their contribution as “help[ing] to place beauty back into the eye of the beholder.” Other chapters take intermediate positions. Besides the aesthetic, “attractive” can refer to raw sexual appeal, or to more long-term relationship evaluations. Which kind of attractiveness one intends will determine the proper methodology to use, and thereby impact the likely experimental results. As only one example, if one intends to investigate aesthetic attraction, the sexual orientation of the judges does not matter, whereas it matters a great deal if one intends to investigate sexual or relationship attraction. Yet no study discussed in these",
"title": ""
},
{
"docid": "e6c7713b9ff08aa01d98c9fec77ebf7a",
"text": "Everyday many users purchases product, book travel tickets, buy goods and services through web. Users also share their views about product, hotel, news, and topic on web in the form of reviews, blogs, comments etc. Many users read review information given on web to take decisions such as buying products, watching movie, going to restaurant etc. Reviews contain user's opinion about product, event or topic. It is difficult for web users to read and understand contents from large number of reviews. Important and useful information can be extracted from reviews through opinion mining and summarization process. We presented machine learning and Senti Word Net based method for opinion mining from hotel reviews and sentence relevance score based method for opinion summarization of hotel reviews. We obtained about 87% of accuracy of hotel review classification as positive or negative review by machine learning method. The classified and summarized hotel review information helps web users to understand review contents easily in a short time.",
"title": ""
}
] |
scidocsrr
|
7499b5a03a3196bd244aa6c21bf70e86
|
Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks
|
[
{
"docid": "5de0fcb624f4c14b1a0fe43c60d7d4ad",
"text": "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
"title": ""
},
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
},
{
"docid": "8860af067ed1af9aba072d85f3e6171b",
"text": "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.",
"title": ""
}
] |
[
{
"docid": "35da724255bbceb859d01ccaa0dec3b1",
"text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.",
"title": ""
},
{
"docid": "cc4c44d844dae98cc8c1f7681fd68357",
"text": "Sarcasm is considered one of the most difficult problem in sentiment analysis. In our observation on Indonesian social media, for certain topics, people tend to criticize something using sarcasm. Here, we proposed two additional features to detect sarcasm after a common sentiment analysis is conducted. The features are the negativity information and the number of interjection words. We also employed translated SentiWordNet in the sentiment classification. All the classifications were conducted with machine learning algorithms. The experimental results showed that the additional features are quite effective in the sarcasm detection.",
"title": ""
},
{
"docid": "01ccb35abf3eed71191dc8638e58f257",
"text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.",
"title": ""
},
{
"docid": "348115a5dddbc2bcdcf5552b711e82c0",
"text": "Enterococci are Gram-positive, catalase-negative, non-spore-forming, facultative anaerobic bacteria, which usually inhabit the alimentary tract of humans in addition to being isolated from environmental and animal sources. They are able to survive a range of stresses and hostile environments, including those of extreme temperature (5-65 degrees C), pH (4.5-10.0) and high NaCl concentration, enabling them to colonize a wide range of niches. Virulence factors of enterococci include the extracellular protein Esp and aggregation substances (Agg), both of which aid in colonization of the host. The nosocomial pathogenicity of enterococci has emerged in recent years, as well as increasing resistance to glycopeptide antibiotics. Understanding the ecology, epidemiology and virulence of Enterococcus species is important for limiting urinary tract infections, hepatobiliary sepsis, endocarditis, surgical wound infection, bacteraemia and neonatal sepsis, and also stemming the further development of antibiotic resistance.",
"title": ""
},
{
"docid": "dc92e3feb9ea6a20d73962c0905f623b",
"text": "Software maintenance consumes around 70% of the software life cycle. Improving software maintainability could save software developers significant time and money. This paper examines whether the pattern of dependency injection significantly reduces dependencies of modules in a piece of software, therefore making the software more maintainable. This hypothesis is tested with 20 sets of open source projects from sourceforge.net, where each set contains one project that uses the pattern of dependency injection and one similar project that does not use the pattern. The extent of the dependency injection use in each project is measured by a new Number of DIs metric created specifically for this analysis. Maintainability is measured using coupling and cohesion metrics on each project, then performing statistical analysis on the acquired results. After completing the analysis, no correlation was evident between the use of dependency injection and coupling and cohesion numbers. However, a trend towards lower coupling numbers in projects with a dependency injection count of 10% or more was observed.",
"title": ""
},
{
"docid": "c0daac9d635db0b833c079d6339111c0",
"text": "Technology is ever changing and ever growing. One of the newest developing technologies is augmented reality (AR), which can be applied to many different existing technologies, such as: computers, tablets, and smartphones. AR technology can also be utilized through wearable components, for example, glasses. Throughout this literature review on AR the following aspects are discussed at length: research explored, theoretical foundations, applications in education, challenges, reactions, and implications. Several different types of AR devices and applications are discussed at length, and an in-depth analysis is done on several studies that have implemented AR technology in an educational setting. This review focuses on how AR technology can be applied, the issues surrounding the use of this technology, viewpoints of those who have worked with AR applications; it also identifies multiple areas to be explored in future research.",
"title": ""
},
{
"docid": "06605d7a6538346f3bb0771fd3c92c12",
"text": "Measurements show that the IGBT is able to clamp the collector-emitter voltage to a certain value at short-circuit turn-off despite a very low gate turn-off resistor in combination with a high parasitic inductance is applied. The IGBT itself reduces the turn-off diC/dt by avalanche injection. However, device destructions during fast turn-off were observed which cannot be linked with an overvoltage failure mode. Measurements and semiconductor simulations of high-voltage IGBTs explain the self-clamping mechanism in detail. Possible failures which can be connected with filamentation processes are described. Options for improving the IGBT robustness during short-circuit turn-off are discussed.",
"title": ""
},
{
"docid": "7812d8ba8612aebb3c690b73174dbcb5",
"text": "An algorithm for creating smooth spline surfaces over irregular meshes is presented. The algorithm is a generalization of quadratic B-splines; that is, if a mesh is (locally) regular, the resulting surface is equivalent to a B-spline. Otherwise, the resulting surface has a degree 3 or 4 parametric polynomial representation. A construction is given for representing the surface as a collection of tangent plane continuous triangular Be´zier patches. The algorithm is simple, efficient, and generates aesthetically pleasing shapes.",
"title": ""
},
{
"docid": "687dbb03f675f0bf70e6defa9588ae23",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "ee61181cb9625868526eb608db0c58b4",
"text": "The primary focus of machine learning has traditionally been on learning from data assumed to be sufficient and representative of the underlying fixed, yet unknown, distribution. Such restrictions on the problem domain paved the way for development of elegant algorithms with theoretically provable performance guarantees. As is often the case, however, real-world problems rarely fit neatly into such restricted models. For instance class distributions are often skewed, resulting in the “class imbalance” problem. Data drawn from non-stationary distributions is also common in real-world applications, resulting in the “concept drift” or “non-stationary learning” problem which is often associated with streaming data scenarios. Recently, these problems have independently experienced increased research attention, however, the combined problem of addressing all of the above mentioned issues has enjoyed relatively little research. If the ultimate goal of intelligent machine learning algorithms is to be able to address a wide spectrum of real-world scenarios, then the need for a general framework for learning from, and adapting to, a non-stationary environment that may introduce imbalanced data can be hardly overstated. In this paper, we first present an overview of each of these challenging areas, followed by a comprehensive review of recent research for developing such a general framework.",
"title": ""
},
{
"docid": "4531b034f7644a6f5e925cda8cad875e",
"text": "This paper considers global optimization with a black-box unknown objective function that can be non-convex and non-differentiable. Such a difficult optimization problem arises in many real-world applications, such as parameter tuning in machine learning, engineering design problem, and planning with a complex physics simulator. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), to aim for both fast convergence in practice and finite-time error bound in theory. The advantage and usage of the new algorithm are illustrated via theoretical analysis and an experiment conducted with 11 benchmark test functions. Further, we modify the LOGO algorithm to specifically solve a planning problem via policy search with continuous state/action space and long time horizon while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The result of the application study demonstrates the practical utility of our method.",
"title": ""
},
{
"docid": "b752b3c508f93a1e4b1b783fc45f8cc2",
"text": "Deep learning frameworks have achieved overwhelming superiority in many fields of pattern recognition in recent years. However, the application of deep learning frameworks in image steganalysis is still in its initial stage. In this paper we firstly proved that the convolution phase and the quantization & truncation phase are not learnable for deep neural networks. Then on the basis of the theoretical analysis, we proposed a new hybrid deep-learning framework for JPEG steganalysis, which is made up of two hybrid parts. The first part is hand-crafted. It corresponds to the the convolution phase and the quantization & truncation phase of the rich models. The second part is a compound deep neural network containing three CNN subnets in which the model parameters are learnable during the training procedure. We have conducted extensive experiments on large-scale dataset extracted from ImageNet. Primary dataset used in our experiments contains one million images, while our largest dataset contains ten million images. The large-scale experiments show that our proposed framework outperforms all other steganalytic models (hand-crafted or deeplearning based) in the literature. Furthermore, the experimental results revealed that our proposed framework possesses some great features, including well attacking-target transfer ability and insensitive to altered JPEG block artifact.",
"title": ""
},
{
"docid": "e48da0cf3a09b0fd80f0c2c01427a931",
"text": "Timely analysis of information in cybersecurity necessitates automated information extraction from unstructured text. Unfortunately, state-of-the-art extraction methods require training data, which is unavailable in the cyber-security domain. To avoid the arduous task of handlabeling data, we develop a very precise method to automatically label text from several data sources by leveraging article-specific structured data and provide public access to corpus annotated with cyber-security entities. We then prototype a maximum entropy model that processes this corpus of auto-labeled text to label new sentences and present results showing the Collins Perceptron outperforms the MLE with LBFGS and OWL-QN optimization for parameter fitting. The main contribution of this paper is an automated technique for creating a training corpus from text related to a database. As a multitude of domains can benefit from automated extraction of domain-specific concepts for which no labeled data is available, we hope our solution is widely applicable.",
"title": ""
},
{
"docid": "07b362c7f6e941513cfbafce1ba87db1",
"text": "ResearchGate is increasingly used by scholars to upload the full-text of their articles and make them freely available for everyone. This study aims to investigate the extent to which ResearchGate members as authors of journal articles comply with publishers’ copyright policies when they self-archive full-text of their articles on ResearchGate. A random sample of 500 English journal articles available as full-text on ResearchGate were investigated. 108 articles (21.6%) were open access (OA) published in OA journals or hybrid journals. Of the remaining 392 articles, 61 (15.6%) were preprint, 24 (6.1%) were post-print and 307 (78.3%) were published (publisher) PDF. The key finding was that 201 (51.3%) out of 392 non-OA articles infringed the copyright and were non-compliant with publishers’ policy. While 88.3% of journals allowed some form of self-archiving (SHERPA/RoMEO green, blue or yellow journals), the majority of non-compliant cases (97.5%) occurred when authors self-archived publishers’ PDF files (final published version). This indicates that authors infringe copyright most of the time not because they are not allowed to self-archive, but because they use the wrong version, which might imply their lack of understanding of copyright policies and/or complexity and diversity of policies.",
"title": ""
},
{
"docid": "ddeb70a9abd07b113c8c7bfcf2f535b6",
"text": "Implementation of authentic leadership can affect not only the nursing workforce and the profession but the healthcare delivery system and society as a whole. Creating a healthy work environment for nursing practice is crucial to maintain an adequate nursing workforce; the stressful nature of the profession often leads to burnout, disability, and high absenteeism and ultimately contributes to the escalating shortage of nurses. Leaders play a pivotal role in retention of nurses by shaping the healthcare practice environment to produce quality outcomes for staff nurses and patients. Few guidelines are available, however, for creating and sustaining the critical elements of a healthy work environment. In 2005, the American Association of Critical-Care Nurses released a landmark publication specifying 6 standards (skilled communication, true collaboration, effective decision making, appropriate staffing, meaningful recognition, and authentic leadership) necessary to establish and sustain healthy work environments in healthcare. Authentic leadership was described as the \"glue\" needed to hold together a healthy work environment. Now, the roles and relationships of authentic leaders in the healthy work environment are clarified as follows: An expanded definition of authentic leadership and its attributes (eg, genuineness, trustworthiness, reliability, compassion, and believability) is presented. Mechanisms by which authentic leaders can create healthy work environments for practice (eg, engaging employees in the work environment to promote positive behaviors) are described. A practical guide on how to become an authentic leader is advanced. A research agenda to advance the study of authentic leadership in nursing practice through collaboration between nursing and business is proposed.",
"title": ""
},
{
"docid": "60b21a7b9f0f52f48ae2830db600fa24",
"text": "The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms. This paper provides a preliminary empirical evaluation of several multiarmed bandit algorithms. It also describes and analyzes a new algorithm, Poker (Price Of Knowledge and Estimated Reward) whose performance compares favorably to that of other existing algorithms in several experiments. One remarkable outcome of our experiments is that the most naive approach, the -greedy strategy, proves to be often hard to beat.",
"title": ""
},
{
"docid": "38cb7fa09dc3d350971ffd43087d372c",
"text": "Objectives. The purpose of this study was to describe changes in critical thinking ability and disposition over a 4-year Doctor of Pharmacy curriculum. Methods. Two standardized tests, the California Critical Thinking Skills Test (CCTST) and California Critical Thinking Dispositions Inventory (CCTDI) were used to follow the development of critical thinking ability and disposition during a 4-year professional pharmacy program. The tests were given to all pharmacy students admitted to the PharmD program at the College of Pharmacy of North Dakota State University (NDSU) on the first day of classes, beginning in 1997, and repeated late in the spring semester each year thereafter. Results. Increases in CCTST scores were noted as students progressed through each year of the curriculum, with a 14% total increase by graduation (P< 0.001). That the increase was from a testing effect is unlikely because students who took a different version at graduation scored no differently than students who took the original version. There was no increase in CCTDI score. Conclusion. The generic critical thinking ability of pharmacy students at NDSU’s College of Pharmacy appeared to increase over the course of the program, while their motivation to think critically did not appear to increase.",
"title": ""
},
{
"docid": "5d9106a06f606cefb3b24fb14c72d41a",
"text": "Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs. In this paper, we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-theart relation extraction models when such clues are applicable to the datasets. And, we find that the clues learnt automatically from existing knowledge bases perform comparably to those refined by human.",
"title": ""
},
{
"docid": "0b72a85c7ae06e0ad63f2966d56d4d2a",
"text": "Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification. Such alignment reduces undesired variability due to factors such as pose, while only requiring weak supervision in the form of poorly aligned examples. However, prior work on unsupervised alignment of complex, real-world images has required the careful selection of feature representation based on hand-crafted image descriptors, in order to achieve an appropriate, smooth optimization landscape. In this paper, we instead propose a novel combination of unsupervised joint alignment with unsupervised feature learning. Specifically, we incorporate deep learning into the congealing alignment framework. Through deep learning, we obtain features that can represent the image at differing resolutions based on network depth, and that are tuned to the statistics of the specific data being aligned. In addition, we modify the learning algorithm for the restricted Boltzmann machine by incorporating a group sparsity penalty, leading to a topographic organization of the learned filters and improving subsequent alignment results. We apply our method to the Labeled Faces in the Wild database (LFW). Using the aligned images produced by our proposed unsupervised algorithm, we achieve higher accuracy in face verification compared to prior work in both unsupervised and supervised alignment. We also match the accuracy for the best available commercial method.",
"title": ""
},
{
"docid": "939b2faa63e24c0f303b823481682c4c",
"text": "Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion.",
"title": ""
}
] |
scidocsrr
|
30b65372568a42a27adee77a0e0fed25
|
Incentives for Mobile Crowd Sensing: A Survey
|
[
{
"docid": "acdcdae606f9c046aab912075d4ec609",
"text": "Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SEQTGREEDY, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.",
"title": ""
},
{
"docid": "a1367b21acfebfe35edf541cdc6e3f48",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "382ed9f0bbc8492d6aa10917dd3a53d0",
"text": "Can WiFi signals be used for sensing purpose? The growing PHY layer capabilities of WiFi has made it possible to reuse WiFi signals for both communication and sensing. Sensing via WiFi would enable remote sensing without wearable sensors, simultaneous perception and data transmission without extra communication infrastructure, and contactless sensing in privacy-preserving mode. Due to the popularity of WiFi devices and the ubiquitous deployment of WiFi networks, WiFi-based sensing networks, if fully connected, would potentially rank as one of the world’s largest wireless sensor networks. Yet the concept of wireless and sensorless sensing is not the simple combination of WiFi and radar. It seeks breakthroughs from dedicated radar systems, and aims to balance between low cost and high accuracy, to meet the rising demand for pervasive environment perception in everyday life. Despite increasing research interest, wireless sensing is still in its infancy. Through introductions on basic principles and working prototypes, we review the feasibilities and limitations of wireless, sensorless, and contactless sensing via WiFi. We envision this article as a brief primer on wireless sensing for interested readers to explore this open and largely unexplored field and create next-generation wireless and mobile computing applications.",
"title": ""
},
{
"docid": "bdadf0088654060b3f1c749ead0eea6e",
"text": "This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.",
"title": ""
}
] |
[
{
"docid": "44b71e1429f731cc2d91f919182f95a4",
"text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.",
"title": ""
},
{
"docid": "ffc2079d68489ea7fae9f55ffd288018",
"text": "Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.",
"title": ""
},
{
"docid": "b324860905b6d8c4b4a8429d53f2543d",
"text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.",
"title": ""
},
{
"docid": "78f1b3a8b9aeff9fb860b46d6a2d8eab",
"text": "We study the possibility to extend the concept of linguistic data summaries employing the notion of bipolarity. Yager's linguistic summaries may be derived using a fuzzy linguistic querying interface. We look for a similar analogy between bipolar queries and the extended form of linguistic summaries. The general concept of bipolar query, and its special interpretation are recalled, which turns out to be applicable to accomplish our goal. Some preliminary results are presented and possible directions of further research are pointed out.",
"title": ""
},
{
"docid": "7c81ddf6b7e6853ac1d964f1c0accd40",
"text": "DSM-5 distinguishes between paraphilias and paraphilic disorders. Paraphilias are defined as atypical, yet not necessarily disordered, sexual practices. Paraphilic disorders are instead diseases, which include distress, impairment in functioning, or entail risk of harm one's self or others. Hence, DSM-5 new approach to paraphilias demedicalizes and destigmatizes unusual sexual behaviors, provided they are not distressing or detrimental to self or others. Asphyxiophilia, a dangerous and potentially deadly form of sexual masochism involving sexual arousal by oxygen deprivation, are clearly described as disorders. Although autoerotic asphyxia has been associated with estimated mortality rates ranging from 250 to 1000 deaths per year in the United States, in Italy, knowledge on this condition is very poor. Episodes of death caused by autoerotic asphyxia seem to be underestimated because it often can be confounded with suicide cases, particularly in the Italian context where family members of the victim often try to disguise autoerotic behaviors of the victims. The current paper provides a review on sexual masochism disorder with asphyxiophilia and discusses one specific case as an example to examine those conditions that may or may not influence the likelihood that death from autoerotic asphyxia be erroneously reported as suicide or accidental injury.",
"title": ""
},
{
"docid": "d822157e1fd65e8ec6da4601deb65b06",
"text": "Bartholin's duct cysts and gland abscesses are common problems in women of reproductive age. Bartholin's glands are located bilaterally at the posterior introitus and drain through ducts that empty into the vestibule at approximately the 4 o'clock and 8 o'clock positions. These normally pea-sized glands are palpable only if the duct becomes cystic or a gland abscess develops. The differential diagnosis includes cystic and solid lesions of the vulva, such as epidermal inclusion cyst, Skene's duct cyst, hidradenoma papilliferum, and lipoma. The goal of management is to preserve the gland and its function if possible. Office-based procedures include insertion of a Word catheter for a duct cyst or gland abscess, and marsupialization of a cyst; marsupialization should not be used to treat a gland abscess. Broad-spectrum antibiotic therapy is warranted only when cellulitis is present. Excisional biopsy is reserved for use in ruling out adenocarcinoma in menopausal or perimenopausal women with an irregular, nodular Bartholin's gland mass.",
"title": ""
},
{
"docid": "40555c2dc50a099ff129f60631f59c0d",
"text": "As new technologies and information delivery systems emerge, the way in which individuals search for information to support research, teaching, and creative activities is changing. To understand different aspects of researchers’ information-seeking behavior, this article surveyed 2,063 academic researchers in natural science, engineering, and medical science from five research universities in the United States. A Web-based, in-depth questionnaire was designed to quantify researchers’ information searching, information use, and information storage behaviors. Descriptive statistics are reported.",
"title": ""
},
{
"docid": "cb85db604bf21751766daf3751dd73bd",
"text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.",
"title": ""
},
{
"docid": "1705ba479a7ff33eef46e0102d4d4dd0",
"text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is signi?cantly less time consuming than pure modelbased approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.",
"title": ""
},
{
"docid": "5a46d347e83aec7624dde84ecdd5302c",
"text": "This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014).",
"title": ""
},
{
"docid": "057efe6414f7a38f2c8580f8f507c9d0",
"text": "Film and television play an important role in popular culture. Their study, however, often requires watching and annotating video, a time-consuming process too expensive to run at scale. In this paper we study the evolution of different roles over time at a large scale by using media database cast lists. In particular, we focus on the gender distribution of those roles and how this changes over time. We compare real-life employment gender distributions to our web-mediated onscreen gender data and also investigate how gender role biases differ between film and television. We propose that these methodologies are a useful complement to traditional analysis and allow researchers to explore onscreen gender depictions using online evidence.",
"title": ""
},
{
"docid": "334a7f34bca3452bb472d9071705c2bc",
"text": "This paper addresses the analysis of oscillator phase-noise effects on the self-interference cancellation capability of full-duplex direct-conversion radio transceivers. Closed-form solutions are derived for the power of the residual self-interference stemming from phase noise in two alternative cases of having either independent oscillators or the same oscillator at the transmitter and receiver chains of the full-duplex transceiver. The results show that phase noise has a severe effect on self-interference cancellation in both of the considered cases, and that by using the common oscillator in upconversion and downconversion results in clearly lower residual self-interference levels. The results also show that it is in general vital to use high quality oscillators in full-duplex transceivers, or have some means for phase noise estimation and mitigation in order to suppress its effects. One of the main findings is that in practical scenarios the subcarrier-wise phase-noise spread of the multipath components of the self-interference channel causes most of the residual phase-noise effect when high amounts of self-interference cancellation is desired.",
"title": ""
},
{
"docid": "4122d900e0f527d4e9ed1005a68b95bf",
"text": "We present a method that learns to tell rear signals from a number of frames using a deep learning framework. The proposed framework extracts spatial features with a convolution neural network (CNN), and then applies a long short term memory (LSTM) network to learn the long-term dependencies. The brake signal classifier is trained using RGB frames, while the turn signal is recognized via a two-step localization approach. The two separate classifiers are learned to recognize the static brake signals and the dynamic turn signals. As a result, our recognition system can recognize 8 different rear signals via the combined two classifiers in real-world traffic scenes. Experimental results show that our method is able to obtain more accurate predictions than using only the CNN to classify rear signals with time sequence inputs.",
"title": ""
},
{
"docid": "f31b3c4a2a8f3f05c3391deb1660ce75",
"text": "In the field of providing mobility for the elderly or disabled the aspect of dealing with stairs continues largely unresolved. This paper focuses on presenting continued development of the “Nagasaki Stairclimber”, a duel section tracked wheelchair capable of negotiating the large number of twisting and irregular stairs typically encounted by the residents living on the slopes that surround the Nagasaki harbor. Recent developments include an auto guidance system, auto leveling of the chair angle and active control of the frontrear track angle.",
"title": ""
},
{
"docid": "a3f5d2fb8bfa71b6f974a871a4ae2e5f",
"text": "Recent years have witnessed the popularity of using recurrent neural network (RNN) for action recognition in videos. However, videos are of high dimensionality and contain rich human dynamics with various motion scales, which makes the traditional RNNs difficult to capture complex action information. In this paper, we propose a novel recurrent spatial-temporal attention network (RSTAN) to address this challenge, where we introduce a spatial-temporal attention mechanism to adaptively identify key features from the global video context for every time-step prediction of RNN. More specifically, we make three main contributions from the following aspects. First, we reinforce the classical long short-term memory (LSTM) with a novel spatial-temporal attention module. At each time step, our module can automatically learn a spatial-temporal action representation from all sampled video frames, which is compact and highly relevant to the prediction at the current step. Second, we design an attention-driven appearance-motion fusion strategy to integrate appearance and motion LSTMs into a unified framework, where LSTMs with their spatial-temporal attention modules in two streams can be jointly trained in an end-to-end fashion. Third, we develop actor-attention regularization for RSTAN, which can guide our attention mechanism to focus on the important action regions around actors. We evaluate the proposed RSTAN on the benchmark UCF101, HMDB51 and JHMDB data sets. The experimental results show that, our RSTAN outperforms other recent RNN-based approaches on UCF101 and HMDB51 as well as achieves the state-of-the-art on JHMDB.",
"title": ""
},
{
"docid": "15c3ddb9c01d114ab7d09f010195465b",
"text": "In this paper we have described a solution for supporting independent living of the elderly by means of equipping their home with a simple sensor network to monitor their behaviour. Standard home automation sensors including movement sensors and door entry point sensors are used. By monitoring the sensor data, important information regarding any anomalous behaviour will be identified. Different ways of visualizing large sensor data sets and representing them in a format suitable for clustering the abnormalities are also investigated. In the latter part of this paper, recurrent neural networks are used to predict the future values of the activities for each sensor. The predicted values are used to inform the caregiver in case anomalous behaviour is predicted in the near future. Data collection, classification and prediction are investigated in real home environments with elderly occupants suffering from dementia.",
"title": ""
},
{
"docid": "ba8ae795796d9d5c1d33d4e5ce692a13",
"text": "This work presents a type of capacitive sensor for intraocular pressure (IOP) measurement on soft contact lens with Radio Frequency Identification (RFID) module. The flexible capacitive IOP sensor and Rx antenna was designed and fabricated using MEMS fabrication technologies that can be embedded on a soft contact lens. The IOP sensing unit is a sandwich structure composed of parylene C as the substrate and the insulating layer, gold as the top and bottom electrodes of the capacitor, and Hydroxyethylmethacrylate (HEMA) as dielectric material between top plate and bottom plate. The main sensing principle is using wireless IOP contact lenses sensor (CLS) system placed on corneal to detect the corneal deformation caused due to the variations of IOP. The variations of intraocular pressure will be transformed into capacitance change and this change will be transmitted to RFID system and recorded as continuous IOP monitoring. The measurement on in-vitro porcine eyes show the pressure reproducibility and a sensitivity of 0.02 pF/4.5 mmHg.",
"title": ""
},
{
"docid": "b8c8511489622220f9347daede5f31e8",
"text": "Recently, different systems which learn to populate and extend a knowledge base (KB) from the web in different languages have been presented. Although a large set of concepts should be learnt independently from the language used to read, there are facts which are expected to be more easily gathered in local language (e.g., culture or geography). A system that merges KBs learnt in different languages will benefit from the complementary information as long as common beliefs are identified, as well as from redundancy present in web pages written in different languages. In this paper, we deal with the problem of identifying equivalent beliefs (or concepts) across language specific KBs, assuming that they share the same ontology of categories and relations. In a case study with two KBs independently learnt from different inputs, namely web pages written in English and web pages written in Portuguese respectively, we report on the results of two methodologies: an approach based on personalized PageRank and an inference technique to find out common relevant paths through the KBs. The proposed inference technique efficiently identifies relevant paths, outperforming the baseline (a dictionary-based classifier) in the vast majority of tested categories.",
"title": ""
},
{
"docid": "6200d3c4435ae34e912fc8d2f92e904b",
"text": "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.",
"title": ""
},
{
"docid": "60f2baba7922543e453a3956eb503c05",
"text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.",
"title": ""
}
] |
scidocsrr
|
511653bb58f0bf4a5a70c73a421f288b
|
Who should I cite: learning literature search models from citation behavior
|
[
{
"docid": "a9f9f918d0163e18cf6df748647ffb05",
"text": "In previous work, we have shown that using terms from around citations in citing papers to index the cited paper, in addition to the cited paper's own terms, can improve retrieval effectiveness. Now, we investigate how to select text from around the citations in order to extract good index terms. We compare the retrieval effectiveness that results from a range of contexts around the citations, including no context, the entire citing paper, some fixed windows and several variations with linguistic motivations. We conclude with an analysis of the benefits of more complex, linguistically motivated methods for extracting citation index terms, over using a fixed window of terms. We speculate that there might be some advantage to using computational linguistic techniques for this task.",
"title": ""
},
{
"docid": "209de57ac23ab35fa731b762a10f782a",
"text": "Although fully generative models have been successfully used to model the contents of text documents, they are often awkward to apply to combinations of text data and document metadata. In this paper we propose a Dirichlet-multinomial regression (DMR) topic model that includes a log-linear prior on document-topic distributions that is a function of observed features of the document, such as author, publication venue, references, and dates. We show that by selecting appropriate features, DMR topic models can meet or exceed the performance of several previously published topic models designed for specific data.",
"title": ""
}
] |
[
{
"docid": "b581717dca731a6fd216d8d4d9530b9c",
"text": "In the last few years, there has been increasing interest from the agent community in the use of techniques from decision theory and game theory. Our aims in this article are firstly to briefly summarize the key concepts of decision theory and game theory, secondly to discuss how these tools are being applied in agent systems research, and finally to introduce this special issue of Autonomous Agents and Multi-Agent Systems by reviewing the papers that appear.",
"title": ""
},
{
"docid": "15007058c522192794ae019cd6d11716",
"text": "An active matrix organic light emitting diode (AMOLED) display driver IC, enabling real-time thin-film transistor (TFT) nonuniformity compensation, is presented with a hybrid driving method to satisfy fast driving speed, high TFT current accuracy, and a high aperture ratio. The proposed hybrid column-driver IC drives a mobile UHD (3840 × 2160) AMOLED panel, with one horizontal time of 7.7 μs at a scan frequency of 60 Hz, simultaneously senses the TFT current for back-end TFT variation compensation. Due to external compensation, a simple 3T1C pixel circuit is employed in each pixel. Accurate current sensing and high panel noise immunity is guaranteed by a proposed current-sensing circuit. By reusing the hybrid column-driver circuitries, the driver embodies an 8 bit current-mode ADC to measure OLED V -I transfer characteristic for OLED luminance-degradation compensation. Measurement results show that the hybrid driving method reduces the maximum current error between two emulated TFTs with a 60 mV threshold voltage difference under 1 gray-level error of 0.94 gray level (37 nA) in 8 bit gray scales from 12.82 gray level (501 nA). The circuit-reused current-mode ADC achieves 0.56 LSB DNL and 0.75 LSB INL.",
"title": ""
},
{
"docid": "c34c6e462c8097a4acaafbf94341b0b0",
"text": "Crowdsourcing has gained immense popularity in machine learning applications for obtaining large amounts of labeled data. Crowdsourcing is cheap and fast, but suffers from the problem of low-quality data. To address this fundamental challenge in crowdsourcing, we propose a simple payment mechanism to incentivize workers to answer only the questions that they are sure of and skip the rest. We show that surprisingly, under a mild and natural “no-free-lunch” requirement, this mechanism is the one and only incentive-compatible payment mechanism possible. We also show that among all possible incentive-compatible mechanisms (that may or may not satisfy no-free-lunch), our mechanism makes the smallest possible payment to spammers. Interestingly, this unique mechanism takes a “multiplicative” form. The simplicity of the mechanism is an added benefit. In preliminary experiments involving over several hundred workers, we observe a significant reduction in the error rates under our unique mechanism for the same or lower monetary expenditure.",
"title": ""
},
{
"docid": "44368062de68f6faed57d43b8e691e35",
"text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.",
"title": ""
},
{
"docid": "bd8ae67f959a7b840eff7e8c400a41e0",
"text": "Enabling a humanoid robot to drive a car, requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole-body, to ingress/egress the car. In this paper, we present a sensorbased reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal reference are sent to the robot control to achieve the driving task with the humanoid. We present results from a driving experience with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.",
"title": ""
},
{
"docid": "74ad81f571bf7824a4144497027aa8cb",
"text": "Community co-creation programs are increasingly used by cultural institutions in an attempt to draw new audiences to their collections. By providing engaging interactive experiences in partnership with the community, institutions may well increase their audience numbers in the short term; but to optimize the viability and longevity of such programs, institutions and designers should consider the integration of strategic design methods with curatorial processes in order to reconsider the capture, display and promotion of collections and/or exhibitions. This case study uses a project from the State Library of Queensland, Australia to showcase a human computer interaction-derived design method developed by the authors to ensure a strategic response to community co-creation initiatives. Using a variety of media, the new Multi-Platform Communication Design method has enabled the design of web-based distribution; a community and a facilitator's training program; and the development of a mobile multimedia laboratory. This paper details the design method by which these multiple communication platforms were developed and implemented to achieve successful project delivery.",
"title": ""
},
{
"docid": "241d7da91d5b48d415040b44b128ec33",
"text": "Dieser Beitrag beschreibt eine neuartige Mobilfunktechnologie, mit der sich innovative und besonders latenzsensitive Dienste in Mobilfunknetzen realisieren lassen. Dieser Artikel geht auf die technischen Eigenschaften der sogenannten Mobile Edge Computing-Technologie ein und beschreibt deren Architektur und Integrationsmöglichkeiten. Ferner werden konkrete – sowohl angedachte als auch bereits realisierte – Beispiele und Szenarien vorgestellt, die durch den Einsatz der Mobile Edge Computing-Technologie ermöglicht werden.",
"title": ""
},
{
"docid": "72138b8acfb7c9e11cfd92c0b78a737c",
"text": "We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "5dad207fe80469fe2b80d1f1e967575e",
"text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.",
"title": ""
},
{
"docid": "d717a5955faf08583b946385cf9f41d3",
"text": "Spasticity is a prevalent and potentially disabling symptom common in individuals with multiple sclerosis. Adequate evaluation and management of spasticity requires a careful assessment of the patient's history to determine functional impact of spasticity and potential exacerbating factors, and physical examination to determine the extent of the condition and culpable muscles. A host of options for spasticity management are available: therapeutic exercise, physical modalities, complementary/alternative medicine interventions, oral medications, chemodenervation, and implantation of an intrathecal baclofen pump. Choice of treatment hinges on a combination of the extent of symptoms, patient preference, and availability of services.",
"title": ""
},
{
"docid": "1f1fd7217ed5bae04f9ac6f8ccc8c23f",
"text": "Relating the brain's structural connectivity (SC) to its functional connectivity (FC) is a fundamental goal in neuroscience because it is capable of aiding our understanding of how the relatively fixed SC architecture underlies human cognition and diverse behaviors. With the aid of current noninvasive imaging technologies (e.g., structural MRI, diffusion MRI, and functional MRI) and graph theory methods, researchers have modeled the human brain as a complex network of interacting neuronal elements and characterized the underlying structural and functional connectivity patterns that support diverse cognitive functions. Specifically, research has demonstrated a tight SC-FC coupling, not only in interregional connectivity strength but also in network topologic organizations, such as community, rich-club, and motifs. Moreover, this SC-FC coupling exhibits significant changes in normal development and neuropsychiatric disorders, such as schizophrenia and epilepsy. This review summarizes recent progress regarding the SC-FC relationship of the human brain and emphasizes the important role of large-scale brain networks in the understanding of structural-functional associations. Future research directions related to this topic are also proposed.",
"title": ""
},
{
"docid": "440858614aba25dfa9039b20a1caefc4",
"text": "A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image verify the interpretability of RTT-GAN.",
"title": ""
},
{
"docid": "0736f2a97b09d76dd04af63b97dbe42e",
"text": "There are a range of barriers precluding patients from fully engaging in and benefiting from the spectrum of eHealth interventions developed to support patient access to health information, disease self-management efforts, and patient-provider communication. Consumers with low eHealth literacy skills often stand to gain the greatest benefit from the use of eHealth tools. eHealth skills are comprised of reading/writing/numeracy skills, health literacy, computer literacy, information literacy, media literacy, and scientific literacy [1]. We aim to develop an approach to characterize dimensions of complexity and to reveal knowledge and skill-related barriers to eHealth engagement. We use Bloom's Taxonomy to guide development of an eHealth literacy taxonomy that categorizes and describes each type of literacy by complexity level. Illustrative examples demonstrate the utility of the taxonomy in characterizing dimensions of complexity of eHealth skills used and associated with each step in completing an eHealth task.",
"title": ""
},
{
"docid": "99c1b5ed924012118e72475dee609b3d",
"text": "Lack of trust in online transactions has been cited, by past scholars, as the main reason for the abhorrence of online shopping. In this paper we proposed a model and provided empirical evidence on the impact of the website characteristics on trust in online transactions in Indian context. In the first phase, we identified and empirically verified the relative importance of the website factors that develop online trust in India. In the next phase, we have tested the mediator effect of trust in the relationship between the website factors and purchase intention (and perceived risk). The present study for the first time provided empirical evidence on the mediating role of trust in online shopping among Indian customers.",
"title": ""
},
{
"docid": "c60c83c93577377bad43ed1972079603",
"text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-ofthe-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module",
"title": ""
},
{
"docid": "c04dd7ccb0426ef5d44f0420d321904d",
"text": "In this paper, we introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture temporal structure in continuous activity videos. Our layer is designed to allow the model to learn a latent hierarchy of sub-event intervals. Our approach is fully differentiable while relying on a significantly less number of parameters, enabling its end-to-end training with standard backpropagation. We present our convolutional video models with multiple TGM layers for activity detection. Our experiments on multiple datasets including Charades and MultiTHUMOS confirm the benefit of our TGM layers, illustrating that it outperforms other models and temporal convolutions.",
"title": ""
},
{
"docid": "28f61d005f1b53ad532992e30b9b9b71",
"text": "We propose a method for nonlinear residual echo suppression that consists of extracting spectral features from the far-end signal, and using an artificial neural network to model the residual echo magnitude spectrum from these features. We compare the modeling accuracy achieved by realizations with different features and network topologies, evaluating the mean squared error of the estimated residual echo magnitude spectrum. We also present a low complexity real-time implementation combining an offline-trained network with online adaptation, and investigate its performance in terms of echo suppression and speech distortion for real mobile phone recordings.",
"title": ""
},
{
"docid": "e125bd3935aace0b17f8ed4e431add63",
"text": "Institutions, companies and organisations where security and net productivity is vital, access to certain areas must be controlled and monitored through an automated system of attendance. Managing people is a difficult task for most of the organizations and maintaining the attendance record is an important factor in people management. When considering the academic institute, taking the attendance of non-academic staff on daily basis and maintaining the records is a major task. Manually taking attendance and maintaining it for a long time adds to the difficulty of this task as well as wastes a lot of time. For this reason, an efficient system is proposed in this paper to solve the problem of manual attendance. This system takes attendance electronically with the help of a fingerprint recognition system, and all the records are saved for subsequent operations. Staff biometric attendance system employs an automated system to calculate attendance of staff in an organization and do further calculations of monthly attendance summary in order to reduce human errors during calculations. In essence, the proposed system can be employed in curbing the problems of lateness, buddy punching and truancy in any institution, organization or establishment. The proposed system will also improve the productivity of any organization if properly implemented.",
"title": ""
},
{
"docid": "87864607cd9d676c7919d18cf619b1a4",
"text": "This work describes the preparation of a glassy carbon electrode (GCE) modified with molecularly imprinted polymer (MIP) and multiwalled carbon nanotubes (MWCNTs) for determination of carvedilol (CAR). Electrochemical behavior of CAR on the modified electrode was evaluated using cyclic voltammetry. The best composition was found to be 65% (m/m) of MIP. Under optimized conditions (pH 8.5 in 0.25 mol·L−1 Britton–Robinson buffer and 0.1 mol·L−1 KCl) the voltammetric method showed a linear response for CAR in the range of 50–325 μmol·L−1 (R = 0.9755), with detection and quantification limits of 16.14 μmol·L−1 and 53.8 μmol·L−1, respectively. The developed method was successfully applied for determination of CAR in real samples of pharmaceuticals. The sensor presented good sensitivity, rapid detection of CAR, and quick and easy preparation. Furthermore, the material used as modifier has a simple synthesis and its amount utilized is very small, thus illustrating the economic feasibility of this sensor.",
"title": ""
},
{
"docid": "ebcff53d86162e30c43b58ae03e786a0",
"text": "The adjustment of probabilistic models for sentiment analysis to changes in language use and the perception of products can be realized via incremental learning techniques. We provide a free, open and GUI-based sentiment analysis tool that allows for a) relabeling predictions and/or adding labeled instances to retrain the weights of a given model, and b) customizing lexical resources to account for false positives and false negatives in sentiment lexicons. Our results show that incrementally updating a model with information from new and labeled instances can substantially increase accuracy. The provided solution can be particularly helpful for gradually refining or enhancing models in an easily accessible fashion while avoiding a) the costs for training a new model from scratch and b) the deterioration of prediction accuracy over time.",
"title": ""
}
] |
scidocsrr
|
3613517f5463662c101b0f2f04df9c18
|
A survey on abstractive summarization techniques
|
[
{
"docid": "6d5429ddf4050724432da73af60274d6",
"text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.",
"title": ""
},
{
"docid": "8123ab525ce663e44b104db2cacd59a9",
"text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.",
"title": ""
},
{
"docid": "00a0ab98af151a80fe7b51d6277cb996",
"text": "Meaning Representation for Sembanking",
"title": ""
}
] |
[
{
"docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c",
"text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.",
"title": ""
},
{
"docid": "03cea891c4a9fdc77832979267f9dca9",
"text": "Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g. a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs),and process saving may be obtained through the use of the CATCH operator. The use of CATCH, in particular, allows an elegant treatment of process saving.\n We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design.",
"title": ""
},
{
"docid": "33b405dbbe291f6ba004fa6192501861",
"text": "A quasi-static analysis of an open-ended coaxial line terminated by a semi-infinite medium on ground plane is presented in this paper. The analysis is based on a vtiriation formulation of the problem. A comparison of results obtained by this method with the experimental and the other theoretical approaches shows an excellent agreement. This analysis is expected to be helpful in the inverse problem of calculating the pertnittivity of materials in oico for a given iuput impedance of the coaxial line.",
"title": ""
},
{
"docid": "0a981845153607465efb91acec05e9d0",
"text": "The performance of memory-bound commercial applicationssuch as databases is limited by increasing memory latencies. Inthis paper, we show that exploiting memory-level parallelism(MLP) is an effective approach for improving the performance ofthese applications and that microarchitecture has a profound impacton achievable MLP. Using the epoch model of MLP, we reasonhow traditional microarchitecture features such as out-of-orderissue and state-of-the-art microarchitecture techniques suchas runahead execution affect MLP. Simulation results show that amoderately aggressive out-of-order issue processor improvesMLP over an in-order issue processor by 12-30%, and that aggressivehandling of loads, branches and serializing instructionsis needed to attain the full benefits of large out-of-order instructionwindows. The results also show that a processor's issue windowand reorder buffer should be decoupled to exploit MLP more efficiently.In addition, we demonstrate that runahead execution ishighly effective in enhancing MLP, potentially improving the MLPof the database workload by 82% and its overall performance by60%. Finally, our limit study shows that there is considerableheadroom in improving MLP and overall performance by implementingeffective instruction prefetching, more accurate branchprediction and better value prediction in addition to runahead execution.",
"title": ""
},
{
"docid": "0b883076d3b6f114c3a921deb73a370e",
"text": "The Soliloquy primitive, first proposed by the third author in 2007, is based on cyclic lattices. It has very good efficiency properties, both in terms of public key size and the speed of encryption and decryption. There are straightforward techniques for turning Soliloquy into a key exchange or other public-key protocols. Despite these properties, we abandoned research on Soliloquy after developing (2010 to 2013) a reasonably efficient quantum attack on the primitive. A similar quantum algorithm has been recently published in some highly insightful independent work by Eisenträger, Hallgren, Kitaev, and Song [2]. However, their paper concentrates on computing unit groups of arbitrary degree number fields whereas we will show how to apply the approach to the special case of Soliloquy.",
"title": ""
},
{
"docid": "310aa30e2dd2b71c09780f7984a3663c",
"text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties; government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. Particularly it will examine how far the developing countries have been successful in providing a legal framework.",
"title": ""
},
{
"docid": "91130561751b96803ce9d53c7dfc2fa9",
"text": "In this paper, we present a research oriented open challenge focusing on multimodal gesture spotting and recognition from continuous sequences in the context of close human-computer interaction. We contextually outline the added value of the proposed challenge by presenting most recent and popular challenges and corpora available in the field. Then we present the procedures for data collection, corpus creation and the tools that have been developed for participants. Finally we introduce a novel single performance metric that has been developed to quantitatively evaluate the spotting and recognition task with multiple sensors.",
"title": ""
},
{
"docid": "cf8a16614541ee06ce6849d64f3f327f",
"text": "Distributed Denial of Service attacks (DDoS) have remained as one of the most destructive attacks in the Internet for over two decades. Despite tremendous efforts on the design of DDoS defense strategies, few of them have been considered for widespread deployment due to strong design assumptions on the Internet infrastructure, prohibitive operational costs and complexity. Recently, the emergence of Software Defined Networking (SDN) has offered a solution to reduce network management complexity. It is also believed to facilitate security management thanks to its programmability. To explore the advantages of using SDN to mitigate DDoS attacks, we propose a distributed collaborative framework that allows the customers to request DDoS mitigation service from ISPs. Upon request, ISPs can change the label of the anomalous traffic and redirect them to security middleboxes, while attack detection and analysis modules are deployed at customer side, avoiding privacy leakage and other legal concerns. Our preliminary analysis demonstrates that SDN has promising potential to enable autonomic mitigation of DDoS attacks, as well as other large-scale attacks.",
"title": ""
},
{
"docid": "5ceb6e39c8f826c0a7fd0e5086090a5f",
"text": "Mobile botnet phenomenon is gaining popularity among malware writers in order to exploit vulnerabilities in smartphones. In particular, mobile botnets enable illegal access to a victim’s smartphone, can compromise critical user data and launch a DDoS attack through Command and Control (C&C). In this article, we propose a static analysis approach, DeDroid, to investigate botnet-specific properties that can be used to detect mobile applications with botnet intensions. Initially, we identify critical features by observing code behavior of the few known malware binaries having C&C features. Then, we compare the identified features with the malicious and benign applications of Drebin dataset. The results show against the comparative analysis that, Drebin dataset has 35% malicious applications which qualify as botnets. Upon closer examination, 90% of the potential botnets are confirmed as botnets. Similarly, for comparative analysis against benign applications having C&C features, DeDroid has achieved adequate detection accuracy. In addition, DeDroid has achieved high accuracy with negligible false positive rate while making decision for state-of-the-art malicious applications.",
"title": ""
},
{
"docid": "1f0558c43a8cfc3f2c801f6625fc9cbf",
"text": "This work presents a flexible system to reconstruct 3D models of objects captured with an RGB-D sensor. A major advantage of the method is that unlike other modelling tools, our reconstruction pipeline allows the user to acquire a full 3D model of the object. This is achieved by acquiring several partial 3D models in different sessions-each individual session presenting the object of interest in different configurations that reveal occluded parts of the object - that are automatically merged together to reconstruct a full 3D model. In addition, the 3D models acquired by our system can be directly used by state-of-the-art object instance recognition and object tracking modules, providing object-perception capabilities to complex applications requiring these functionalities (e.g. human-object interaction analysis, robot grasping, etc.). The system does not impose constraints in the appearance of objects (textured, untextured) nor in the modelling setup (moving camera with static object or turn-table setups with static camera). The proposed reconstruction system has been used to model a large number of objects resulting in metrically accurate and visually appealing 3D models.",
"title": ""
},
{
"docid": "46818c0cd0d3b072d64113cf4b7b7e91",
"text": "We study the problem of distributed multitask learning with shared representation, where each machine aims to learn a separate, but related, task in an unknown shared low-dimensional subspaces, i.e. when the predictor matrix has low rank. We consider a setting where each task is handled by a different machine, with samples for the task available locally on the machine, and study communication-efficient methods for exploiting the shared structure.",
"title": ""
},
{
"docid": "a48b3eb270755d34f6ec520be804dbb2",
"text": "In this paper, we describe our experiences and thoughts on building speech applications on mobile devices for developing countries. We describe three models of use for automatic speech recognition (ASR) systems on mobile devices that are currently used – embedded speech recognition, speech recognition in the cloud, and distributed speech recognition; evaluate their advantages and disadvantages; and finally propose a fourth model of use that we call Shared Speech Recognition with User-Based Adaptation. This proposed model exploits the advantages in all the three current models, while mitigating the challenges that make any of the current models less feasible, such as unreliable cellular connections or low processing power on mobile devices, which are typical needs of speech application in developing regions. We also propose open questions for future research to further evaluate our proposed model of use. Finally, we demonstrate the performance of two mobile speech recognizers that are either used in a lab setting to compare the recognition accuracy against a desktop, or used in real-world speech applications for mobile devices in the developing world.",
"title": ""
},
{
"docid": "811080d1bf24f041792d6895791242bb",
"text": "We survey the use of weighted nite state transducers WFSTs in speech recognition We show that WFSTs provide a common and natural rep resentation for HMM models context dependency pronunciation dictio naries grammars and alternative recognition outputs Furthermore gen eral transducer operations combine these representations exibly and e ciently Weighted determinization and minimization algorithms optimize their time and space requirements and a weight pushing algorithm dis tributes the weights along the paths of a weighted transducer optimally for speech recognition As an example we describe a North American Business News NAB recognition system built using these techniques that combines the HMMs full cross word triphones a lexicon of forty thousand words and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real time on a very simple decoder In another example we show that the same techniques can be used to optimize lattices for second pass recognition In a third example we show how general automata operations can be used to assemble lattices from di erent recognizers to improve recognition performance Introduction Much of current large vocabulary speech recognition is based on models such as HMMs tree lexicons or n gram language models that can be represented by weighted nite state transducers Even when richer models are used for instance context free grammars for spoken dialog applications they are often restricted for e ciency reasons to regular subsets either by design or by approximation Pereira and Wright Nederhof Mohri and Nederhof M Mohri Weighted FSTs in Speech Recognition A nite state transducer is a nite automaton whose state transitions are labeled with both input and output symbols Therefore a path through the transducer encodes a mapping from an input symbol sequence to an output symbol sequence A weighted transducer puts weights on transitions in addition to the input and output symbols Weights may encode probabilities durations penalties or any other quantity that accumulates along paths to compute the overall weight of mapping an input sequence to an output sequence Weighted transducers are thus a natural choice to represent the probabilistic nite state models prevalent in speech processing We present a survey of the recent work done on the use of weighted nite state transducers WFSTs in speech recognition Mohri et al Pereira and Riley Mohri Mohri et al Mohri and Riley Mohri et al Mohri and Riley We show that common methods for combin ing and optimizing probabilistic models in speech processing can be generalized and e ciently implemented by translation to mathematically well de ned op erations on weighted transducers Furthermore new optimization opportunities arise from viewing all symbolic levels of ASR modeling as weighted transducers Thus weighted nite state transducers de ne a common framework with shared algorithms for the representation and use of the models in speech recognition that has important algorithmic and software engineering bene ts We start by introducing the main de nitions and notation for weighted nite state acceptors and transducers used in this work We then present introductory speech related examples and describe the most important weighted transducer operations relevant to speech applications Finally we give examples of the ap plication of transducer representations and operations on transducers to large vocabulary speech recognition with results that 
meet certain optimality criteria Weighted Finite State Transducer De nitions and Al gorithms The de nitions that follow are based on the general algebraic notion of semiring Kuich and Salomaa The semiring abstraction permits the de nition of automata representations and algorithms over a broad class of weight sets and algebraic operations A semiring K consists of a set K equipped with an associative and com mutative operation and an associative operation with identities and respectively such that distributes over and a a In other words a semiring is similar to the more familiar ring algebraic structure such as the ring of polynomials over the reals except that the additive operation may not have an inverse For example N is a semiring The weights used in speech recognition often represent probabilities the cor responding semiring is then the probability semiring R For numerical stability implementations may replace probabilities with log probabilities The appropriate semiring is then the image by log of the semiring R M Mohri Weighted FSTs in Speech Recognition and is called the log semiring When using log probabilities with a Viterbi best path approximation the appropriate semiring is the tropical semiring R f g min In the following de nitions we assume an arbitrary semiring K We will give examples with di erent semirings to illustrate the variety of useful computations that can be carried out in this framework by a judicious choice of semiring",
"title": ""
},
{
"docid": "dc8470db1f522e185c3192bf8564220d",
"text": "Hematocolpos is rarely presented as a pelvic mass which mechanically compresses the bladder and the urethra thereby causing urinary retention. A 12-year-old girl referred with the history of lower abdominal pain and retention of urine for 24 h. The patient had not started her menses yet. Three weeks before she also complained of discomfort on passing urine, frequency and urgency and was taken to a local outpatient clinic where she was given antibiotics with the diagnosis of urinary tract infection, she had also the history of intermittent urinary catheterization (three times before) in an emergency department because of acute severe urinary retention. Transabdominal ultrasonography revealed a pelvic semi-solid mass suggestive of hematocolpos. Pelvic examination revealed a pale blue imperforate hymen bulging from the vaginal introitus outwards. A cruciate incision was made over the hymen. Postoperative period was uneventful. In case of acute severe urinary retention in an adolescent girl, the clinicians should keep in mind that imperforate hymen may be a causative factor and this condition may easily be treated surgically.",
"title": ""
},
{
"docid": "a3b78cb1e9f0d918f29c9cfbb7db1f6f",
"text": "We propose a novel approach for helping content transcription of handwritten digital documents. The approach adopts a segmentation based keyword retrieval approach that follows query-by-string paradigm and exploits the user validation of the retrieved words to improve its performance during operation. Our approach starts with an initial training set, which contains only a few pages and a tentative list of words supposedly in the document, and iteratively interleaves a word retrieval step by the system with a validation step by the user. After each iteration, the system exploits the results of the validation to update its internal model, so as to use that evidence in further iterations of the search. Experimental results on the Bentham dataset show that the system may start with a few word images and their transcripts, exhibits an improvement of the performance during operation, and after a few iterations is able to correctly transcribe more than 68% of the word of the list.",
"title": ""
},
{
"docid": "e0583afbdc609792ad947223006c851f",
"text": "Orthogonal frequency-division multiplexing (OFDM) signal coding and system architecture were implemented to achieve radar and data communication functionalities. The resultant system is a software-defined unit, which can be used for range measurements, radar imaging, and data communications. Range reconstructions were performed for ranges up to 4 m using trihedral corner reflectors with approximately 203 m of radar cross section at the carrier frequency; range resolution of approximately 0.3 m was demonstrated. Synthetic aperture radar (SAR) image of a single corner reflector was obtained; SAR signal processing specific to OFDM signals is presented. Data communication tests were performed in radar setup, where the signal was reflected by the same target and decoded as communication data; bit error rate of was achieved at 57 Mb/s. The system shows good promise as a multifunctional software-defined sensor which can be used in radar sensor networks.",
"title": ""
},
{
"docid": "f3e219c14f495762a2a6ced94708a477",
"text": "We present novel empirical observations regarding how stochastic gradient descent (SGD) navigates the loss landscape of over-parametrized deep neural networks (DNNs). These observations expose the qualitatively different roles of learning rate and batch-size in DNN optimization and generalization. Specifically we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during training. We find that the loss interpolation between parameters before and after each training iteration’s update is roughly convex with a minimum (valley floor) in between for most of the training. Based on this and other metrics, we deduce that for most of the training update steps, SGD moves in valley like regions of the loss surface by jumping from one valley wall to another at a height above the valley floor. This ’bouncing between walls at a height’ mechanism helps SGD traverse larger distance for small batch sizes and large learning rates which we find play qualitatively different roles in the dynamics. While a large learning rate maintains a large height from the valley floor, a small batch size injects noise facilitating exploration. We find this mechanism is crucial for generalization because the valley floor has barriers and this exploration above the valley floor allows SGD to quickly travel far away from the initialization point (without being affected by barriers) and find flatter regions, corresponding to better generalization.",
"title": ""
},
{
"docid": "fb9669d1f3e43d69d5893a9b2d15957f",
"text": "Researchers in the Digital Humanities and journalists need to monitor, collect and analyze fresh online content regarding current events such as the Ebola outbreak or the Ukraine crisis on demand. However, existing focused crawling approaches only consider topical aspects while ignoring temporal aspects and therefore cannot achieve thematically coherent and fresh Web collections. Especially Social Media provide a rich source of fresh content, which is not used by state-of-the-art focused crawlers. In this paper we address the issues of enabling the collection of fresh and relevant Web and Social Web content for a topic of interest through seamless integration of Web and Social Media in a novel integrated focused crawler. The crawler collects Web and Social Media content in a single system and exploits the stream of fresh Social Media content for guiding the crawler.",
"title": ""
},
{
"docid": "fb3bfb456edd14c8ed27d6b532f8f226",
"text": "Despite great successes in many fields, machine learning typically requires substantial human resources to determine a good machine learning pipeline (including various types of preprocessing, and the choice of classifiers and hyperparameters). AutoML aims to free human practitioners and researchers from these menial tasks. The current state-of-the-art in AutoML has been evaluated in the AutoML challenge 2018. Here, we describe our winning entry to this challenge, dubbed PoSH Auto-sklearn, which combines an automatically preselected portfolio, ensemble building and Bayesian optimization with successive halving. Finally, we share insights in the importance of different parts of our approach.",
"title": ""
},
{
"docid": "3c82ba94aa4d717d51c99cfceb527f22",
"text": "Manipulator collision avoidance using genetic algorithms is presented. Control gains in the collision avoidance control model are selected based on genetic algorithms. A repulsive force is artificially created using the distances between the robot links and obstacles, which are generated by a distance computation algorithm. Real-time manipulator collision avoidance control has achieved. A repulsive force gain is introduced through the approaches for definition of link coordinate frames and kinematics computations. The safety distance between objects is affected by the repulsive force gain. This makes the safety zone adjustable and provides greater intelligence for robotic tasks under the ever-changing environment.",
"title": ""
}
] |
scidocsrr
|
426067b1710c5304ec93b0be6ba500b9
|
Combatting Online Fraud in Saudi Arabia Using General Deterrence Theory (GDT)
|
[
{
"docid": "2ae29a786061fc24f23f2583ddf87beb",
"text": "K E Y W O R D S : method, qualitative analysis, text interpretation, textual data The need for tools in qualitative analysis Qualitative methods have enjoyed a growing popularity in the past decade throughout the social sciences (Bryman and Burgess, 1994; Denzin, 1994; Jensen, 1991; Marshall and Rossman, 1999; Morse, 1994). No longer relegated to the marginalia of exploratory stages, or derided as anecdotal, qualitative methods have been gaining recognition in domains traditionally inclined to more positivistic methods (Barnes et al., 1999; Black, 1996; Ritchie and Spencer, 1994). Indeed, literature espousing, promoting and employing this method of research is rapidly increasing – a move that is being welcomed as a positive step towards a deeper understanding of social phenomena and their dynamics. However, while the issues of when, why and how to employ qualitative Thematic networks: an analytic tool for qualitative research J E N N I F E R A T T R I D E S T I R L I N G Commission for Health Improvement, England Qualitative Research Copyright © SAGE Publications (London, Thousand Oaks,CA and New Delhi) vol. (): -. [- () :; -; ] 385 Q R A RT I C L E",
"title": ""
}
] |
[
{
"docid": "43e90cd84394bd686303e07b3048e3ac",
"text": "A harlequin fetus seen at birth was treated with etretinate and more general measures, including careful attention to fluid balance, calorie intake and temperature control. She improved, continued to develop, and had survived to 5 months at the time of this report.",
"title": ""
},
{
"docid": "6217381e6ab41a3223537c3707158595",
"text": "When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present COMMONSENSEQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from CONCEPTNET (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts. This encourages workers to create questions with complex semantics that often require prior knowledge. We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.",
"title": ""
},
{
"docid": "74f017db6e98b068b29698886caec368",
"text": "Social networks have become an additional marketing channel that could be integrated with the traditional ones as a part of the marketing mix. The change in the dynamics of the marketing interchange between companies and consumers as introduced by social networks has placed a focus on the non-transactional customer behavior. In this new marketing era, the terms engagement and participation became the central non-transactional constructs, used to describe the nature of participants’ specific interactions and/or interactive experiences. These changes imposed challenges to the traditional one-way marketing, resulting in companies experimenting with many different approaches, thus shaping a successful social media approach based on the trial-and-error experiences. To provide insights to practitioners willing to utilize social networks for marketing purposes, our study analyzes the influencing factors in terms of characteristics of the content communicated by the company, such as media type, content type, posting day and time, over the level of online customer engagement measured by number of likes, comments and shares, and interaction duration for the domain of a Facebook brand page. Our results show that there is a different effect of the analyzed factors over individual engagement measures. We discuss the implications of our findings for social media marketing.",
"title": ""
},
{
"docid": "22bf615d77bbd04a3d62476f64d01c6a",
"text": "We present the design, implementation, and evaluation of Direct File System (DFS) for virtualized flash storage. Instead of using traditional layers of abstraction, our layers of abstraction are designed for directly accessing flash memory devices. DFS has two main novel features. First, it lays out its files directly in a very large virtual storage address space provided by FusionIO's virtual flash storage layer. Second, it leverages the virtual flash storage layer to perform block allocations and atomic updates. As a result, DFS performs better and is much simpler than a traditional Unix file system with similar functionalities. Our microbenchmark results show that DFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on FusionIO's ioDrive. For direct access performance, DFS is consistently better than ext3 on the same platform, sometimes by 20%. For buffered access performance, DFS is also consistently better than ext3, and sometimes by over 149%. Our application benchmarks show that DFS outperforms ext3 by 7% to 250% while requiring less CPU power.",
"title": ""
},
{
"docid": "3567af18bc17efdb0efeb41d08fabb7b",
"text": "In this review we examine recent research in the area of motivation in mathematics education and discuss findings from research perspectives in this domain. We note consistencies across research perspectives that suggest a set of generalizable conclusions about the contextual factors, cognitive processes, and benefits of interventions that affect students’ and teachers’ motivational attitudes. Criticisms are leveled concerning the lack of theoretical guidance driving the conduct and interpretation of the majority of studies in the field. Few researchers have attempted to extend current theories of motivation in ways that are consistent with the current research on learning and classroom discourse. In particular, researchers interested in studying motivation in the content domain of school mathematics need to examine the relationship that exists between mathematics as a socially constructed field and students’ desire to achieve.",
"title": ""
},
{
"docid": "44c65e6d783e646034b60c99f8958250",
"text": "Extreme learning machine (ELM) randomly generates parameters of hidden nodes and then analytically determines the output weights with fast learning speed. The ill-posed problem of parameter matrix of hidden nodes directly causes unstable performance, and the automatical selection problem of the hidden nodes is critical to holding the high efficiency of ELM. Focusing on the ill-posed problem and the automatical selection problem of the hidden nodes, this paper proposes the variational Bayesian extreme learning machine (VBELM). First, the Bayesian probabilistic model is involved into ELM, where the Bayesian prior distribution can avoid the ill-posed problem of hidden node matrix. Then, the variational approximation inference is employed in the Bayesian model to compute the posterior distribution and the independent variational hyperparameters approximately, which can be used to select the hidden nodes automatically. Theoretical analysis and experimental results elucidate that VBELM has stabler performance with more compact architectures, which presents probabilistic predictions comparison with traditional point predictions, and it also provides the hyperparameter criterion for hidden node selection.",
"title": ""
},
{
"docid": "64b13ae694ec4c16cdbd59ceecec0915",
"text": "Determining the stance expressed by an author from a post written for a twosided debate in an online debate forum is a relatively new problem. We seek to improve Anand et al.’s (2011) approach to debate stance classification by modeling two types of soft extra-linguistic constraints on the stance labels of debate posts, user-interaction constraints and ideology constraints. Experimental results on four datasets demonstrate the effectiveness of these inter-post constraints in improving debate stance classification.",
"title": ""
},
{
"docid": "1489207c35a613d38a4f9c06816604f0",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPMW with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "c2f9929212b9b941f338f6f1ac5311a9",
"text": "Recent results show that deep neural networks achieve excellent performance even when, during training, weights are quantized and projected to a binary representation. Here, we show that this is just the tip of the iceberg: these same networks, during testing, also exhibit a remarkable robustness to distortions beyond quantization, including additive and multiplicative noise, and a class of non-linear projections where binarization is just a special case. To quantify this robustness, we show that one such network achieves 11% test error on CIFAR-10 even with 0.68 effective bits per weight. Furthermore, we find that a common training heuristic— namely, projecting quantized weights during backpropagation—can be altered (or even removed) and networks still achieve a base level of robustness during testing. Specifically, training with weight projections other than quantization also works, as does simply clipping the weights, both of which have never been reported before. We confirm our results for CIFAR-10 and ImageNet datasets. Finally, drawing from these ideas, we propose a stochastic projection rule that leads to a new state of the art network with 7.64% test error on CIFAR-10 using no data augmentation.",
"title": ""
},
{
"docid": "711b3ed2cb9da33199dcc18f8b3fc98d",
"text": "In this paper, we propose two ways of improving image classification based on bag-of-words representation [25]. Two shortcomings of this representation are the loss of the spatial information of visual words and the presence of noisy visual words due to the coarseness of the vocabulary building process. On the one hand, we propose a new representation of images that goes further in the analogy with textual data: visual sentences, that allows us to \"read\" visual words in a certain order, as in the case of text. We can therefore consider simple spatial relations between words. We also present a new image classification scheme that exploits these relations. It is based on the use of language models, a very popular tool from speech and text analysis communities. On the other hand, we propose new techniques to eliminate useless words, one based on geometric properties of the keypoints, the other on the use of probabilistic Latent Semantic Analysis (pLSA). Experiments show that our techniques can significantly improve image classification, compared to a classical Support Vector Machine-based classification.",
"title": ""
},
{
"docid": "f698eb36fb75c6eae220cf02e41bdc44",
"text": "In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "e32068682c313637f97718e457914381",
"text": "Optimal load shedding is a very critical issue in power systems. It plays a vital role, especially in third world countries. A sudden increase in load can affect the important parameters of the power system like voltage, frequency and phase angle. This paper presents a case study of Pakistan’s power system, where the generated power, the load demand, frequency deviation and load shedding during a 24-hour period have been provided. An artificial neural network ensemble is aimed for optimal load shedding. The objective of this paper is to maintain power system frequency stability by shedding an accurate amount of load. Due to its fast convergence and improved generalization ability, the proposed algorithm helps to deal with load shedding in an efficient manner.",
"title": ""
},
{
"docid": "41a16f3eb3ff59d34e04ffa77bf1ae86",
"text": "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.",
"title": ""
},
{
"docid": "25d4dab65c13696bff0195e3981b5c86",
"text": "In the proposed Irrigation system IoT is implemented, in this system all the information that are received from the sensors and the various parameters are given to the arduinouno microcontroller as an analog input. A preset value of soil moisture sensor is fixed in microcontroller and also for fencing. When it goes beyond the particular threshold value water is automatically irrigated to the crops and once the required amount of water is fulfilled it stops. The Microcontroller transmits that information on the internet through a network of IoT in the form of wifi module ESP8266 that is attached to it. This enhances automated irrigation as the water pump can be switched on or off through information given to the controller. This proposed Irrigation system is used to get the chlorophyll content and nitrogen content of the leaf using LDR and Laser. This approach is for the advancement of irrigation process by automatic method without manpower by measuring various parameters related to the field and thus improves irrigation.",
"title": ""
},
{
"docid": "e17c5945d67c504725e9027c6aa6d4e7",
"text": "A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document types available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: a fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot metal typesetted prints, a theoretically optimal solution for the document binarization problem from both computational complexityand threshold selection point of view, a layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.",
"title": ""
},
{
"docid": "d6628b102e8f87e8ce58c2e3483a7beb",
"text": "Nowadays, Big Data platforms allow the analysis of massive data streams in an efficient way. However, the services they provide are often too raw, thus the implementation of advanced real-world applications requires a non-negligible effort for interfacing with such services. This also complicates the task of choosing which one of the many available alternatives is the most appropriate for the application at hand. In this paper, we present a comparative study of the three major opensource Big Data platforms for stream processing, as performed by using our novel RAMS framework. Although the results we present are specific for our use case (recognition of suspect people from massive video streams), the generality of the RAMS framework allows both considering such results as valid for similar applications and implementing different use cases on top of Big Data platforms with very limited effort.",
"title": ""
},
{
"docid": "da54e46adb991e66a7896f5089e3326e",
"text": "OBJECTIVE\nThis exploratory study reports on maternity clinicians' perceptions of transfer of their responsibility and accountability for patients in relation to clinical handover with particular focus transfers of care in birth suite.\n\n\nDESIGN\nA qualitative study of semistructured interviews and focus groups of maternity clinicians was undertaken in 2007. De-indentified data were transcribed and coded using the constant comparative method. Multiple themes emerged but only those related to responsibility and accountability are reported in this paper.\n\n\nSETTING\nOne tertiary Australian maternity hospital.\n\n\nPARTICIPANTS\nMaternity care midwives, nurses (neonatal, mental health, bed managers) and doctors (obstetric, neontatology, anaesthetics, internal medicine, psychiatry).\n\n\nPRIMARY OUTCOME MEASURES\nPrimary outcome measures were the perceptions of clinicians of maternity clinical handover.\n\n\nRESULTS\nThe majority of participants did not automatically connect maternity handover with the transfer of responsibility and accountability. Once introduced to this concept, they agreed that it was one of the roles of clinical handover. They spoke of complete transfer, shared and ongoing responsibility and accountability. When clinicians had direct involvement or extensive clinical knowledge of the patient, blurring of transition of responsibility and accountability sometimes occurred. A lack of 'ownership' of a patient and their problems were seen to result in confusion about who was to address the clinical issues of the patient. Personal choice of ongoing responsibility and accountability past the handover communication were described. This enabled the off-going person to rectify an inadequate handover or assist in an emergency when duty clinicians were unavailable.\n\n\nCONCLUSIONS\nThere is a clear lack of consensus about the transition of responsibility and accountability-this should be explicit at the handover. It is important that on each shift and new workplace environment clinicians agree upon primary role definitions, responsibilities and accountabilities for patients. To provide system resilience, secondary responsibilities may be allocated as required.",
"title": ""
},
{
"docid": "f6b8ad20e1afd5d8aa63b16042d59f99",
"text": "In the domain of sequence modelling, Recurrent Neural Networks (RNN) have been capable of achieving impressive results in a variety of application areas including visual question answering, part-of-speech tagging and machine translation. However this success in modelling short term dependencies has not successfully transitioned to application areas such as trajectory prediction, which require capturing both short term and long term relationships. In this paper, we propose a Tree Memory Network (TMN) for modelling long term and short term relationships in sequence-to-sequence mapping problems. The proposed network architecture is composed of an input module, controller and a memory module. In contrast to related literature, which models the memory as a sequence of historical states, we model the memory as a recursive tree structure. This structure more effectively captures temporal dependencies across both short term and long term sequences using its hierarchical structure. We demonstrate the effectiveness and flexibility of the proposed TMN in two practical problems, aircraft trajectory modelling and pedestrian trajectory modelling in a surveillance setting, and in both cases we outperform the current state-of-the-art. Furthermore, we perform an in depth analysis on the evolution of the memory module content over time and provide visual evidence on how the proposed TMN is able to map both long term and short term relationships efficiently via a hierarchi1 ar X iv :1 70 3. 04 70 6v 1 [ cs .L G ] 1 2 M ar 2 01 7",
"title": ""
},
{
"docid": "dc4e9b951f83843b17c620a4b766282d",
"text": "Security threats have been a major concern as a result of emergence of technology in every aspect including internet market, computational and communication technologies. To solve this issue effective mechanism of “cryptography” is used to ensure integrity, privacy, availability, authentication, computability, identification and accuracy. Cryptology techniques like PKC and SKC are used of data recovery. In current work, we describe exploration of efficient approach of private key architecture on the basis of attributes: effectiveness, scalability, flexibility, reliability and degree of security issues essential for safe wired and wireless communication. The work explores efficient private key algorithm based on security of individual system and scalability under criteria of memory-cpu utilization together with encryption performance. The exploration results in AES as superior over other algorithm. The work opens a new direction over cloud security and internet of things.",
"title": ""
},
{
"docid": "bac88254869f9b83aaf539b775d9ec66",
"text": "The medicinal herb feverfew [Tanacetum parthenium (L.) Schultz-Bip.] has long been used as a folk remedy for the treatment of migraine and arthritis. Parthenolide, a sesquiterpene lactone, is considered to be the primary bioactive compound in feverfew having anti-migraine, anti-tumor, and anti-inflammatory properties. In this study we determined, through in vitro bioassays, the inhibitory activity of parthenolide and golden feverfew extract against two human breast cancer cell lines (Hs605T and MCF-7) and one human cervical cancer cell line (SiHa). Feverfew ethanolic extract inhibited the growth of all three types of cancer cells with a half-effective concentration (EC50) of 1.5 mg/mL against Hs605T, 2.1 mg/mL against MCF-7, and 0.6 mg/mL against SiHa. Among the tested constituents of feverfew (i.e., parthenolide, camphor, luteolin, and apigenin), parthenolide showed the highest inhibitory effect with an EC50 against Hs605T, MCF-7, and SiHa of 2.6 microg/mL, 2.8 microg/mL, and 2.7 microg/mL, respectively. Interactions between parthenolide and flavonoids (apigenin and luteolin) in feverfew extract also were investigated to elucidate possible synergistic or antagonistic effects. The results revealed that apigenin and luteolin might have moderate to weak synergistic effects with parthenolide on the inhibition of cancer cell growth of Hs605T, MCF-7, and SiHa.",
"title": ""
}
] |
scidocsrr
|
a837cb33838f875e8f96f2800e7dd4c4
|
Robust control of underactuated Aerial Manipulators via IDA-PBC
|
[
{
"docid": "1509a06ce0b2395466fe462b1c3bd333",
"text": "This paper addresses mechanics, design, estimation and control for aerial grasping. We present the design of several light-weight, low-complexity grippers that allow quadrotors to grasp and perch on branches or beams and pick up and transport payloads. We then show how the robot can use rigid body dynamic models and sensing to verify a grasp, to estimate the the inertial parameters of the grasped object, and to adapt the controller and improve performance during flight. We present experimental results with different grippers and different payloads and show the robot's ability to estimate the mass, the location of the center of mass and the moments of inertia to improve tracking performance.",
"title": ""
}
] |
[
{
"docid": "c8f6eac662b30768b2e64b3bd3502e73",
"text": "This paper discusses the use of genetic programming (GP) and genetic algorithms (GA) to evolve solutions to a problem in robot control. GP is seen as an intuitive evolutionary method while GAs require an extra layer of human intervention. The infrastructures for the different evolutionary approaches are compared.",
"title": ""
},
{
"docid": "a20b684deeb401855cbdc12cab90610a",
"text": "A zero knowledge interactive proof system allows one person to convince another person of some fact without revealing the information about the proof. In particular, it does not enable the verifier to later convince anyone else that the prover has a proof of the theorem or even merely that the theorem is true (much less that he himself has a proof). This paper reviews the field of zero knowledge proof systems giving a brief overview of zero knowledge proof systems and the state of current research in this field.",
"title": ""
},
{
"docid": "fb620cb18ffe65b78c338ce4ee8414ba",
"text": "The impact of digital image processing is increasing by the day for its use in the medical and research areas. Medical image classification scheme has been on the increase in order to help physicians and medical practitioners in their evaluation and analysis of diseases. Several classification schemes such as Artificial Neural Network (ANN), Bayes Classification, Support Vector Machine (SVM) and K-Means Nearest Neighbor have been used. In this paper, we evaluate and compared the performance of SVM and PCA by analyzing diseased image of the brain (Alzheimer) and normal (MRI) brain. The results show that Principal Components Analysis outperforms the Support Vector Machine in terms of training time and recognition time.",
"title": ""
},
{
"docid": "6b19d08c9aa6ecfec27452a298353e1f",
"text": "This paper presents the recent development in automatic vision based technology. Use of this technology is increasing in agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defected tomato detection discussed here. The main aim of this system is to replace the manual inspection system. This helps in speed up the process improve accuracy and efficiency and reduce time. This system collect image from camera which is placed on conveyor belt. Then image processing is done to get required features of fruits such as texture, color and size. Defected fruit is detected based on blob detection, color detection is done based on thresholding and size detection is based on binary image of tomato. Sorting is done based on color and grading is done based on size.",
"title": ""
},
{
"docid": "5100ef5ffa501eb7193510179039cd82",
"text": "The interplay between caching and HTTP Adaptive Streaming (HAS) is known to be intricate, and possibly detrimental to QoE. In this paper, we make the case for caching-aware rate decision algorithms at the client side which do not require any collaboration with cache or server. To this goal, we introduce the optimization model which allows to compute the optimal rate decisions in the presence of cache, and compare the current main representatives of HAS algorithms (RBA and BBA) to this optimal. This allows us to assess how far from the optimal these versions are, and on which to build a caching-aware rate decision algorithm.",
"title": ""
},
{
"docid": "91d59b5e08c711e25d83785c198d9ae1",
"text": "The increase in the wireless users has led to the spectrum shortage problem. Federal Communication Commission (FCC) showed that licensed spectrum bands are underutilized, specially TV bands. The IEEE 802.22 standard was proposed to exploit these white spaces in the (TV) frequency spectrum. Cognitive Radio allows unlicensed users to use licensed bands while safeguarding the priority of licensed users. Cognitive Radio is composed of two types of users, licensed users also known as Primary Users(PUs) and unlicensed users also known as Secondary Users(SUs).SUs use the resources when spectrum allocated to PU is vacant, as soon as PU become active, the SU has to leave the channel for PU. Hence the opportunistic access is provided by CR to SUs whenever the channel is vacant. Cognitive Users sense the spectrum continuously and share this sensing information to other SUs, during this spectrum sensing, the network is vulnerable to so many attacks. One of these attacks is Primary User Emulation Attack (PUEA), in which the malicious secondary users can mimic the characteristics of primary users thereby causing legitimate SUs to erroneously identify the attacker as a primary user, and to gain access to wireless channels. PUEA is of two types: Selfish and Malicious attacker. A selfish attacker aims in stealing Bandwidth form legitimate SUs for its own transmissions while malicious attacker mimic the characteristics of PU.",
"title": ""
},
{
"docid": "00c08c490ea03030d95a79c07a257608",
"text": "Knowledge about a product’s willingness-to-pay on behalf of its (potential) customers plays a crucial role in many areas of marketing management like pricing decisions or new product development. Numerous approaches to measure willingness-to-pay with differential conceptual foundations and methodological implications have been presented in the relevant literature so far. This article provides the reader with a systematic overview of the relevant literature on these competing approaches and associated schools of thought, recognizes their respective merits and discusses obstacles and issues regarding their adoption to measuring willingness-to-pay. Because of its practical relevance, special focus will be put on indirect surveying techniques and, in particular, conjoint-based applications will be discussed in more detail. The strengths and limitations of the individual approaches are discussed and evaluated from a managerial point of view.",
"title": ""
},
{
"docid": "90ecdad8743f134fb07489cee9ce15ef",
"text": "As one of the most successful fast food chain in the world, throughout the development of McDonald’s, we could easily identify many successful business strategy implementations. In this paper, I will discuss some critical business strategies, which linked to the company’s structure and external environment. This paper is organized as follows: In the first section, I will give brief introduction to the success of McDonald’s. In the second section, I will analyze some particular strategies used by McDonald’s and how these strategies are suitable to their business structure. I will then analyze why McDonald’s choose these strategies in response to the changing external environment. Finally, I will summarize the approaches used by McDonald’s to achieve their strategic goals.",
"title": ""
},
{
"docid": "aac41bca030aecec0c8cc3cfaaf02a9e",
"text": "This paper started with the review of the history of technology acceptance model from TRA to UTAUT. The expected contribution is to bring to lime light the current development stage of the technology acceptance model. Based on this, the paper examined the impact of UTAUT model on ICT acceptance and usage in HEIs. The UTAUT model theory was verified using regressions analysis to understand the behavioral intention of the ADSU academic staffs’ acceptance and use of ICT in their workplace. The research objective is to measure the most influential factors for the acceptance and usage of ICT by ADSU academic staff and to identify the barriers. Two null hypotheses were stated: (1) the academic staff of ADSU rejects acceptance and usage of ICT in their workplace. (2) UTAUT does not predict the successful acceptance of ICT by the academic staff of the Adamawa State University. In summary, our findings shows that the four constructs of UTAUT have significant positive influence and impact on the behavioral intention to accept and use ICT by the ADSU academic staff. This shows that university academic staff will intend to use ICT that they believe will improve their job performance and are easy to use. The facilitating conditions such as appropriate hardware, software, training and support should be in place by the management. In the Adamawa State University, EE and SI are found to be the most influential predictors of academic staff acceptance of ICT and use among the four constructs of UTAUT. The greatest barriers are time and technical support for staff. Knowledge gained from the study is beneficial to both the university academic staff and the Nigerian ICT policy makers.",
"title": ""
},
{
"docid": "eb4f7427eb73ac0a0486e8ecb2172b52",
"text": "In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As it is customary in iterative optimization, in each iteration the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed form solution (per iteration) which is of low computational complexity, the latter property being particularly strong in our inverse version. The proposed schemes are tested against the forward additive Lucas-Kanade and the simultaneous inverse compositional algorithm through simulations. Under noisy conditions and photometric distortions our forward version achieves more accurate alignments and exhibits faster convergence whereas our inverse version has similar performance as the simultaneous inverse compositional algorithm but at a lower computational complexity.",
"title": ""
},
{
"docid": "af1dab317f2a5b45593a89d96a8061de",
"text": "Software engineering is forecast to be among the fastest growing employment field in the next decades. The purpose of this investigation is two-fold: Firstly, empirical studies on the personality types of software professionals are reviewed. Secondly, this work provides an upto-date personality profile of software engineers according to the Myers–Briggs Type Indicator. r 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ced57c0315603691bd2c185bcb83e6c5",
"text": "There has been a good amount of progress in sentiment analysis over the past 10 years, including the proposal of new methods and the creation of benchmark datasets. In some papers, however, there is a tendency to compare models only on one or two datasets, either because of time restraints or because the model is tailored to a specific task. Accordingly, it is hard to understand how well a certain model generalizes across different tasks and datasets. In this paper, we contribute to this situation by comparing several models on six different benchmarks, which belong to different domains and additionally have different levels of granularity (binary, 3-class, 4-class and 5-class). We show that BiLSTMs perform well across datasets and that both LSTMs and Bi-LSTMs are particularly good at fine-grained sentiment tasks (i. e., with more than two classes). Incorporating sentiment information into word embeddings during training gives good results for datasets that are lexically similar to the training data. With our experiments, we contribute to a better understanding of the performance of different model architectures on different data sets. Consequently, we detect novel state-of-the-art results on the SenTube datasets.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "cbeaacd304c0fcb1bce3decfb8e76e33",
"text": "One of the main problems with virtual reality as a learning tool is that there are hardly any theories or models upon which to found and justify the application development. This paper presents a model that defends the metaphorical design of educational virtual reality systems. The goal is to build virtual worlds capable of embodying the knowledge to be taught: the metaphorical structuring of abstract concepts looks for bodily forms of expression in order to make knowledge accessible to students. The description of a case study aimed at learning scientific categorization serves to explain and implement the process of metaphorical projection. Our proposals are based on Lakoff and Johnson's theory of cognition, which defends the conception of the embodied mind, according to which most of our knowledge relies on basic metaphors derived from our bodily experience.",
"title": ""
},
{
"docid": "9be80d8f93dd5edd72ecd759993935d6",
"text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].",
"title": ""
},
{
"docid": "43bfbebda8dcb788057e1c98b7fccea6",
"text": "Der Beitrag stellt mit Quasar Enterprise einen durchgängigen, serviceorientierten Ansatz zur Gestaltung großer Anwendungslandschaften vor. Er verwendet ein Architektur-Framework zur Strukturierung der methodischen Schritte und führt ein Domänenmodell zur Präzisierung der Begrifflichkeiten und Entwicklungsartefakte ein. Die dargestellten methodischen Bausteine und Richtlinien beruhen auf langjährigen Erfahrungen in der industriellen Softwareentwicklung. 1 Motivation und Hintergrund sd&m beschäftigt sich seit seiner Gründung vor 25 Jahren mit dem Bau von individuellen Anwendungssystemen. Als konsolidierte Grundlage der Arbeit in diesem Bereich wurde Quasar (Quality Software Architecture) entwickelt – die sd&m StandardArchitektur für betriebliche Informationssysteme [Si04]. Quasar dient sd&m als Referenz für seine Disziplin des Baus einzelner Anwendungen. Seit einigen Jahren beschäftigt sich sd&m im Auftrag seiner Kunden mehr und mehr mit Fragestellungen auf der Ebene ganzer Anwendungslandschaften. Das Spektrum reicht von IT-Beratung zur Unternehmensarchitektur, über die Systemintegration querschnittlicher technischer, aber auch dedizierter fachlicher COTS-Produkte bis hin zum Bau einzelner großer Anwendungssysteme auf eine Art und Weise, dass eine perfekte Passung in eine moderne Anwendungslandschaft gegeben ist. Zur Abdeckung dieses breiten Spektrums an Aufgaben wurde eine neue Disziplin zur Gestaltung von Anwendungslandschaften benötigt. sd&m entwickelte hierzu eine neue Referenz – Quasar Enterprise – ein Quasar auf Unternehmensebene.",
"title": ""
},
{
"docid": "0b6a766d3e23cd15ba748961a00a569b",
"text": "A novel soft strain sensor capable of withstanding strains of up to 100% is described. The sensor is made of a hyperelastic silicone elastomer that contains embedded microchannels filled with conductive liquids. This is an effort of improving the previously reported soft sensors that uses a single liquid conductor. The proposed sensor employs a hybrid approach involving two liquid conductors: an ionic solution and an eutectic gallium-indium alloy. This hybrid method reduces the sensitivity to noise that may be caused by variations in electrical resistance of the wire interface and undesired stress applied to signal routing areas. The bridge between these two liquids is made conductive by doping the elastomer locally with nickel nanoparticles. The design, fabrication, and characterization of the sensor are presented.",
"title": ""
},
{
"docid": "a550969fc708fa6d7898ea29c0cedef8",
"text": "This paper describes the findings of a research project whose main objective is to compile a character frequency list based on a very large collection of Chinese texts collected from various online sources. As compared with several previous studies on Chinese character frequencies, this project uses a much larger corpus that not only covers more subject fields but also contains a better proportion of informative versus imaginative Modern Chinese texts. In addition, this project also computes two bigram frequency lists that can be used for compiling a list of most frequently used two-character words in Chinese.",
"title": ""
},
{
"docid": "caf2fa85a302c289decab3a2a5b56566",
"text": "Cross-domain research topic mining can help users find relationships among related research domains and obtain a quick overview of these domains. This study investigates the evolution of crossdomain topics of three interdisciplinary research domains and uses a visual analytic approach to determine unique topics for each domain. This study also focuses on topic evolution over 10 years and on individual topics of cross domains. A hierarchical topic model is adopted to extract topics of three different domains and to correlate the extracted topics. A simple yet effective visualization interface is then designed, and certain interaction operations are provided to help users more deeply understand the visualization development trend and the correlation among the three domains. Finally, a case study is conducted to demonstrate the effectiveness of the proposed method.",
"title": ""
}
] |
scidocsrr
|
ce5d34160c148690b2f8e5c0c959645e
|
Algorithm Visualization: The State of the Field
|
[
{
"docid": "b43118e150870aab96af1a7b32515202",
"text": "Algorithm visualization (AV) technology graphically illustrates how algorithms work. Despite the intuitive appeal of the technology, it has failed to catch on in mainstream computer science education. Some have attributed this failure to the mixed results of experimental studies designed to substantiate AV technology’s educational effectiveness. However, while several integrative reviews of AV technology have appeared, none has focused specifically on the software’s effectiveness by analyzing this body of experimental studies as a whole. In order to better understand the effectiveness of AV technology, we present a systematic metastudy of 24 experimental studies. We pursue two separate analyses: an analysis of independent variables, in which we tie each study to a particular guiding learning theory in an attempt to determine which guiding theory has had the most predictive success; and an analysis of dependent variables, which enables us to determine which measurement techniques have been most sensitive to the learning benefits of AV technology. Our most significant finding is that how students use AV technology has a greater impact on effectiveness than what AV technology shows them. Based on our findings, we formulate an agenda for future research into AV effectiveness. A META-STUDY OF ALGORITHM VISUALIZATION EFFECTIVENESS 3",
"title": ""
}
] |
[
{
"docid": "7c2c987c2fc8ea0b18d8361072fa4e31",
"text": "Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated overengineering on IR, and not necessarily performance gain for QA. We propose to tightly integrate them by coupling automatically learned features for answer extraction to a shallow-structured IR model. Our method is very quick to implement, and significantly improves IR for QA (measured in Mean Average Precision and Mean Reciprocal Rank) by 10%-20% against an uncoupled retrieval baseline in both document and passage retrieval, which further leads to a downstream 20% improvement in QA F1.",
"title": ""
},
{
"docid": "a0c381d5dfa8b49ed2146ef8aef78335",
"text": "The goal of this research is to build a model to predict stock price movement using sentiments on social media. A new feature which captures topics and their sentiments simultaneously is introduced in the prediction model. In addition, a new topic model TSLDA is proposed to obtain this feature. Our method outperformed a model using only historical prices by about 6.07% in accuracy. Furthermore, when comparing to other sentiment analysis methods, the accuracy of our method was also better than LDA and JST based methods by 6.43% and 6.07%. The results show that incorporation of the sentiment information from social media can help to improve the stock prediction.",
"title": ""
},
{
"docid": "c5a225211a7240da086299e45bddf6e3",
"text": "This communication presents a technique to re-direct the radiation beam from a planar antenna in a specific direction with the inclusion of metamaterial loading. The beam-tilting approach described here uses the phenomenon based on phase change resulting from an EM wave entering a medium of different refractive index. The metamaterial H-shaped unit-cell structure is configured to provide a high refractive index which was used to implement beam tilting in a bow-tie antenna. The fabricated unit-cell was first characterized by measuring its S-parameters. Hence, a two dimensional array was constructed using the proposed unit-cell to create a region of high refractive index which was implemented in the vicinity bow-tie structure to realize beam-tilting. The simulation and experimental results show that the main beam of the antenna in the E-plane is tilted by 17 degrees with respect to the end-fire direction at 7.3, 7.5, and 7.7 GHz. Results also show unlike conventional beam-tilting antennas, no gain drop is observed when the beam is tilted; in fact there is a gain enhancement of 2.73 dB compared to the original bow-tie antenna at 7.5 GHz. The reflection-coeflicient of the antenna remains <; - 10 dB in the frequency range of operation.",
"title": ""
},
{
"docid": "32a2f92bffc2d616fb95830fa30ece24",
"text": "The huge number of points scanned from pipeline plants make the plant reconstruction very difficult. Traditional cylinder detection methods cannot be applied directly due to the high computational complexity. In this paper, we explore the structural characteristics of point cloud in pipeline plants and define a structure feature. Based on the structure feature, we propose a hierarchical structure detection and decomposition method that reduces the difficult pipeline-plant reconstruction problem in R3 into a set of simple circle detection problems in R2. Experiments with industrial applications are presented, which demonstrate the efficiency of the proposed structure detection method.",
"title": ""
},
{
"docid": "2461a83b1da812bfdce3a802a2fed972",
"text": "Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. SIGNSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative `1/`2 geometry of gradients, noise and curvature informs whether SIGNSGD or SGD is theoretically better suited to a particular problem. On the practical side we find that the momentum counterpart of SIGNSGD is able to match the accuracy and convergence speed of ADAM on deep Imagenet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss (1823) we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD.",
"title": ""
},
{
"docid": "8d046c8468102edd57ba30d9d1992c55",
"text": "In this paper, we present a LinkNet-based architecture with SE-ResNeXt-50 encoder and a novel training strategy that strongly relies on image preprocessing and incorporating distorted network outputs. The architecture combines a pre-trained convolutional encoder and a symmetric expanding path that enables precise localization. We show that such a network can be trained on plain RGB images with a composite loss function and achieves competitive results on the DeepGlobe challenge on building extraction from satellite images",
"title": ""
},
{
"docid": "8015f5668df95f83e353550d54eac4da",
"text": "Counterfeit currency is a burning question throughout the world. The counterfeiters are becoming harder to track down because of their rapid adoption of and adaptation with highly advanced technology. One of the most effective methods to stop counterfeiting can be the widespread use of counterfeit detection tools/software that are easily available and are efficient in terms of cost, reliability and accuracy. This paper presents a core software system to build a robust automated counterfeit currency detection tool for Bangladeshi bank notes. The software detects fake currency by extracting existing features of banknotes such as micro-printing, optically variable ink (OVI), water-mark, iridescent ink, security thread and ultraviolet lines using OCR (Optical Character recognition), Contour Analysis, Face Recognition, Speeded UP Robust Features (SURF) and Canny Edge & Hough transformation algorithm of OpenCV. The success rate of this software can be measured in terms of accuracy and speed. This paper also focuses on the pros and cons of implementation details that may degrade the performance of image processing based paper currency authentication systems.",
"title": ""
},
{
"docid": "58b5c0628b2b964aa75d65a241f028d7",
"text": "This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.",
"title": ""
},
{
"docid": "72a01822f817e238812f9722629cf4dc",
"text": "Machine learning is increasingly used in high impact applications such as prediction of hospital re-admission, cancer screening or bio-medical research applications. As predictions become increasingly accurate, practitioners may be interested in identifying actionable changes to inputs in order to alter their class membership. For example, a doctor might want to know what changes to a patient’s status would predict him/her to not be re-admitted to the hospital soon. Szegedy et al. (2013b) demonstrated that identifying such changes can be very hard in image classification tasks. In fact, tiny, imperceptible changes can result in completely different predictions without any change to the true class label of the input. In this paper we ask the question if we can make small but meaningful changes in order to truly alter the class membership of images from a source class to a target class. To this end we propose deep manifold traversal, a method that learns the manifold of natural images and provides an effective mechanism to move images from one area (dominated by the source class) to another (dominated by the target class).The resulting algorithm is surprisingly effective and versatile. It allows unrestricted movements along the image manifold and only requires few images from source and target to identify meaningful changes. We demonstrate that the exact same procedure can be used to change an individual’s appearance of age, facial expressions or even recolor black and white images.",
"title": ""
},
{
"docid": "9e42fd0754365eb534b1887ba1002608",
"text": "Despite the success of existing works on single-turn conversation generation, taking the coherence in consideration, human conversing is actually a context-sensitive process. Inspired by the existing studies, this paper proposed the static and dynamic attention based approaches for context-sensitive generation of open-domain conversational responses. Experimental results on two public datasets show that the proposed static attention based approach outperforms all the baselines on automatic and human evaluation.",
"title": ""
},
{
"docid": "3877a5f89e36bca45660193c04ad170b",
"text": "Adversarial samples are perturbed inputs crafted to mislead the machine learning systems. A training mechanism, called adversarial training, which presents adversarial samples along with clean samples has been introduced to learn robust models. In order to scale adversarial training for large datasets, these perturbations can only be crafted using fast and simple methods (e.g., gradient ascent). However, it is shown that adversarial training converges to a degenerate minimum, where the model appears to be robust by generating weaker adversaries. As a result, the models are vulnerable to simple black-box attacks. In this paper we, (i) demonstrate the shortcomings of existing evaluation policy, (ii) introduce novel variants of white-box and black-box attacks, dubbed “gray-box adversarial attacks” based on which we propose novel evaluation method to assess the robustness of the learned models, and (iii) propose a novel variant of adversarial training, named “Graybox Adversarial Training” that uses intermediate versions of the models to seed the adversaries. Experimental evaluation demonstrates that the models trained using our method exhibit better robustness compared to both undefended and adversarially trained models.",
"title": ""
},
{
"docid": "2271347e3b04eb5a73466aecbac4e849",
"text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method",
"title": ""
},
{
"docid": "e6021e334415240dd813fa2baae36773",
"text": "In this study, we propose a discriminative training algorithm to jointly minimize mispronunciation detection errors (i.e., false rejections and false acceptances) and diagnosis errors (i.e., correctly pinpointing mispronunciations but incorrectly stating how they are wrong). An optimization procedure, similar to Minimum Word Error (MWE) discriminative training, is developed to refine the ML-trained HMMs. The errors to be minimized are obtained by comparing transcribed training utterances (including mispronunciations) with Extended Recognition Networks [3] which contain both canonical pronunciations and explicitly modeled mispronunciations. The ERN is compiled by handcrafted rules, or data-driven rules. Several conclusions can be drawn from the experiments: (1) data-driven rules are more effective than hand-crafted ones in capturing mispronunciations; (2) compared with the ML training baseline, discriminative training can reduce false rejections and diagnostic errors, though false acceptances increase slightly due to a small number of false-acceptance samples in the training set.",
"title": ""
},
{
"docid": "a267fadc2875fc16b69635d4592b03ae",
"text": "We investigated neural correlates of human visual orienting using event-related functional magnetic resonance imaging (fMRI). When subjects voluntarily directed attention to a peripheral location, we recorded robust and sustained signals uniquely from the intraparietal sulcus (IPs) and superior frontal cortex (near the frontal eye field, FEF). In the ventral IPs and FEF only, the blood oxygen level dependent signal was modulated by the direction of attention. The IPs and FEF also maintained the most sustained level of activation during a 7-sec delay, when subjects maintained attention at the peripheral cued location (working memory). Therefore, the IPs and FEF form a dorsal network that controls the endogenous allocation and maintenance of visuospatial attention. A separate right hemisphere network was activated by the detection of targets at unattended locations. Activation was largely independent of the target's location (visual field). This network included among other regions the right temporo-parietal junction and the inferior frontal gyrus. We propose that this cortical network is important for reorienting to sensory events.",
"title": ""
},
{
"docid": "b722f2fbdf20448e3a7c28fc6cab026f",
"text": "Alternative Mechanisms Rationale/Arguments/ Assumptions Connected Literature/Theory Resulting (Possible) Effect Support for/Against A1. Based on WTP and Exposure Theory A1a Light user segments (who are likely to have low WTP) are more likely to reduce (or even discontinue in extreme cases) their consumption of NYT content after the paywall implementation. Utility theory — WTP (Danaher 2002) Juxtaposing A1a and A1b leads to long tail effect due to the disproportionate reduction of popular content consumption (as a results of reduction of content consumption by light users). A1a. Supported (see the descriptive statistics in Table 11). A1b. Supported (see results from the postestimation of finite mixture model in Table 9) Since the resulting effects as well as both the assumptions (A1a and A1b) are supported, we suggest that there is support for this mechanism. A1b Light user segments are more likely to consume popular articles whereas the heavy user segment is more likely to consume a mix of niche articles and popular content. Exposure theory (McPhee 1963)",
"title": ""
},
{
"docid": "b816e3f9b164cdf100d9c846b79b6352",
"text": "Visualization provides a powerful means for data analysis. But to be practical, visual analytics tools must support smooth and flexible use of visualizations at a fast rate. This becomes increasingly onerous with the ever-increasing size of real-world datasets. First, large databases make interaction more difficult once query response time exceeds several seconds. Second, any attempt to show all data points will overload the visualization, resulting in chaos that will only confuse the user. Over the last few years, substantial effort has been put into addressing both of these issues and many innovative solutions have been proposed. Indeed, data visualization is a topic that is too large to be addressed in a single survey paper. Thus, we restrict our attention here to interactive visualization of large data sets. Our focus then is skewed in a natural way towards query processing problem-provided by an underlying database system-rather than to the actual data visualization problem.",
"title": ""
},
{
"docid": "9a91945c24923d571f99998aaa9a9305",
"text": "Automatic text summarization is widely regarded as the highly difficult problem, partially because of the lack of large text summarization data set. Due to the great challenge of constructing the large scale summaries for full text, in this paper, we introduce a large corpus of Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo, which is released to the public1. This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. We also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. Based on the corpus, we introduce recurrent neural network for the summary generation and achieve promising results, which not only shows the usefulness of the proposed corpus for short text summarization research, but also provides a baseline for further research on this topic.",
"title": ""
},
{
"docid": "bade302d28048eeb0578e5289e7dba23",
"text": "The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry. HPC Component Architecture 4",
"title": ""
},
{
"docid": "05bc0aa39909125e0350cbe5bac656ac",
"text": "This paper describes an antenna array configuration for the implementation in a UWB monopulse radar. The measurement results of the gain in the sum and difference mode are presented. Next the transformation of the monopulse technique into the time domain by the evaluation of the impulse response is shown. A look-up table with very high dynamic of over 25 dB and flat characteristic is obtained. The unambiguous range of sensing is approx. 40° in the angular direction. This novel combination of UWB technology and the monopulse radar principle allows for very precise sensing, where UWB assures high precision in the range direction and monopulse principle in the angular direction.",
"title": ""
},
{
"docid": "7c4c33097c12f55a08f8a7cc3634c5cb",
"text": "Pattern queries are widely used in complex event processing (CEP) systems. Existing pattern matching techniques, however, can provide only limited performance for expensive queries in real-world applications, which may involve Kleene closure patterns, flexible event selection strategies, and events with imprecise timestamps. To support these expensive queries with high performance, we begin our study by analyzing the complexity of pattern queries, with a focus on the fundamental understanding of which features make pattern queries more expressive and at the same time more computationally expensive. This analysis allows us to identify performance bottlenecks in processing those expensive queries, and provides key insights for us to develop a series of optimizations to mitigate those bottlenecks. Microbenchmark results show superior performance of our system for expensive pattern queries while most state-of-the-art systems suffer from poor performance. A thorough case study on Hadoop cluster monitoring further demonstrates the efficiency and effectiveness of our proposed techniques.",
"title": ""
}
] |
scidocsrr
|
8b62e8816df7c0eb19eebef1b81e8f8d
|
Color-Guided Depth Map Super Resolution Using Convolutional Neural Network
|
[
{
"docid": "0e88f1e55c4162d5778f353336ac3eb9",
"text": "Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.",
"title": ""
}
] |
[
{
"docid": "cf8915016c6a6d6537fbd368238c81f3",
"text": "A 5-year-old boy was followed up with migratory spermatic cord and a perineal tumour at the paediatric department after birth. He was born by Caesarean section at 38 weeks in viviparity. Weight at birth was 3650 g. Although a meningocele in the sacral region was found by MRI, there were no symptoms in particular and no other deformity was found. When he was 4 years old, he presented to our department with the perinal tumour. On examination, a slender scrotum-like tumour covering the centre of the perineal lesion, along with inflammation and ulceration around the skin of the anus, was observed. Both testes and scrotums were observed in front of the tumour (Figure 1a). An excision of the tumour and Z-plasty of the perineal lesion were performed. The subcutaneous tissue consisted of adipose tissue-like lipoma and was resected along with the tumour (Figure 1b). A Z-plasty was carefully performed in order to maintain the lefteright symmetry of the",
"title": ""
},
{
"docid": "ab932e771c091f0f862dcedb96d6d202",
"text": "We present a simple and effective architecture for fine-grained visual recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs belong to the class of orderless texture representations but unlike prior work they can be trained in an end-to-end manner. Our most accurate model obtains 84.1%, 79.4%, 86.9% and 91.3% per-image accuracy on the Caltech-UCSD birds [67], NABirds [64], FGVC aircraft [42], and Stanford cars [33] dataset respectively and runs at 30 frames-per-second on a NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) are also effective for other image classification tasks such as texture and scene recognition, and (3) can be trained from scratch on the ImageNet dataset offering consistent improvements over the baseline architecture. Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at http://vis-www.cs.umass.edu/bcnn.",
"title": ""
},
{
"docid": "bfffa7e471235f2fcb611f5d39fb77a4",
"text": "We describe a parallel implementation of a block triangular preconditioner based on the modified augmented Lagrangian approach to the steady incompressible Navier–Stokes equations. The equations are linearized by Picard iteration and discretized with various finite element and finite difference schemes on two- and three-dimensional domains. We report strong scalability results for up to 64 cores.",
"title": ""
},
{
"docid": "042be1fb7939384cf03ecd354f10e35f",
"text": "Text mining is a flexible technology that can be applied to numerous different tasks in biology and medicine. We present a system for extracting disease-gene associations from biomedical abstracts. The system consists of a highly efficient dictionary-based tagger for named entity recognition of human genes and diseases, which we combine with a scoring scheme that takes into account co-occurrences both within and between sentences. We show that this approach is able to extract half of all manually curated associations with a false positive rate of only 0.16%. Nonetheless, text mining should not stand alone, but be combined with other types of evidence. For this reason, we have developed the DISEASES resource, which integrates the results from text mining with manually curated disease-gene associations, cancer mutation data, and genome-wide association studies from existing databases. The DISEASES resource is accessible through a web interface at http://diseases.jensenlab.org/, where the text-mining software and all associations are also freely available for download.",
"title": ""
},
{
"docid": "85da95f8d04a8c394c320d2cce25a606",
"text": "Improved numerical weather prediction simulations have led weather services to examine how and where human forecasters add value to forecast production. The Forecast Production Assistant (FPA) was developed with that in mind. The authors discuss the Forecast Generator (FOG), the first application developed on the FPA. FOG is a bilingual report generator that produces routine and special purpose forecast directly from the FPA's graphical weather predictions. Using rules and a natural-language generator, FOG converts weather maps into forecast text. The natural-language issues involved are relevant to anyone designing a similar system.<<ETX>>",
"title": ""
},
{
"docid": "26e60be4012b20575f3ddee16f046daa",
"text": "Natural scene character recognition is challenging due to the cluttered background, which is hard to separate from text. In this paper, we propose a novel method for robust scene character recognition. Specifically, we first use robust principal component analysis (PCA) to denoise character image by recovering the missing low-rank component and filtering out the sparse noise term, and then use a simple Histogram of oriented Gradient (HOG) to perform image feature extraction, and finally, use a sparse representation based classifier for recognition. In experiments on four public datasets, namely the Char74K dataset, ICADAR 2003 robust reading dataset, Street View Text (SVT) dataset and IIIT5K-word dataset, our method was demonstrated to be competitive with the state-of-the-art methods.",
"title": ""
},
{
"docid": "654e6d2e1d1160a6dd7180abcce0f8bd",
"text": "E-government research has become a recognized research domain and many policies and strategies are formulated for e-government implementations. Most of these target the next few years and limited attention has been giving to the long term. The eGovRTD2020, a European Commission co-funded project, investigated the future research on e-government driven by changing circumstances and the evolution of technology. This project consists of an analysis of the state of play, a scenario-building, a gap analysis and a roadmapping activity. In this paper the roadmapping methodology fitting the unique characteristics of the e-government field is presented and the results are briefly discussed. The use of this methodology has resulted in the identification of a large number of e-government research themes. It was found that a roadmapping methodology should match the unique characteristics of e-government. The research shows the need of multidisciplinary research.",
"title": ""
},
{
"docid": "90216e972141f4a8154609b4ce43c15e",
"text": "Recently, we proposed and developed the context-dependent deep neural network hidden Markov models (CD-DNN-HMMs) for large vocabulary speech recognition and achieved highly promising recognition results including over one third fewer word errors than the discriminatively trained, conventional HMM-based systems on the 300hr Switchboard benchmark task. In this paper, we extend DNNs to deep tensor neural networks (DTNNs) in which one or more layers are double-projection and tensor layers. The basic idea of the DTNN comes from our realization that many factors interact with each other to predict the output. To represent these interactions, we project the input to two nonlinear subspaces through the double-projection layer and model the interactions between these two subspaces and the output neurons through a tensor with three-way connections. Evaluation on 30hr Switchboard task indicates that DTNNs can outperform DNNs with similar number of parameters with 5% relative word error reduction.",
"title": ""
},
{
"docid": "7fb2348fbde9dbef88357cc79ff394c5",
"text": "This paper presents a measurement system with capacitive sensor connected to an open-source electronic platform Arduino Uno. A simple code was modified in the project, which ensures that the platform works as interface for the sensor. The code can be modified and upgraded at any time to fulfill other specific applications. The simulations were carried out in the platform's own environment and the collected data are represented in graphical form. Accuracy of developed measurement platform is 0.1 pF.",
"title": ""
},
{
"docid": "0acb5c6a415ab6eba1ae7f5fd7e74e97",
"text": "Fingerprints, the oldest and most widespread biometric identification system are commonly used for criminal investigation in forensic Science; there is minute statistical theory on the Rareness of fingerprint minutiae. A critical step in studying the statistics of Fingerprint minutiae is to reliably extract minutiae from the fingerprint images. However, fingerprint images are rarely of perfect quality. They may be degraded and corrupted due to variations in skin and impression conditions. Thus, image Enhancement techniques are employed prior to minutiae extraction to obtain a more reliable estimation of minutiae locations.",
"title": ""
},
{
"docid": "e2dbb82a3b51102c2fdf0de1af3735f0",
"text": "In the past two decades, we have witnessed significant progress in developing high performance stimuli-responsive polymeric materials. This review focuses on recent developments in the preparation and application of patterned stimuli-responsive polymers, including thermoresponsive layers, pH/ionic-responsive hydrogels, photo-responsive film, magnetically-responsive composites, electroactive composites, and solvent-responsive composites. Many important new applications for stimuli-responsive polymers lie in the field of nano- and micro-fabrication, where stimuli-responsive polymers are being established as important manipulation tools. Some techniques have been developed to selectively position organic molecules and then to obtain well-defined patterned substrates at the micrometer or submicrometer scale. Methods for patterning of stimuli-responsive hydrogels, including photolithography, electron beam lithography, scanning probe writing, and printing techniques (microcontact printing, ink-jet printing) were surveyed. We also surveyed the applications of nanostructured stimuli-responsive hydrogels, such as biotechnology (biological interfaces and purification of biomacromoles), switchable wettability, sensors (optical sensors, biosensors, chemical sensors), and actuators.",
"title": ""
},
{
"docid": "aa5daa83656a2265dc27ec6ee5e3c1cb",
"text": "Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGCbased customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable as a source of customer needs for product development, likely morevaluable, than conventional methods, and (2) machine-learning methods improve efficiency of identifying customer needs from UGC (unique customer needs per unit of professional services cost).",
"title": ""
},
{
"docid": "d51831fd0aef7085c6e3c33d1c4d2c92",
"text": "Ransomware continues to be one of the most crucial cyber threats and is actively threatening IT users around the world. In recent years, it has become a phenomenon and traumatic threat to individuals, governments and organizations. Ransomwares not only penalized computational operations, it also mercilessly extorts huge amount of money from the victims if the victims want to regain back access to the system and files. As such, the cybercriminals are making millions of profits and keep on spreading new variants of ransomware. This paper discusses about ransomware and some related works in fighting this threat.",
"title": ""
},
{
"docid": "c6780317e8b4b41a27d8be813d51e050",
"text": "The neural mechanisms by which intentions are transformed into actions remain poorly understood. We investigated the network mechanisms underlying spontaneous voluntary decisions about where to focus visual-spatial attention (willed attention). Graph-theoretic analysis of two independent datasets revealed that regions activated during willed attention form a set of functionally-distinct networks corresponding to the frontoparietal network, the cingulo-opercular network, and the dorsal attention network. Contrasting willed attention with instructed attention (where attention is directed by external cues), we observed that the dorsal anterior cingulate cortex was allied with the dorsal attention network in instructed attention, but shifted connectivity during willed attention to interact with the cingulo-opercular network, which then mediated communications between the frontoparietal network and the dorsal attention network. Behaviorally, greater connectivity in network hubs, including the dorsolateral prefrontal cortex, the dorsal anterior cingulate cortex, and the inferior parietal lobule, was associated with faster reaction times. These results, shown to be consistent across the two independent datasets, uncover the dynamic organization of functionally-distinct networks engaged to support intentional acts.",
"title": ""
},
{
"docid": "7b1f880c76d50f9bdec264ac589424c0",
"text": "In the software design, protecting a computer system from a plethora of software attacks or malware in the wild has been increasingly important. One branch of research to detect the existence of attacks or malware, there has been much work focused on modeling the runtime behavior of a program. Stemming from the seminal work of Forrest et al., one of the main tools to model program behavior is system call sequences. Unfortunately, however, since mimicry attacks were proposed, program behavior models based solely on system call sequences could no longer ensure the security of systems and require additional information that comes with its own drawbacks. In this paper, we report our preliminary findings in our research to build a mimicry resilient program behavior model that has lesser drawbacks. We employ branch sequences to harden our program behavior model against mimicry attacks while employing hardware features for efficient extraction of such branch information during program runtime. In order to handle the large scale of branch sequences, we also employ LSTM, the de facto standard in deep learning based sequence modeling and report our preliminary experiments on its interaction with program branch sequences.",
"title": ""
},
{
"docid": "a66dd42b9d9b8912726e278e4f2da411",
"text": "A significant amount of marine debris has accumulated in the North Pacific Central Gyre (NPCG). The effects on larger marine organisms have been documented through cases of entanglement and ingestion; however, little is known about the effects on lower trophic level marine organisms. This study is the first to document ingestion and quantify the amount of plastic found in the gut of common planktivorous fish in the NPCG. From February 11 to 14, 2008, 11 neuston samples were collected by manta trawl in the NPCG. Plastic from each trawl and fish stomach was counted and weighed and categorized by type, size class and color. Approximately 35% of the fish studied had ingested plastic, averaging 2.1 pieces per fish. Additional studies are needed to determine the residence time of ingested plastics and their effects on fish health and the food chain implications.",
"title": ""
},
{
"docid": "d10ec03d91d58dd678c995ec1877c710",
"text": "Major depressive disorders, long considered to be of neurochemical origin, have recently been associated with impairments in signaling pathways that regulate neuroplasticity and cell survival. Agents designed to directly target molecules in these pathways may hold promise as new therapeutics for depression.",
"title": ""
},
{
"docid": "7b6c039783091260cee03704ce9748d8",
"text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.",
"title": ""
},
{
"docid": "d80fc668073878c476bdf3997b108978",
"text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system",
"title": ""
}
] |
scidocsrr
|
db3fa632649ce3300d1397b4b7f5efdc
|
An Analysis on Time- and Session-aware Diversification in Recommender Systems
|
[
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "8c07982729ca439c8e346cbe018a7198",
"text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.",
"title": ""
},
{
"docid": "841a5ecba126006e1deb962473662788",
"text": "In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. To gain additional insights of its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.",
"title": ""
},
{
"docid": "539a25209bf65c8b26cebccf3e083cd0",
"text": "We study the problem of web search result diversification in the case where intent based relevance scores are available. A diversified search result will hopefully satisfy the information need of user-L.s who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, to measure relevance and diversity altogether. We argue that this is a better metric than some previously proposed intent aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results based on optimizing an objective function corresponding to this metric and evaluate it on shopping related queries.",
"title": ""
}
] |
[
{
"docid": "f69723ed73c7edd9856883bbb086ed0c",
"text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.",
"title": ""
},
{
"docid": "a6a98545230e6dd5c87948f5b000a076",
"text": "The Traveling Salesman Problem (TSP) is one of the standard test problems used in performance analysis of discrete optimization algorithms. The Ant Colony Optimization (ACO) algorithm appears among heuristic algorithms used for solving discrete optimization problems. In this study, a new hybrid method is proposed to optimize parameters that affect performance of the ACO algorithm using Particle Swarm Optimization (PSO). In addition, 3-Opt heuristic method is added to proposed method in order to improve local solutions. The PSO algorithm is used for detecting optimum values of parameters ̨ and ˇ which are used for city selection operations in the ACO algorithm and determines significance of inter-city pheromone and distances. The 3-Opt algorithm is used for the purpose of improving city selection operations, which could not be improved due to falling in local minimums by the ACO algorithm. The performance of proposed hybrid method is investigated on ten different benchmark problems taken from literature and it is compared to the performance of some well-known algorithms. Experimental results show that the performance of proposed method by using fewer ants than the number of cities for the TSPs is better than the performance of compared methods in most cases in terms of solution quality and robustness. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "75fb9b4adf41c0a93f72084cc3a7444a",
"text": "OBJECTIVE\nIn this study, we tested an expanded model of Kanter's structural empowerment, which specified the relationships among structural and psychological empowerment, job strain, and work satisfaction.\n\n\nBACKGROUND\nStrategies proposed in Kanter's empowerment theory have the potential to reduce job strain and improve employee work satisfaction and performance in current restructured healthcare settings. The addition to the model of psychological empowerment as an outcome of structural empowerment provides an understanding of the intervening mechanisms between structural work conditions and important organizational outcomes.\n\n\nMETHODS\nA predictive, nonexperimental design was used to test the model in a random sample of 404 Canadian staff nurses. The Conditions of Work Effectiveness Questionnaire, the Psychological Empowerment Questionnaire, the Job Content Questionnaire, and the Global Satisfaction Scale were used to measure the major study variables.\n\n\nRESULTS\nStructural equation modelling analyses revealed a good fit of the hypothesized model to the data based on various fit indices (chi 2 = 1140, df = 545, chi 2/df ratio = 2.09, CFI = 0.986, RMSEA = 0.050). The amount of variance accounted for in the model was 58%. Staff nurses felt that structural empowerment in their workplace resulted in higher levels of psychological empowerment. These heightened feelings of psychological empowerment in turn strongly influenced job strain and work satisfaction. However, job strain did not have a direct effect on work satisfaction.\n\n\nCONCLUSIONS\nThese results provide initial support for an expanded model of organizational empowerment and offer a broader understanding of the empowerment process.",
"title": ""
},
{
"docid": "f3e5941be4543d5900d56c1a7d93d0ea",
"text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.",
"title": ""
},
{
"docid": "425c96a3ed2d88bbc9324101626c992d",
"text": "Nonlocal image representation or group sparsity has attracted considerable interest in various low-level vision tasks and has led to several state-of-the-art image denoising techniques, such as BM3D, learned simultaneous sparse coding. In the past, convex optimization with sparsity-promoting convex regularization was usually regarded as a standard scheme for estimating sparse signals in noise. However, using convex regularization cannot still obtain the correct sparsity solution under some practical problems including image inverse problems. In this letter, we propose a nonconvex weighted <inline-formula><tex-math notation=\"LaTeX\">$\\ell _p$</tex-math></inline-formula> minimization based group sparse representation framework for image denoising. To make the proposed scheme tractable and robust, the generalized soft-thresholding algorithm is adopted to solve the nonconvex <inline-formula><tex-math notation=\"LaTeX\"> $\\ell _p$</tex-math></inline-formula> minimization problem. In addition, to improve the accuracy of the nonlocal similar patch selection, an adaptive patch search scheme is proposed. Experimental results demonstrate that the proposed approach not only outperforms many state-of-the-art denoising methods such as BM3D and weighted nuclear norm minimization, but also results in a competitive speed.",
"title": ""
},
{
"docid": "4dfb5d8dfb09f510427aa6400b1f330f",
"text": "In this paper, a permanent magnet synchronous motor for ship propulsion is designed. The appropriate number of poles and slots are selected and the cogging torque is minimized in order to reduce noise and vibrations. To perform high efficiency and reliability, the inverter system consists of multiple modules and the stator coil has multi phases and groups. Because of the modular structure, the motor can be operated with some damaged inverters. In order to maintain high efficiency at low speed operation, same phase coils of different group are connected in series and excited by the half number of inverters than at high speed operation. A MW-class motor is designed and the performances with the proposed inverter control method are calculated.",
"title": ""
},
{
"docid": "be447131554900aaba025be449944613",
"text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework’s detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model’s updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts’ analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX’s AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts’ labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method). Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.",
"title": ""
},
{
"docid": "19b8acf4e5c68842a02e3250c346d09b",
"text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5 % and 25 % for the Sand X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤—21 dB for the S-band and ≤—20 dB for the X-band.",
"title": ""
},
{
"docid": "ada7b43edc18b321c57a978d7a3859ae",
"text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.",
"title": ""
},
{
"docid": "ffdee20af63d50f39f9cc5077a14dc87",
"text": "Recent advancement in remote sensing facilitates collection of hyperspectral images (HSIs) in hundreds of bands which provides a potential platform to detect and identify the unique trends in land and atmospheric datasets with high accuracy. But along with the detailed information, HSIs also pose several processing problems such as1) increase in computational complexity due to high dimensionality. So dimension reduction without losing information is one of the major concerns in this area and 2) limited availability of labeled training sets causes the ill posed problem which is needed to be addressed by the classification algorithms. Initially classification techniques of HSIs were based on spectral information only. Gradually researchers started utilizing both spectral and spatial information to increase classification accuracy. Also the classification algorithms have evolved from supervised to semi supervised mode. This paper presents a survey about the techniques available in the field of HSI processing to provide a seminal view of how the field of HSI analysis has evolved over the last few decades and also provides a snapshot of the state of the art techniques used in this area. General Terms Classification algorithms, image processing, supervised, semi supervised techniques.",
"title": ""
},
{
"docid": "38a1ed4d7147a48758c1a03c5c136457",
"text": "The Penrose inequality gives a lower bound for the total mass of a spacetime in terms of the area of suitable surfaces that represent black holes. Its validity is supported by the cosmic censorship conjecture and therefore its proof (or disproof) is an important problem in relation with gravitational collapse. The Penrose inequality is a very challenging problem in mathematical relativity and it has received continuous attention since its formulation by Penrose in the early seventies. Important breakthroughs have been made in the last decade or so, with the complete resolution of the so-called Riemannian Penrose inequality and a very interesting proposal to address the general case by Bray and Khuri. In this paper, the most important results on this field will be discussed and the main ideas behind their proofs will be summarized, with the aim of presenting what is the status of our present knowledge in this topic.",
"title": ""
},
{
"docid": "ebea79abc60a5d55d0397d21f54cc85e",
"text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.",
"title": ""
},
{
"docid": "3d1fa2e999a2cc54b3c1ec98d121e9fb",
"text": "Model-based design is a powerful design technique for cyber-physical systems, but too often literature assumes knowledge of a methodology without reference to an explicit design process, instead focusing on isolated steps such as simulation, software synthesis, or verification. We combine these steps into an explicit and holistic methodology for model-based design of cyber-physical systems from abstraction to architecture, and from concept to realization. We decompose model-based design into ten fundamental steps, describe and evaluate an iterative design methodology, and evaluate this methodology in the development of a cyber-physical system.",
"title": ""
},
{
"docid": "46ea713c4206d57144350a7871433392",
"text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.",
"title": ""
},
{
"docid": "253b2696bb52f43528f02e85d1070e96",
"text": "Prosocial behavior consists of behaviors regarded as beneficial to others, including helping, sharing, comforting, guiding, rescuing, and defending others. Although women and men are similar in engaging in extensive prosocial behavior, they are different in their emphasis on particular classes of these behaviors. The specialty of women is prosocial behaviors that are more communal and relational, and that of men is behaviors that are more agentic and collectively oriented as well as strength intensive. These sex differences, which appear in research in various settings, match widely shared gender role beliefs. The origins of these beliefs lie in the division of labor, which reflects a biosocial interaction between male and female physical attributes and the social structure. The effects of gender roles on behavior are mediated by hormonal processes, social expectations, and individual dispositions.",
"title": ""
},
{
"docid": "abed12088956b9b695a0d5a158dc1f71",
"text": "Neural encoding of pitch in the auditory brainstem is known to be shaped by long-term experience with language or music, implying that early sensory processing is subject to experience-dependent neural plasticity. In language, pitch patterns consist of sequences of continuous, curvilinear contours; in music, pitch patterns consist of relatively discrete, stair-stepped sequences of notes. The primary aim was to determine the influence of domain-specific experience (language vs. music) on the encoding of pitch in the brainstem. Frequency-following responses were recorded from the brainstem in native Chinese, English amateur musicians, and English nonmusicians in response to iterated rippled noise homologues of a musical pitch interval (major third; M3) and a lexical tone (Mandarin tone 2; T2) from the music and language domains, respectively. Pitch-tracking accuracy (whole contour) and pitch strength (50 msec sections) were computed from the brainstem responses using autocorrelation algorithms. Pitch-tracking accuracy was higher in the Chinese and musicians than in the nonmusicians across domains. Pitch strength was more robust across sections in musicians than in nonmusicians regardless of domain. In contrast, the Chinese showed larger pitch strength, relative to nonmusicians, only in those sections of T2 with rapid changes in pitch. Interestingly, musicians exhibited greater pitch strength than the Chinese in one section of M3, corresponding to the onset of the second musical note, and two sections within T2, corresponding to a note along the diatonic musical scale. We infer that experience-dependent plasticity of brainstem responses is shaped by the relative saliency of acoustic dimensions underlying the pitch patterns associated with a particular domain.",
"title": ""
},
{
"docid": "7d0fb12fce0ef052684a8664a3f5c543",
"text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.",
"title": ""
},
{
"docid": "3d0b507f18dca7e2710eab5fdaa9a20b",
"text": "This paper is designed to illustrate and consider the relations between three types of metarepresentational ability used in verbal comprehension: the ability to metarepresent attributed thoughts, the ability to metarepresent attributed utterances, and the ability to metarepresent abstract, non-attributed representations (e.g. sentence types, utterance types, propositions). Aspects of these abilities have been separ at ly considered in the literatures on “theory of mind”, Gricean pragmatics and quotation. The aim of this paper is to show how the results of these separate strands of research might be integrated with an empirically plausible pragmatic theory.",
"title": ""
},
{
"docid": "6f845762227f11525173d6d0869f6499",
"text": "We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement the Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.",
"title": ""
},
{
"docid": "f37d9a57fd9100323c70876cf7a1d7ad",
"text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. The computer simulation results show the effectiveness of the proposed dual-network memory model. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
ea63ffef027e93ed1e8d86b6235ccebf
|
Samba: a smartphone-based robot system for energy-efficient aquatic environment monitoring
|
[
{
"docid": "02df2dde321bb81220abdcff59418c66",
"text": "Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption.",
"title": ""
}
] |
[
{
"docid": "d405fc2bcbdc8f65584b7977b2442d56",
"text": "Financial Industry Studies is published by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System. Articles may be reprinted on the condition that the source is credited and a copy of the publication containing the reprinted article is provided to the Financial Industry Studies Department of the Federal Reserve Bank of Dallas.",
"title": ""
},
{
"docid": "a25250c92960718e2c3a0faf404702f5",
"text": "Research consistently demonstrates that intensive care unit (ICU) patients experience pain, discomfort and anxiety despite analgesic and sedative use. The most painful procedure reported by critically ill patients is being turned. Music diminishes anxiety and discomfort in some populations; however, its effect on critically ill patients remains unknown. This research aimed to identify the effect of music on discomfort experienced by ICU patients during turning using a single blind randomized cross-over design. Seventeen post-operative ICU patients were recruited and treatment order randomized. Discomfort and anxiety were measured 15 min before and immediately after two turning procedures. Findings indicated that listening to music 15 min before and during turning did not significantly reduce discomfort or anxiety. Pain management might effectively be addressing discomfort and anxiety experienced during turning. Given previous studies have identified turning as painful, current results are promising and it might be useful to determine if this is widespread.",
"title": ""
},
{
"docid": "1d9e5ea84617c934083f607561a196e0",
"text": "Coherent optical OFDM (CO-OFDM) has recently been proposed and the proof-of-concept transmission experiments have shown its extreme robustness against chromatic dispersion and polarization mode dispersion. In this paper, we first review the theoretical fundamentals for CO-OFDM and its channel model in a 2x2 MIMO-OFDM representation. We then present various design choices for CO-OFDM systems and perform the nonlinearity analysis for RF-to-optical up-converter. We also show the receiver-based digital signal processing to mitigate self-phase-modulation (SPM) and Gordon-Mollenauer phase noise, which is equivalent to the midspan phase conjugation.",
"title": ""
},
{
"docid": "878292ad8dfbe9118c64a14081da561a",
"text": "Public-key cryptography is indispensable for cyber security. However, as a result of Peter Shor shows, the public-key schemes that are being used today will become insecure once quantum computers reach maturity. This paper gives an overview of the alternative public-key schemes that have the capability to resist quantum computer attacks and compares them.",
"title": ""
},
{
"docid": "4b68d3c94ef785f80eac9c4c6ca28cfe",
"text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.",
"title": ""
},
{
"docid": "e1c298ea1c0a778a91e302202b8e1463",
"text": "Computational topology has recently seen an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and that persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.",
"title": ""
},
{
"docid": "a5bcf7789a71f3ba690da0469923b3b1",
"text": "Traditionally, data cleaning has been performed as a pre-processing task: after all data are selected for a study (or application), they are cleaned and loaded into a database or data warehouse. In this paper, we argue that data cleaning should be an integral part of data exploration. Especially for complex, spatio-temporal data, it is only by exploring a dataset that one can discover which constraints should be checked. In addition, in many instances, seemingly erroneous data may actually reflect interesting features. Distinguishing a feature from a data quality issue requires detailed analyses which often includes bringing in new datasets. We present a series of case studies using the NYC taxi data that illustrate data cleaning challenges that arise for spatial-temporal urban data and suggest methodologies to address these challenges.",
"title": ""
},
{
"docid": "4e7122172cb7c37416381c251b510948",
"text": "Anatomic and physiologic data are used to analyze the energy expenditure on different components of excitatory signaling in the grey matter of rodent brain. Action potentials and postsynaptic effects of glutamate are predicted to consume much of the energy (47% and 34%, respectively), with the resting potential consuming a smaller amount (13%), and glutamate recycling using only 3%. Energy usage depends strongly on action potential rate--an increase in activity of 1 action potential/cortical neuron/s will raise oxygen consumption by 145 mL/100 g grey matter/h. The energy expended on signaling is a large fraction of the total energy used by the brain; this favors the use of energy efficient neural codes and wiring patterns. Our estimates of energy usage predict the use of distributed codes, with <or=15% of neurons simultaneously active, to reduce energy consumption and allow greater computing power from a fixed number of neurons. Functional magnetic resonance imaging signals are likely to be dominated by changes in energy usage associated with synaptic currents and action potential propagation.",
"title": ""
},
{
"docid": "bf50151700f0e286ee5aa3a2bd74c249",
"text": "Computer systems that augment the process of finding the right expert for a given problem in an organization or world-wide are becoming feasible more than ever before, thanks to the prevalence of corporate Intranets and the Internet. This paper investigates such systems in two parts. We first explore the expert finding problem in depth, review and analyze existing systems in this domain, and suggest a domain model that can serve as a framework for design and development decisions. Based on our analyses of the problem and solution spaces, we then bring to light the gaps that remain to be addressed. Finally, we present our approach called DEMOIR, which is a modular architecture for expert finding systems that is based on a centralized expertise modeling server while also incorporating decentralized components for expertise information gathering and exploitation.",
"title": ""
},
{
"docid": "ef787cfc1b00c9d05ec9293ff802f172",
"text": "High Definition (HD) maps play an important role in modern traffic scenes. However, the development of HD maps coverage grows slowly because of the cost limitation. To efficiently model HD maps, we proposed a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. And we introduced TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we proposed a pipeline to model HD maps with crowdsourced data for the first time. And the maps can be constructed precisely even with inaccurate crowdsourced data.",
"title": ""
},
{
"docid": "8f2c7770fdcd9bfe6a7e9c6e10569fc7",
"text": "The purpose of this paper is to explore the importance of Information Technology (IT) Governance models for public organizations and presenting an IT Governance model that can be adopted by both practitioners and researchers. A review of the literature in IT Governance has been initiated to shape the intended theoretical background of this study. The systematic literature review formalizes a richer context for the IT Governance concept. An empirical survey, using a questionnaire based on COBIT 4.1 maturity model used to investigate IT Governance practice in multiple case studies from Kingdom of Bahrain. This method enabled the researcher to gain insights to evaluate IT Governance practices. The results of this research will enable public sector organizations to adopt an IT Governance model in a simple and dynamic manner. The model provides a basic structure of a concept; for instance, this allows organizations to gain a better perspective on IT Governance processes and provides a clear focus for decision-making attention. IT Governance model also forms as a basis for further research in IT Governance adoption models and bridges the gap between conceptual frameworks, real life and functioning governance.",
"title": ""
},
{
"docid": "c3d1470f049b9531c3af637408f5f9cb",
"text": "Information and communication technology (ICT) is integral in today’s healthcare as a critical piece of support to both track and improve patient and organizational outcomes. Facilitating nurses’ informatics competency development through continuing education is paramount to enhance their readiness to practice safely and accurately in technologically enabled work environments. In this article, we briefly describe progress in nursing informatics (NI) and share a project exemplar that describes our experience in the design, implementation, and evaluation of a NI educational event, a one-day boot camp format that was used to provide foundational knowledge in NI targeted primarily at frontline nurses in Alberta, Canada. We also discuss the project outcomes, including lessons learned and future implications. Overall, the boot camp was successful to raise nurses’ awareness about the importance of informatics in nursing practice.",
"title": ""
},
{
"docid": "068be5b13515937ed76592bf8a9782ce",
"text": "We outline the core components of a modulation recognition system that uses hierarchical deep neural networks to identify data type, modulation class and modulation order. Our system utilizes a flexible front-end detector that performs energy detection, channelization and multi-band reconstruction on wideband data to provide raw narrowband signal snapshots. We automatically extract features from these snapshots using convolutional neural network layers, which produce decision class estimates. Initial experimentation on a small synthetic radio frequency dataset indicates the viability of deep neural networks applied to the communications domain. We plan to demonstrate this system at the Battle of the Mod Recs Workshop at IEEE DySpan 2017.",
"title": ""
},
{
"docid": "e96cf46cc99b3eff60d32f3feb8afc47",
"text": "We present an field programmable gate arrays (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "1f69b9ae8d8c140079e3e5e39cdbb4c7",
"text": "Text Summarization produces a shorter version of large text documents by selecting most relevant information. Text summarization systems are of two types: extractive and abstractive. This paper focuses on extractive text summarization. In extractive text summarization, important sentences are selected based on certain important features. The importance of some extractive features is more than the some other features, so they should have the balance weight in computations. The purpose of this paper is to use fuzzy logic and wordnet synonyms to handle the issue of ambiguity and imprecise values with the traditional two value or multi-value logic and to consider the semantics of the text. Three different methods: fuzzy logic based method, bushy path method, and wordnet synonyms method are used to generate 3 summaries. Final summary is generated by selecting common sentences from all the 3 summaries and from rest of the sentences in union of all summaries, selection is done based on sentence location. The proposed methodology is compared with three individual methods i.e. fuzzy logic based summarizer, bushy path summarizer, and wordnet synonyms summarizer by evaluating the performance of each on 95 documents from standard DUC 2002 dataset using ROUGE evaluation metrics. The analysis shows that the proposed method gives better average precision, recall, and f-measure.",
"title": ""
},
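The record above combines three extractive summaries by keeping sentences common to all of them and filling the remaining slots by sentence location. A minimal illustration of that combination step, assuming each summarizer already returns a list of (position, sentence) pairs (all names here are hypothetical, not the paper's code):

```python
# fuzzy, bushy, wordnet: lists of (sentence_position, sentence_text) pairs
# produced by the three individual summarizers (hypothetical inputs).

def combine_summaries(fuzzy, bushy, wordnet, max_sentences=10):
    positions = lambda summary: {pos for pos, _ in summary}
    union = {pos: sent for pos, sent in fuzzy + bushy + wordnet}

    # Sentences selected by all three methods are kept first.
    chosen = sorted(positions(fuzzy) & positions(bushy) & positions(wordnet))

    # Remaining slots are filled by sentence location (earlier first).
    for pos in sorted(union):
        if len(chosen) >= max_sentences:
            break
        if pos not in chosen:
            chosen.append(pos)

    return [union[pos] for pos in sorted(chosen)[:max_sentences]]
```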
{
"docid": "d799257d4a78401bf25e492250b64da8",
"text": "We examined anticipatory mechanisms of reward-motivated memory formation using event-related FMRI. In a monetary incentive encoding task, cues signaled high- or low-value reward for memorizing an upcoming scene. When tested 24 hr postscan, subjects were significantly more likely to remember scenes that followed cues for high-value rather than low-value reward. A monetary incentive delay task independently localized regions responsive to reward anticipation. In the encoding task, high-reward cues preceding remembered but not forgotten scenes activated the ventral tegmental area, nucleus accumbens, and hippocampus. Across subjects, greater activation in these regions predicted superior memory performance. Within subject, increased correlation between the hippocampus and ventral tegmental area was associated with enhanced long-term memory for the subsequent scene. These findings demonstrate that brain activation preceding stimulus encoding can predict declarative memory formation. The findings are consistent with the hypothesis that reward motivation promotes memory formation via dopamine release in the hippocampus prior to learning.",
"title": ""
},
{
"docid": "8d26fc4b31ca7bd2c461483852e70626",
"text": "The pili from pathogenic Escherichia coli isolates 566, 1794 and TK3 of chicken and turkey origin were purified. After mechanic detachment from the bacterial cells, the pili were concentrated by precipitation with ammonium sulfate, dialyzed, and solubilized in buffer containing deoxycholate. The fraction containing the pilus was purified further by ultracentrifugation in a sucrose gradient. After ultracentrifugation, the pili at the density of 1.10 to 1.15 g.cm-3 (between 10%-20% of sucrose gradients) were collected, and the purified pili from strain 566, 1794 and TK3 had an apparent molecular weight of 17,500, 17,000 and 17,000 respectively, which retained their ability to bind the erythrocyte in a mannose-inhibitable fashion. Hyperimmunesera raised in BALB/C mice against the purified pili from strain 1794 reacted positively with type 1 pili from both isolates 566 and TK3 by immuno blot. These results revealed that the three strains either Chinese or north american isolates expressed type 1 pili which had molecular weights from 17,000 to 17,500, and they have common antigenic epitopes.",
"title": ""
},
{
"docid": "03a036bea8fac6b1dfa7d9a4783eef66",
"text": "Face recognition from the real data, capture images, sensor images and database images is challenging problem due to the wide variation of face appearances, illumination effect and the complexity of the image background. Face recognition is one of the most effective and relevant applications of image processing and biometric systems. In this paper we are discussing the face recognition methods, algorithms proposed by many researchers using artificial neural networks (ANN) which have been used in the field of image processing and pattern recognition. How ANN will used for the face recognition system and how it is effective than another methods will also discuss in this paper. There are many ANN proposed methods which give overview face recognition using ANN. Therefore, this research includes a general review of face detection studies and systems which based on different ANN approaches and algorithms. The strengths and limitations of these literature studies and systems were included, and also the performance analysis of different ANN approach and algorithm is analysing in this research study.",
"title": ""
},
{
"docid": "1de42678c009d31c782c1cf821c90bc7",
"text": "Median filtering detection has recently drawn much attention in image editing and image anti-forensic techniques. Current image median filtering forensics algorithms mainly extract features manually. To deal with the challenge of detecting median filtering from small-size and compressed image blocks, by taking into account of the properties of median filtering, we propose a median filtering detection method based on convolutional neural networks (CNNs), which can automatically learn and obtain features directly from the image. To our best knowledge, this is the first work of applying CNNs in median filtering image forensics. Unlike conventional CNN models, the first layer of our CNN framework is a filter layer that accepts an image as the input and outputs its median filtering residual (MFR). Then, via alternating convolutional layers and pooling layers to learn hierarchical representations, we obtain multiple features for further classification. We test the proposed method on several experiments. The results show that the proposed method achieves significant performance improvements, especially in the cut-and-paste forgery detection.",
"title": ""
},
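A minimal sketch of the median filtering residual (MFR) computation that the record above uses as its first layer: subtract the image from its median-filtered version before any learned convolutions. The NumPy/SciPy choice and the kernel size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filtering_residual(image, kernel_size=3):
    """image: 2-D grayscale array; returns the MFR of the same shape."""
    img = image.astype(np.float32)
    return median_filter(img, size=kernel_size) - img

# Example: feed the MFR (instead of the raw block) into a small CNN.
block = np.random.rand(64, 64).astype(np.float32)   # hypothetical input block
mfr = median_filtering_residual(block, kernel_size=3)
```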
{
"docid": "61bf6627b829c77a1223c1e219c4d268",
"text": "Haptic exploration of unknown objects is of great importance for acquiring multi-modal object representations, which enable a humanoid robot to autonomously execute grasping and manipulation tasks. In this paper we present a tactile exploration strategy to guide an anthropomorphic five-finger hand along the surface of previously unknown objects and build a 3D object representation based on acquired tactile point clouds. The proposed strategy makes use of the dynamic potential field approach suggested in the context of mobile robot navigation. To demonstrate the capabilities of this strategy, we conduct experiments in a detailed physics simulation using a model of the five-finger hand. Exploration results of several test objects are given.",
"title": ""
}
] |
scidocsrr
|
33768661503266e2e9a0028aa2bb4ff9
|
Processing and Normalizing Hashtags
|
[
{
"docid": "6c647c3260c0a31cac1a3cd412919aad",
"text": "Twitter is a micro-blogging site that allows users and companies to post brief pieces of information called Tweets . Some of the tweets contain keywords such as Hashtags denoted with a # , essentially one word summaries of either the topic or emotion of the tweet. The goal of this paper is to examine an approach to perform hashtag discovery on Twitter posts that do not contain user labeled hashtags. The process described in this paper is geared to be as automatic as possible, taking advantage of web information, sentiment analysis, geographic location, basic filtering and classification processes, to generate hashtags for tweets. Hashtags provide users and search queries a fast and simple basis to filter and find information that they are interested in.",
"title": ""
}
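In the spirit of the record above (and of the query on processing and normalizing hashtags), a small self-contained sketch of extracting and normalizing user-provided hashtags from tweet text; the normalization rules shown are illustrative assumptions, not the paper's pipeline.

```python
import re

HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(tweet):
    """Return the hashtags found in a tweet, lower-cased for matching."""
    return [tag.lower() for tag in HASHTAG_RE.findall(tweet)]

print(extract_hashtags("Great game tonight! #GoTeam #Win2024"))
# -> ['goteam', 'win2024']
```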
] |
[
{
"docid": "2960e702b0c764de558a2f723c13196a",
"text": "The main information of a webpage is usually mixed between menus, advertisements, panels, and other not necessarily related information; and it is often difficult to automatically isolate this information. This is precisely the objective of content extraction, a research area of widely interest due to its many applications. Content extraction is useful not only for the final human user, but it is also frequently used as a preprocessing stage of different systems that need to extract the main content in a web document to avoid the treatment and processing of other useless information. Other interesting application where content extraction is particularly used is displaying webpages in small screens such as mobile phones or PDAs. In this work we present a new technique for content extraction that uses the DOM tree of the webpage to analyze the hierarchical relations of the elements in the webpage. Thanks to this information, the technique achieves a considerable recall and precision. Using the DOM structure for content extraction gives us the benefits of other approaches based on the syntax of the webpage (such as characters, words and tags), but it also gives us a very precise information regarding the related components in a block, thus, producing very cohesive blocks.",
"title": ""
},
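A rough sketch of the DOM-based idea in the record above: score container nodes by how much text they hold relative to their number of descendant tags and return the densest block. The scoring heuristic and the use of BeautifulSoup are assumptions for illustration, not the paper's actual algorithm.

```python
from bs4 import BeautifulSoup

def main_content_block(html):
    """Return the DOM node that looks most like the main content block."""
    soup = BeautifulSoup(html, "html.parser")
    best, best_score = None, 0.0
    for node in soup.find_all(["div", "article", "section", "td"]):
        text_len = len(node.get_text(strip=True))
        tag_count = len(node.find_all()) + 1
        score = text_len / tag_count          # text-dense blocks win
        if score > best_score:
            best, best_score = node, score
    return best
```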
{
"docid": "2a4e5635e2c15ce8ed84e6e296c4bbf4",
"text": "The games with a purpose paradigm proposed by Luis von Ahn [9] is a new approach for game design where useful but boring tasks, like labeling a random image found in the web, are packed within a game to make them entertaining. But there are not only large numbers of internet users that can be used as voluntary data producers but legions of mobile device owners, too. In this paper we describe the design of a location-based mobile game with a purpose: CityExplorer. The purpose of this game is to produce geospatial data that is useful for non-gaming applications like a location-based service. From the analysis of four use case studies of CityExplorer we report that such a purposeful game is entertaining and can produce rich geospatial data collections.",
"title": ""
},
{
"docid": "74f674ddfd04959303bb89bd6ef22b66",
"text": "Ethernet is the survivor of the LAN wars. It is hard to find an IP packet that has not passed over an Ethernet segment. One important reason for this is Ethernet's simplicity and ease of configuration. However, Ethernet has always been known to be an insecure technology. Recent successful malware attacks and the move towards cloud computing in data centers demand that attention be paid to the security aspects of Ethernet. In this paper, we present known Ethernet related threats and discuss existing solutions from business, hacker, and academic communities. Major issues, like insecurities related to Address Resolution Protocol and to self-configurability, are discussed. The solutions fall roughly into three categories: accepting Ethernet's insecurity and circling it with firewalls; creating a logical separation between the switches and end hosts; and centralized cryptography based schemes. However, none of the above provides the perfect combination of simplicity and security befitting Ethernet.",
"title": ""
},
{
"docid": "9490f117f153a16152237a5a6b08c0a3",
"text": "Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy.",
"title": ""
},
{
"docid": "bba15d88edc2574dcb3b12a78c3b2d57",
"text": "Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higherorder probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improve as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) Robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each.",
"title": ""
},
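A rough Python sketch of the "statistical emulator" behavior the record above describes: every invocation of the wrapped process is recorded, and a Gaussian process is refit on the accumulated input/output pairs. Using scikit-learn here is an assumption for illustration; the paper embeds GPs in a probabilistic programming language instead.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class GPMemo:
    """Wrap a real-valued function and emulate it from its recorded calls."""

    def __init__(self, func):
        self.func = func
        self.xs, self.ys = [], []
        self.gp = GaussianProcessRegressor()

    def __call__(self, x):
        y = self.func(x)                  # run the original process
        self.xs.append([x])               # record its input/output behavior
        self.ys.append(y)
        self.gp.fit(np.array(self.xs), np.array(self.ys))
        return y

    def emulate(self, x):
        """Predict mean and std for an unseen input (needs >= 1 recorded call)."""
        mean, std = self.gp.predict(np.array([[x]]), return_std=True)
        return mean[0], std[0]
```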
{
"docid": "c20733b414a1b39122ef54d161885d81",
"text": "This paper discusses the role of clusters and focal firms in the economic performance of small firms in Italy. Using the example of the packaging industry of northern Italy, it shows how clusters of small firms have emerged around a few focal or leading companies. These companies have helped the clusters grow and diversify through technological and managerial spillover effects, through the provision of purchase orders, and sometimes through financial links. The role of common local training institutes, whose graduates often start up small firms within the local cluster, is also discussed.",
"title": ""
},
{
"docid": "972ef2897c352ad384333dd88588f0e6",
"text": "We describe a vision-based obstacle avoidance system for of f-road mobile robots. The system is trained from end to end to map raw in put images to steering angles. It is trained in supervised mode t predict the steering angles provided by a human driver during training r uns collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two f orwardpointing wireless color cameras. A remote computer process es the video and controls the robot via radio. The learning system is a lar ge 6-layer convolutional network whose input is a single left/right pa ir of unprocessed low-resolution images. The robot exhibits an excell ent ability to detect obstacles and navigate around them in real time at spe ed of 2 m/s.",
"title": ""
},
{
"docid": "288f32db8af5789e6e6049fa4cec0334",
"text": "Trusted execution environments, and particularly the Software Guard eXtensions (SGX) included in recent Intel x86 processors, gained significant traction in recent years. A long track of research papers, and increasingly also realworld industry applications, take advantage of the strong hardware-enforced confidentiality and integrity guarantees provided by Intel SGX. Ultimately, enclaved execution holds the compelling potential of securely offloading sensitive computations to untrusted remote platforms. We present Foreshadow, a practical software-only microarchitectural attack that decisively dismantles the security objectives of current SGX implementations. Crucially, unlike previous SGX attacks, we do not make any assumptions on the victim enclave’s code and do not necessarily require kernel-level access. At its core, Foreshadow abuses a speculative execution bug in modern Intel processors, on top of which we develop a novel exploitation methodology to reliably leak plaintext enclave secrets from the CPU cache. We demonstrate our attacks by extracting full cryptographic keys from Intel’s vetted architectural enclaves, and validate their correctness by launching rogue production enclaves and forging arbitrary local and remote attestation responses. The extracted remote attestation keys affect millions of devices.",
"title": ""
},
{
"docid": "eb22a8448b82f6915850fe4d60440b3b",
"text": "In story-based games or other interactive systems, a drama manager (DM) is an omniscient agent that acts to bring about a particular sequence of plot points for the player to experience. Traditionally, the DM's narrative evaluation criteria are solely derived from a human designer. We present a DM that learns a model of the player's storytelling preferences and automatically recommends a narrative experience that is predicted to optimize the player's experience while conforming to the human designer's storytelling intentions. Our DM is also capable of manipulating the space of narrative trajectories such that the player is more likely to make choices that result in the recommended experience. Our DM uses a novel algorithm, called prefix-based collaborative filtering (PBCF), that solves the sequential recommendation problem to find a sequence of plot points that maximizes the player's rating of his or her experience. We evaluate our DM in an interactive storytelling environment based on choose-your-own-adventure novels. Our experiments show that our algorithms can improve the player's experience over the designer's storytelling intentions alone and can deliver more personalized experiences than other interactive narrative systems while preserving players' agency.",
"title": ""
},
{
"docid": "6172f0048a770cadc0220c3cf1ff5e2b",
"text": "The interpretation of the resource-conflict link that has become most publicized—the rebel greed hypothesis—depends on just one of many plausible mechanisms that could underlie a relationship between resource dependence and violence. The author catalogues a large range of rival possible mechanisms, highlights a set of techniques that may be used to identify these mechanisms, and begins to employ these techniques to distinguish between rival accounts of the resource-conflict linkages. The author uses finer natural resource data than has been used in the past, gathering and presenting new data on oil and diamonds production and on oil stocks. The author finds evidence that (1) conflict onset is more responsive to the impacts of past natural resource production than to the potential for future production, supporting a weak states mechanism rather than a rebel greed mechanism; (2) the impact of natural resources on conflict cannot easily be attributed entirely to the weak states mechanism, and in particular, the impact of natural resources is independent of state strength; (3) the link between primary commodities and conflict is driven in part by agricultural dependence rather than by natural resources more narrowly defined, a finding consistent with a “sparse networks” mechanism; (4) natural resources are associated with shorter wars, and natural resource wars are more likely to end with military victory for one side than other wars. This is consistent with evidence that external actors have incentives to work to bring wars to a close when natural resource supplies are threatened. The author finds no evidence that resources are associated with particular difficulties in negotiating ends to conflicts, contrary to arguments that loot-seeking rebels aim to prolong wars.",
"title": ""
},
{
"docid": "df487337795d03d8538024aedacbbbe9",
"text": "This study aims to make an inquiry regarding the advantages and challenges of integrating augmented reality (AR) into the library orientation programs of academic/research libraries. With the vast number of emerging technologies that are currently being introduced to the library world, it is essential for academic librarians to fully utilize these technologies to their advantage. However, it is also of equal importance for them to first make careful analysis and research before deciding whether to adopt a certain technology or not. AR offers a strategic medium through which librarians can attach digital information to real-world objects and simply let patrons interact with them. It is a channel that librarians can utilize in order to disseminate information and guide patrons in their studies or researches. And while it is expected for AR to grow tremendously in the next few years, it becomes more inevitable for academic librarians to acquire related IT skills in order to further improve the services they offer in their respective colleges and universities. The study shall employ the pragmatic approach to research, conducting an extensive review of available literature on AR as used in academic libraries, designing a prototype to illustrate how AR can be integrated to an existing library orientation program, and performing surveys and interviews on patrons and librarians who used it. This study can serve as a guide in order for academic librarians to assess whether implementing AR in their respective libraries will be beneficial to them or not.",
"title": ""
},
{
"docid": "5dad207fe80469fe2b80d1f1e967575e",
"text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.",
"title": ""
},
{
"docid": "9433fc835573173c38598517a0fac87c",
"text": "Recommendation and review sites offer a wealth of information beyond ratings. For instance, on IMDb users leave reviews, commenting on different aspects of a movie (e.g. actors, plot, visual effects), and expressing their sentiments (positive or negative) on these aspects in their reviews. This suggests that uncovering aspects and sentiments will allow us to gain a better understanding of users, movies, and the process involved in generating ratings.\n The ability to answer questions such as \"Does this user care more about the plot or about the special effects?\" or \"What is the quality of the movie in terms of acting?\" helps us to understand why certain ratings are generated. This can be used to provide more meaningful recommendations.\n In this work we propose a probabilistic model based on collaborative filtering and topic modeling. It allows us to capture the interest distribution of users and the content distribution for movies; it provides a link between interest and relevance on a per-aspect basis and it allows us to differentiate between positive and negative sentiments on a per-aspect basis. Unlike prior work our approach is entirely unsupervised and does not require knowledge of the aspect specific ratings or genres for inference.\n We evaluate our model on a live copy crawled from IMDb. Our model offers superior performance by joint modeling. Moreover, we are able to address the cold start problem -- by utilizing the information inherent in reviews our model demonstrates improvement for new users and movies.",
"title": ""
},
{
"docid": "a1ccca52f1563a2e208afcaa37e209d1",
"text": "BACKGROUND\nImplicit biases involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender. This review examines the evidence that healthcare professionals display implicit biases towards patients.\n\n\nMETHODS\nPubMed, PsychINFO, PsychARTICLE and CINAHL were searched for peer-reviewed articles published between 1st March 2003 and 31st March 2013. Two reviewers assessed the eligibility of the identified papers based on precise content and quality criteria. The references of eligible papers were examined to identify further eligible studies.\n\n\nRESULTS\nForty two articles were identified as eligible. Seventeen used an implicit measure (Implicit Association Test in fifteen and subliminal priming in two), to test the biases of healthcare professionals. Twenty five articles employed a between-subjects design, using vignettes to examine the influence of patient characteristics on healthcare professionals' attitudes, diagnoses, and treatment decisions. The second method was included although it does not isolate implicit attitudes because it is recognised by psychologists who specialise in implicit cognition as a way of detecting the possible presence of implicit bias. Twenty seven studies examined racial/ethnic biases; ten other biases were investigated, including gender, age and weight. Thirty five articles found evidence of implicit bias in healthcare professionals; all the studies that investigated correlations found a significant positive relationship between level of implicit bias and lower quality of care.\n\n\nDISCUSSION\nThe evidence indicates that healthcare professionals exhibit the same levels of implicit bias as the wider population. The interactions between multiple patient characteristics and between healthcare professional and patient characteristics reveal the complexity of the phenomenon of implicit bias and its influence on clinician-patient interaction. The most convincing studies from our review are those that combine the IAT and a method measuring the quality of treatment in the actual world. Correlational evidence indicates that biases are likely to influence diagnosis and treatment decisions and levels of care in some circumstances and need to be further investigated. Our review also indicates that there may sometimes be a gap between the norm of impartiality and the extent to which it is embraced by healthcare professionals for some of the tested characteristics.\n\n\nCONCLUSIONS\nOur findings highlight the need for the healthcare profession to address the role of implicit biases in disparities in healthcare. More research in actual care settings and a greater homogeneity in methods employed to test implicit biases in healthcare is needed.",
"title": ""
},
{
"docid": "2bc6775efec2b59ad35b9f4841c7f3cf",
"text": "Cryptographic schemes for computing on encrypted data promise to be a fundamental building block of cryptography. The way one models such algorithms has a crucial effect on the efficiency and usefulness of the resulting cryptographic schemes. As of today, almost all known schemes for fully homomorphic encryption, functional encryption, and garbling schemes work by modeling algorithms as circuits rather than as Turing machines. As a consequence of this modeling, evaluating an algorithm over encrypted data is as slow as the worst-case running time of that algorithm, a dire fact for many tasks. In addition, in settings where an evaluator needs a description of the algorithm itself in some “encoded” form, the cost of computing and communicating such encoding is as large as the worst-case running time of this algorithm. In this work, we construct cryptographic schemes for computing Turing machines on encrypted data that avoid the worst-case problem. Specifically, we show: – An attribute-based encryption scheme for any polynomial-time Turing machine and Random Access Machine (RAM). – A (single-key and succinct) functional encryption scheme for any polynomialtime Turing machine. – A reusable garbling scheme for any polynomial-time Turing machine. These three schemes have the property that the size of a key or of a garbling for a Turing machine is very short: it depends only on the description of the Turing machine and not on its running time. Previously, the only existing constructions of such schemes were for depth-d circuits, where all the parameters grow with d. Our constructions remove this depth d restriction, have short keys, and moreover, avoid the worst-case running time. – A variant of fully homomorphic encryption scheme for Turing machines, where one can evaluate a Turing machine M on an encrypted input x in time that is dependent on the running time of M on input x as opposed to the worst-case runtime of M . Previously, such a result was known only for a restricted class of Turing machines and it required an expensive preprocessing phase (with worst-case runtime); our constructions remove both restrictions. Our results are obtained via a reduction from SNARKs (Bitanski et al) and an “extractable” variant of witness encryption, a scheme introduced by Garg et al.. We prove that the new assumption is secure in the generic group model. We also point out the connection between (the variant of) witness encryption and the obfuscation of point filter functions as defined by Goldwasser and Kalai in 2005.",
"title": ""
},
{
"docid": "3ec63f1c1f74c5d11eaa9d360ceaac55",
"text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.",
"title": ""
},
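A small sketch of the kind of information-theoretic ordering the record above describes: rank unlabeled shapes by the mean entropy of their per-face label distributions so the most uncertain predictions are reviewed first. The names and the exact measure are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def prediction_entropy(face_probs):
    """face_probs: (num_faces, num_labels) softmax outputs for one shape."""
    p = np.clip(face_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

def order_by_uncertainty(shapes_probs):
    """Return shape indices, most uncertain (highest entropy) first."""
    scores = [prediction_entropy(p) for p in shapes_probs]
    return list(np.argsort(scores)[::-1])
```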
{
"docid": "9ce08ed9e7e34ef1f5f12bfbe54e50ea",
"text": "GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to \"fill\" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime.\n In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.",
"title": ""
},
{
"docid": "3d8f937692b9c0e2bb2c5b0148e1ef2c",
"text": "BACKGROUND\nAttenuated peripheral perfusion in patients with advanced chronic heart failure (CHF) is partially the result of endothelial dysfunction. This has been causally linked to an impaired endogenous regenerative capacity of circulating progenitor cells (CPC). The aim of this study was to elucidate whether exercise training (ET) affects exercise intolerance and left ventricular (LV) performance in patients with advanced CHF (New York Heart Association class IIIb) and whether this is associated with correction of peripheral vasomotion and induction of endogenous regeneration.\n\n\nMETHODS AND RESULTS\nThirty-seven patients with CHF (LV ejection fraction 24+/-2%) were randomly assigned to 12 weeks of ET or sedentary lifestyle (control). At the beginning of the study and after 12 weeks, maximal oxygen consumption (Vo(2)max) and LV ejection fraction were determined; the number of CD34(+)/KDR(+) CPCs was quantified by flow cytometry and CPC functional capacity was determined by migration assay. Flow-mediated dilation was assessed by ultrasound. Capillary density was measured in skeletal muscle tissue samples. In advanced CHF, ET improved Vo(2)max by +2.7+/-2.2 versus -0.8+/-3.1 mL/min/kg in control (P=0.009) and LV ejection fraction by +9.4+/-6.1 versus -0.8+/-5.2% in control (P<0.001). Flow-mediated dilation improved by +7.43+/-2.28 versus +0.09+/-2.18% in control (P<0.001). ET increased the number of CPC by +83+/-60 versus -6+/-109 cells/mL in control (P=0.014) and their migratory capacity by +224+/-263 versus -12+/-159 CPC/1000 plated CPC in control (P=0.03). Skeletal muscle capillary density increased by +0.22+/-0.10 versus -0.02+/-0.16 capillaries per fiber in control (P<0.001).\n\n\nCONCLUSIONS\nTwelve weeks of ET in patients with advanced CHF is associated with augmented regenerative capacity of CPCs, enhanced flow-mediated dilation suggestive of improvement in endothelial function, skeletal muscle neovascularization, and improved LV function. Clinical Trial Registration- http://www.clinicaltrials.gov. Unique Identifier: NCT00176384.",
"title": ""
},
{
"docid": "1efcace33a3a6ad7805f765edfafb6f4",
"text": "Recently, new configurations of robot legs using a parallel mechanism have been studied for improving the locomotion ability in four-legged robots. However, it is difficult to obtain full dynamics of the parallel-mechanism robot legs because this mechanism has many links and complex constraint conditions, which make it difficult to design a modelbased controller. Here, we propose the simplified modeling of a parallel-mechanism robot leg with two degrees-of-freedom (2DOF), which can be used instead of complex full dynamics for model-based control. The new modeling approach considers the robot leg as a 2DOF Revolute and Prismatic(RP) manipulator, inspired by the actuation mechanism of robot legs, for easily designing a nominal model of the controller. To verify the effectiveness of the new modeling approach experimentally, we conducted dynamic simulations using a commercial multi-dynamics simulator. The simulation results confirmed that the proposed modeling approach could be an alternative modeling method for parallel-mechanism robot legs.",
"title": ""
},
{
"docid": "b5e539774c408232797da1f35abcca90",
"text": "The discrete Laplace-Beltrami operator plays a prominent role in many Digital Geometry Processing applications ranging from denoising to parameterization, editing, and physical simulation. The standard discretization uses the cotangents of the angles in the immersed mesh which leads to a variety of numerical problems. We advocate use of the intrinsic Laplace-Beltrami operator. It satis- fies a local maximum principle, guaranteeing, e.g., that no flipped triangles can occur in parameterizations. It also leads to better conditioned linear systems. The intrinsic Laplace-Beltrami operator is based on an intrinsic Delaunay triangulation of the surface. We give an incremental algorithm to construct such triangulations together with an overlay structure which captures the relationship between the extrinsic and intrinsic triangulations. Using a variety of example meshes we demonstrate the numerical benefits of the intrinsic Laplace-Beltrami operator.",
"title": ""
}
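For reference, a minimal sketch of the standard cotangent weight that both the extrinsic and intrinsic discretizations above build on: for the edge (i, j) shared by triangles (i, j, k) and (j, i, l), w_ij = (cot(angle at k) + cot(angle at l)) / 2. The helper below is an illustrative assumption, not the paper's code.

```python
import numpy as np

def cotangent(a, b, c):
    """Cotangent of the angle at vertex a in triangle (a, b, c); 3-D points."""
    u, v = b - a, c - a
    return float(np.dot(u, v) / np.linalg.norm(np.cross(u, v)))

def cotan_weight(vi, vj, vk, vl):
    """Weight for edge (vi, vj) whose opposite vertices are vk and vl."""
    return 0.5 * (cotangent(vk, vi, vj) + cotangent(vl, vj, vi))
```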
] |
scidocsrr
|
d21f13f980aca50083e9f6cf16cfa8c9
|
Split and Match: Example-Based Adaptive Patch Sampling for Unsupervised Style Transfer
|
[
{
"docid": "a65d1881f5869f35844064d38b684ac8",
"text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.",
"title": ""
}
] |
[
{
"docid": "27029a5e18e5d874606a87f0d238cd14",
"text": "User behavior provides many cues to improve the relevance of search results through personalization. One aspect of user behavior that provides especially strong signals for delivering better relevance is an individual's history of queries and clicked documents. Previous studies have explored how short-term behavior or long-term behavior can be predictive of relevance. Ours is the first study to assess how short-term (session) behavior and long-term (historic) behavior interact, and how each may be used in isolation or in combination to optimally contribute to gains in relevance through search personalization. Our key findings include: historic behavior provides substantial benefits at the start of a search session; short-term session behavior contributes the majority of gains in an extended search session; and the combination of session and historic behavior out-performs using either alone. We also characterize how the relative contribution of each model changes throughout the duration of a session. Our findings have implications for the design of search systems that leverage user behavior to personalize the search experience.",
"title": ""
},
{
"docid": "88cc4e08c8e818f1928c96ad47ef3502",
"text": "This paper addresses the problem of multi-object tracking in complex scenes by a single, static, uncalibrated camera. Tracking-by-detection is a widely used approach for multi-object tracking. Challenges still remain in complex scenes, however, when this approach has to deal with occlusions, unreliable detections (e.g., inaccurate position/size, false positives, or false negatives), and sudden object motion/appearance changes, among other issues. To handle these problems, this paper presents a novel online multi-object tracking method, which can be fully applied to real-time applications. First, an object tracking process based on frame-by-frame association with a novel affinity model and an appearance update that does not rely on online learning is proposed to effectively and rapidly assign detections to tracks. Second, a two-stage drift handling method with novel track confidence is proposed to correct drifting tracks caused by the abrupt motion change of objects under occlusion and prolonged inaccurate detections. In addition, a fragmentation handling method based on a track-to-track association is proposed to solve the problem in which an object trajectory is broken into several tracks due to long-term occlusions. Based on experimental results derived from challenging public data sets, the proposed method delivers an impressive performance compared with other state-of-the-art methods. Furthermore, additional performance analysis demonstrates the effect and usefulness of each component of the proposed method.",
"title": ""
},
{
"docid": "27269c9f6eaca70be49461b57b3c2e2f",
"text": "Analytical prediction of oxidative stress biomarkers in ecosystem provides an expressive result for many stressors. These oxidative stress biomarkers including superoxide dismutase, glutathione peroxidase and catalase activity in fish liver tissue were analyzed within feeding different levels of selenium nanoparticles. Se-nanoparticles represent a salient defense mechanism in oxidative stress within certain limits; however, stress can be engendered from toxic levels of these nanoparticles. For instance, prediction of the level of pollution and/or stressors was elucidated to be improved with different levels of selenium nanoparticles using the bio-inspired Sine-Cosine algorithm (SCA). In this paper, we improved the prediction accuracy of liver enzymes of fish fed by nano-selenite by developing a neural network model based on SCA, that can train and update the weights and the biases of the network until reaching the optimum value. The performance of the proposed model is better and achieved more efficient than other models.",
"title": ""
},
{
"docid": "5e23bcd2f5bc996525056093c8e47e14",
"text": "No matter how mild, dehydration is not a desirable condition because there is an imbalance in the homeostatic function of the internal environment. This can adversely affect cognitive performance, not only in groups more vulnerable to dehydration, such as children and the elderly, but also in young adults. However, few studies have examined the impact of mild or moderate dehydration on cognitive performance. This paper reviews the principal findings from studies published to date examining cognitive skills. Being dehydrated by just 2% impairs performance in tasks that require attention, psychomotor, and immediate memory skills, as well as assessment of the subjective state. In contrast, the performance of long-term and working memory tasks and executive functions is more preserved, especially if the cause of dehydration is moderate physical exercise. The lack of consistency in the evidence published to date is largely due to the different methodology applied, and an attempt should be made to standardize methods for future studies. These differences relate to the assessment of cognitive performance, the method used to cause dehydration, and the characteristics of the participants.",
"title": ""
},
{
"docid": "62f8eb0e7eafe1c0d857dadc72008684",
"text": "In the current Web 2.0 era, the popularity of Web resources fluctuates ephemerally, based on trends and social interest. As a result, content-based relevance signals are insufficient to meet users' constantly evolving information needs in searching for Web 2.0 items. Incorporating future popularity into ranking is one way to counter this. However, predicting popularity as a third party (as in the case of general search engines) is difficult in practice, due to their limited access to item view histories. To enable popularity prediction externally without excessive crawling, we propose an alternative solution by leveraging user comments, which are more accessible than view counts. Due to the sparsity of comments, traditional solutions that are solely based on view histories do not perform well. To deal with this sparsity, we mine comments to recover additional signal, such as social influence. By modeling comments as a time-aware bipartite graph, we propose a regularization-based ranking algorithm that accounts for temporal, social influence and current popularity factors to predict the future popularity of items. Experimental results on three real-world datasets --- crawled from YouTube, Flickr and Last.fm --- show that our method consistently outperforms competitive baselines in several evaluation tasks.",
"title": ""
},
{
"docid": "c1f17055249341dd6496fce9a2703b18",
"text": "With systems performing Simultaneous Localization And Mapping (SLAM) from a single robot reaching considerable maturity, the possibility of employing a team of robots to collaboratively perform a task has been attracting increasing interest. Promising great impact in a plethora of tasks ranging from industrial inspection to digitization of archaeological structures, collaborative scene perception and mapping are key in efficient and effective estimation. In this paper, we propose a novel, centralized architecture for collaborative monocular SLAM employing multiple small Unmanned Aerial Vehicles (UAVs) to act as agents. Each agent is able to independently explore the environment running limited-memory SLAM onboard, while sending all collected information to a central server, a ground station with increased computational resources. The server manages the maps of all agents, triggering loop closure, map fusion, optimization and distribution of information back to the agents. This allows an agent to incorporate observations from others in its SLAM estimates on the fly. We put the proposed framework to the test employing a nominal keyframe-based monocular SLAM algorithm, demonstrating the applicability of this system in multi-UAV scenarios.",
"title": ""
},
{
"docid": "2bc30693be1c5855a9410fb453128054",
"text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"title": ""
},
{
"docid": "f88b686c82ed883b5b271900a809f6c1",
"text": "I believe that four advancements are necessary to achieve that aim. Methods for integrating diverse algorithms seamlessly into big-data architectures need to be found. Software development and archiving should be brought together under one roof. Data reading must become automated among formats. Ultimately, the interpretation of vast streams of scientific data will require a new breed of researcher equally familiar with science and advanced computing.",
"title": ""
},
{
"docid": "d4f3dc5efe166df222b2a617d5fbd5e4",
"text": "IKEA is the largest furniture retailer in the world. Their critical success factor is that IKEA can seamlessly integrate and optimize end-to-end supply chain to maximize customer value, eventually build their dominate position in entire value chain. This article summarizes and analyzes IKEA's successful practices of value chain management. Hopefully it can be a good reference or provide strategic insight for Chinese enterprises.",
"title": ""
},
{
"docid": "5f70d96454e4a6b8d2ce63bc73c0765f",
"text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.",
"title": ""
},
{
"docid": "4a9debbbe5b21adcdb50bfdc0c81873c",
"text": "Stealth Dicing (SD) technology has high potential to replace the conventional blade sawing and laser grooving. The dicing method has been widely researched since 2005 [1-3] especially for thin wafer (⇐ 12 mils). SD cutting has good quality because it has dry process during laser cutting, extremely narrow scribe line and multi-die sawing capability. However, along with complicated package technology, the chip quality demands fine and accurate pitch which conventional blade saw is impossible to achieve. This paper is intended as an investigation in high performance SD sawing, including multi-pattern wafer and DAF dicing tape capability. With the improvement of low-K substrate technology and min chip scale size, SD cutting is more important than other methods used before. Such sawing quality also occurs in wafer level chip scale package. With low-K substrate and small package, the SD cutting method can cut the narrow scribe line easily (15 um), which can lead the WLCSP to achieve more complicated packing method successfully.",
"title": ""
},
{
"docid": "ac843bd6a18025bb2cac3002dfb6f811",
"text": "For more efficient photoelectrochemical water splitting, there is a dilemma that a photoelectrode needs both light absorption and electrocatalytic faradaic reaction. One of the promising strategies is to deposit a pattern of electrocatalysts onto a semiconductor surface, leaving sufficient bare surface for light absorption while minimizing concentration overpotential as well as resistive loss at the ultramicroelectrodes for faradaic reaction. This scheme can be successfully realized by \"maskless\" direct photoelectrochemical patterning of electrocatalyst onto an SiOx/amorphous Si (a-Si) surface by the light-guided electrodeposition technique. Electrochemical impedance spectroscopy at various pHs tells us much about how it works. The surface states at the SiOx/a-Si interface can mediate the photogenerated electrons for hydrogen evolution, whereas electroactive species in the solution undergo outer-sphere electron transfer, taking electrons tunneling across the SiOx layer from the conduction band. In addition to previously reported long-distance lateral electron transport behavior at a patterned catalyst/SiOx/a-Si interface, the charging process of the surface states plays a crucial role in proton reduction, leading to deeper understanding of the operation mechanisms for photoelectrochemical water splitting.",
"title": ""
},
{
"docid": "799a853f207c8c70abe1cb46b1513070",
"text": "Transfer functions for the reference clock jitter in a serial link such as the PCI express 100 MHz reference clock are established for various clock and data recovery circuits (CDRCs). In addition, mathematical interrelationships between phase, period, and cycle-to-cycle jitter are established and phase jitter is used with the jitter transfer function. Numerical simulations are carried out for these transfer functions. Relevant eye-closure/total jitter at a certain bit error rate (BER) level for the receiver is estimated by applying these jitter transfer functions to the measured phase jitter of the reference clock over a range of transfer function parameters. Implications of this new development to serial link reference clock testing and specification formulation are discussed.",
"title": ""
},
{
"docid": "60a977556ad78d2e955f750bc4a98707",
"text": "We propose a novel technique for faster Neural Network (NN) training by systematically approximating all the constituent matrix multiplications and convolutions. This approach is complementary to other approximation techniques, requires no changes to the dimensions of the network layers, hence compatible with existing training frameworks. We first analyze the applicability of the existing methods for approximating matrix multiplication to NN training, and extend the most suitable column-row sampling algorithm to approximating multi-channel convolutions. We apply approximate tensor operations to training MLP, CNN and LSTM network architectures on MNIST, CIFAR-100 and Penn Tree Bank datasets and demonstrate 30%-80% reduction in the amount of computations while maintaining little or no impact on the test accuracy. Our promising results encourage further study of general methods for approximating tensor operations and their application to NN training.",
"title": ""
},
{
"docid": "4426848fbae6fdabdb969768254f2cb1",
"text": "This paper presents a multimodal information presentation method for a basic dance training system. The system targets on beginners and enables them to learn basics of dances easily. One of the most effective ways of learning dances is to watch a video showing the performance of dance masters. However, some information cannot be conveyed well through video. One is the translational motion, especially that in the depth direction. We cannot tell exactly how far does the dancers move forward or backward. Another is the timing information. Although we can tell how to move our arms or legs from video, it is difficult to know when to start moving them. We solve the first issue by introducing an image display on a mobile robot. We can learn the amount of translation just by following the robot. We introduce active devices for the second issue. The active devices are composed of some vibro-motors and are developed to direct action-starting cues with vibration. Experimental results show the effectiveness of our multimodal information presentation method.",
"title": ""
},
{
"docid": "acd458070c613d23618ccb9b4620da56",
"text": "The Intelligent vehicle (IV) is experiencing revolutionary growth in research and industry, but it still suffers from many security vulnerabilities. Traditional security methods are incapable to provide secure IV communication. The major issues in IV communication, are trust, data accuracy and reliability of communication data in the communication channel. Blockchain technology works for the crypto currency, Bit-coin, which is recently used to build trust and reliability in peer-topeer networks having similar topologies as IV Communication. In this paper, we are proposing, Intelligent Vehicle-Trust Point (IV-TP) mechanism for IV communication among IVs using Blockchain technology. The IVs communicated data provides security and reliability using our proposed IV-TP. Our IV-TP mechanism provides trustworthiness for vehicles behavior, and vehicles legal and illegal action. Our proposal presents a reward based system, an exchange of some IV-TP among IVs, during successful communication. For the data management of the IVTP, we are using blockchain technology in the intelligent transportation system (ITS), which stores all IV-TP details of every vehicle and is accessed ubiquitously by IVs. In this paper, we evaluate our proposal with the help of intersection use case scenario for intelligent vehicles communication. Keywords— Blockchain, intelligent vehicles, security, component; ITS",
"title": ""
},
{
"docid": "941df83e65700bc2e5ee7226b96e4f54",
"text": "This paper presents design and analysis of a three phase induction motor drive using IGBT‟s at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3Phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.",
"title": ""
},
{
"docid": "17162eac4f1292e4c2ad7ef83af803f1",
"text": "Recent years have witnessed significant progresses in deep Reinforcement Learning (RL). Empowered with large scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting could happen “robustly”: commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when all of them achieve optimal rewards during training. The observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion on overfitting in RL and a study of the generalization behaviors from the perspective of inductive bias.",
"title": ""
},
{
"docid": "18288c42186b7fec24a5884454e69989",
"text": "This article addresses the problem of multichannel audio source separation. We propose a framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study to show the impact of different design choices on the performance of the proposed technique. We consider different cost functions for the training of DNNs, namely the probabilistically motivated Itakura-Saito divergence, and also Kullback-Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present its application to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and the conventional multichannel nonnegative matrix factorization-based iterative EM algorithm.",
"title": ""
},
{
"docid": "ee06da579046d0ad1a83aa90784d8b0c",
"text": "Compassion is a positive orientation towards suffering that may be enhanced through compassion training and is thought to influence psychological functioning. However, the effects of compassion training on mindfulness, affect, and emotion regulation are not known. We conducted a randomized controlled trial in which 100 adults from the community were randomly assigned to either a 9-week compassion cultivation training (CCT) or a waitlist (WL) control condition. Participants completed self-report inventories that measured mindfulness, positive and negative affect, and emotion regulation. Compared to WL, CCT resulted in increased mindfulness and happiness, as well as decreased worry and emotional suppression. Within CCT, the amount of formal meditation practiced was related to reductions in worry and emotional suppression. These findings suggest that compassion cultivation training effects cognitive and emotion factors that support psychological flexible and adaptive functioning.",
"title": ""
}
] |
scidocsrr
|
78dd1b754c4a524a925218d1e4558aa9
|
Improving the accuracy of top-N recommendation using a preference model
|
[
{
"docid": "61894c629843db4dc849bcf5a77839f6",
"text": "Recommendations from the long tail of the popularity distribution of items are generally considered to be particularly valuable. On the other hand, recommendation accuracy tends to decrease towards the long tail. In this paper, we quantitatively examine this trade-off between item popularity and recommendation accuracy. To this end, we assume that there is a selection bias towards popular items in the available data. This allows us to define a new accuracy measure that can be gradually tuned towards the long tail. We show that, under this assumption, this measure has the desirable property of providing nearly unbiased estimates concerning recommendation accuracy. In turn, this also motivates a refinement for training collaborative-filtering approaches. In various experiments with real-world data, including a user study, empirical evidence suggests that only a small, if any, bias of the recommendations towards less popular items is appreciated by users.",
"title": ""
},
{
"docid": "8dc130466a3ab4f9b932fdc5a0a9e991",
"text": "MyMediaLite is a fast and scalable, multi-purpose library of recommender system algorithms, aimed both at recommender system researchers and practitioners. It addresses two common scenarios in collaborative filtering: rating prediction (e.g. on a scale of 1 to 5 stars) and item prediction from positive-only implicit feedback (e.g. from clicks or purchase actions). The library offers state-of-the-art algorithms for those two tasks. Programs that expose most of the library's functionality, plus a GUI demo, are included in the package. Efficient data structures and a common API are used by the implemented algorithms, and may be used to implement further algorithms. The API also contains methods for real-time updates and loading/storing of already trained recommender models.\n MyMediaLite is free/open source software, distributed under the terms of the GNU General Public License (GPL). Its methods have been used in four different industrial field trials of the MyMedia project, including one trial involving over 50,000 households.",
"title": ""
}
] |
[
{
"docid": "79729b8f7532617015cbbdc15a876a5c",
"text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.",
"title": ""
},
{
"docid": "c6739c19b24deef9efcb3da866b9ddbc",
"text": "Market makers have to continuously set bid and ask quotes for the stocks they have under consideration. Hence they face a complex optimization problem in which their return, based on the bid-ask spread they quote and the frequency they indeed provide liquidity, is challenged by the price risk they bear due to their inventory. In this paper, we provide optimal bid and ask quotes and closed-form approximations are derived using spectral arguments.",
"title": ""
},
{
"docid": "75d9b0e67b57a8be7675854b19b50915",
"text": "In the paper, we describe analysis of Vivaldi antenna array aimed for microwave image application and SAR application operating at Ka band. The antenna array is fed by a SIW feed network for its low insertion loss and broadband performances in millimeter wave range. In our proposal we have replaced the large feed network by a simple relatively broadband network of compact size to reduce the losses in substrate integrated waveguide (SIW) and save space on PCB. The feed network is power 8-way divider fed by a wideband SIW-GCPW transition and directly connected to the antenna elements. The final antenna array will be designed, fabricated and obtained measured results will be compared with numerical ones.",
"title": ""
},
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "85bec4c1332b324f4eb85a84647c6a95",
"text": "Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book and Addison-Wesley was aware of a trademark claim, the designations have been printed in initial capital letters. Aureet's memory is a blessing.",
"title": ""
},
{
"docid": "3aa35438449590f17163bda1c683c590",
"text": "Traditional barcode recognition algorithm usually do not fit the cylindrical code but the one on flat surface. This paper proposes a low-cost approach to implement recognition of the curved QR codes printed on bottles or cans. Finder patterns are extracted from detecting module width proportion and corners of contours and an efficient direct least-square ellipse fitting method is employed to extract the elliptic edge and the boundary of code region. Then the code is reconstructed by direct mapping from the stereoscopic coordinates to the image plane using the 3D back-projection, thus the data of code could be restored. Compared with previous approaches, the proposed algorithm outperforms in not only the computation amount but also higher accuracy of the barcode recognition, whether in the flat or the cylindrical surface.",
"title": ""
},
{
"docid": "105b179a6cb824f6edb04d703a9f42a8",
"text": "This paper is concerned with the problem of robust H∞ output feedback control for a class of continuous-time Takagi-Sugeno (T-S) fuzzy affine dynamic systems using quantized measurements. The objective is to design a suitable observer-based dynamic output feedback controller that guarantees the global stability of the resulting closed-loop fuzzy system with a prescribed H∞ disturbance attenuation level. Based on common/piecewise quadratic Lyapunov functions combined with S-procedure and some matrix inequality convexification techniques, some new results are developed to the controller synthesis for the underlying continuous-time T-S fuzzy affine systems with unmeasurable premise variables. All the solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, two simulation examples are provided to illustrate the advantages of the proposed approaches.",
"title": ""
},
{
"docid": "8ae0e3101adba2373fc44209c6e3b651",
"text": "Many predictive tasks of web applications need to model categorical variables, such as user IDs and demographics like genders and occupations. To apply standard machine learning techniques, these categorical predictors are always converted to a set of binary features via one-hot encoding, making the resultant feature vector highly sparse. To learn from such sparse data effectively, it is crucial to account for the interactions between features.\n Factorization Machines (FMs) are a popular solution for efficiently using the second-order feature interactions. However, FM models feature interactions in a linear way, which can be insufficient for capturing the non-linear and complex inherent structure of real-world data. While deep neural networks have recently been applied to learn non-linear feature interactions in industry, such as the Wide&Deep by Google and DeepCross by Microsoft, the deep structure meanwhile makes them difficult to train.\n In this paper, we propose a novel model Neural Factorization Machine (NFM) for prediction under sparse settings. NFM seamlessly combines the linearity of FM in modelling second-order feature interactions and the non-linearity of neural network in modelling higher-order feature interactions. Conceptually, NFM is more expressive than FM since FM can be seen as a special case of NFM without hidden layers. Empirical results on two regression tasks show that with one hidden layer only, NFM significantly outperforms FM with a 7.3% relative improvement. Compared to the recent deep learning methods Wide&Deep and DeepCross, our NFM uses a shallower structure but offers better performance, being much easier to train and tune in practice.",
"title": ""
},
{
"docid": "019e48981d451eed66ffcfbee8edddb0",
"text": "We consider open government (OG) within the context of e-government and its broader implications for the future of public administration. We argue that the current US Administration's Open Government Initiative blurs traditional distinctions between e-democracy and e-government by incorporating historically democratic practices, now enabled by emerging technology, within administrative agencies. We consider how transparency, participation, and collaboration function as democratic practices in administrative agencies, suggesting that these processes are instrumental attributes of administrative action and decision making, rather than the objective of administrative action, as they appear to be currently treated. We propose alternatively that planning and assessing OG be addressed within a \"public value\" framework. The creation of public value is the goal of public organizations; through public value, public organizations meet the needs and wishes of the public with respect to substantive benefits as well as the intrinsic value of better government. We extend this view to OG by using the framework as a way to describe the value produced when interaction between government and citizens becomes more transparent, participative, and collaborative, i.e., more democratic.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
{
"docid": "a064a4b8e19068526e417643788d0b04",
"text": "Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.",
"title": ""
},
{
"docid": "c98d96d2263aa1c701accae83b451fca",
"text": "Cannabidiol (CBD), a major phytocannabinoid constituent of cannabis, is attracting growing attention in medicine for its anxiolytic, antipsychotic, antiemetic and anti-inflammatory properties. However, up to this point, a comprehensive literature review of the effects of CBD in humans is lacking. The aim of the present systematic review is to examine the randomized and crossover studies that administered CBD to healthy controls and to clinical patients. A systematic search was performed in the electronic databases PubMed and EMBASE using the key word \"cannabidiol\". Both monotherapy and combination studies (e.g., CBD + ∆9-THC) were included. A total of 34 studies were identified: 16 of these were experimental studies, conducted in healthy subjects, and 18 were conducted in clinical populations, including multiple sclerosis (six studies), schizophrenia and bipolar mania (four studies), social anxiety disorder (two studies), neuropathic and cancer pain (two studies), cancer anorexia (one study), Huntington's disease (one study), insomnia (one study), and epilepsy (one study). Experimental studies indicate that a high-dose of inhaled/intravenous CBD is required to inhibit the effects of a lower dose of ∆9-THC. Moreover, some experimental and clinical studies suggest that oral/oromucosal CBD may prolong and/or intensify ∆9-THC-induced effects, whereas others suggest that it may inhibit ∆9-THC-induced effects. Finally, preliminary clinical trials suggest that high-dose oral CBD (150-600 mg/d) may exert a therapeutic effect for social anxiety disorder, insomnia and epilepsy, but also that it may cause mental sedation. Potential pharmacokinetic and pharmacodynamic explanations for these results are discussed.",
"title": ""
},
{
"docid": "fc9699b4382b1ddc6f60fc6ec883a6d3",
"text": "Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. Bisection bandwidth is low, severly constraining distributed computation Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components).\n Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more,monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.",
"title": ""
},
{
"docid": "28d573b9b32a8f95618a01f1e5e43a01",
"text": "When trying to satisfy an information need, smartphone users frequently transition from mobile search engines to mobile apps and vice versa. However, little is known about the nature of these transitions nor how mobile search and mobile apps interact. We report on a 2-week, mixed-method study involving 18 Android users, where we collected real-world mobile search and mobile app usage data alongside subjective insights on why certain interactions between apps and mobile search occur. Our results show that when people engage with mobile search they tend to interact with more mobile apps and for longer durations. We found that certain categories of apps are used more intensely alongside mobile search. Furthermore we found differences in app usage before and after mobile search and show how mobile app interactions can both prompt mobile search and enable users to take action. We conclude with a discussion on what these patterns mean for mobile search and how we might design mobile search experiences that take these app interactions into account.",
"title": ""
},
{
"docid": "34a6fe0c5183f19d4f25a99b3bcd205e",
"text": "In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.",
"title": ""
},
{
"docid": "6b78a4b493e67dc367710a0cbd9e313b",
"text": "The identification of glandular tissue in breast X-rays (mammograms) is important both in assessing asymmetry between left and right breasts, and in estimating the radiation risk associated with mammographic screening. The appearance of glandular tissue in mammograms is highly variable, ranging from sparse streaks to dense blobs. Fatty regions are generally smooth and dark. Texture analysis provides a flexible approach to discriminating between glandular and fatty regions. We have performed a series of experiments investigating the use of granulometry and texture energy to classify breast tissue. Results of automatic classifications have been compared with a consensus annotation provided by two expert breast radiologists. On a set of 40 mammograms, a correct classification rate of 80% has been achieved using texture energy analysis.",
"title": ""
},
{
"docid": "931a719037feac7a3addcdcf08312db3",
"text": "Automatic detection and recognition of road signs is an important component of automated driver assistance systems contributing to the safety of the drivers, pedestrians and vehicles. Despite significant research, the problem of detecting and recognizing road signs still remains challenging due to varying lighting conditions, complex backgrounds and different viewing angles. We present an effective and efficient method for detection and recognition of traffic signs from images. Detection is carried out by performing color based segmentation followed by application of Hough transform to find circles, triangles or rectangles. Recognition is carried out using three state-of-the-art feature matching techniques, SIFT, SURF and BRISK. The proposed system evaluated on a custom developed dataset reported promising detection and recognition results. A comparative analysis of the three descriptors reveal that while SIFT achieves the best recognition rates, BRISK is the most efficient of the three descriptors in terms of computation time.",
"title": ""
},
{
"docid": "b51f9ac729241f626d6ee38125912f5d",
"text": "INTRODUCTION\nFor many patients with gender dysphoria, gender-confirmation surgery (GCS) helps align their physical characteristics with their gender identity and is a fundamental element of comprehensive treatment. This article is the 2nd in a 3-part series about the treatment of gender dysphoria. Non-operative management was covered in part 1. This section begins broadly by reviewing surgical eligibility criteria, benefits of GCS, and factors associated with regret for transgender men and women. Then, the scope narrows to focus on aspects of feminizing genital GCS, including a discussion of vaginoplasty techniques, complications, and sexual function outcomes. Part 3 features operative considerations for masculinizing genital GCS.\n\n\nAIM\nTo summarize the World Professional Association for Transgender Health's (WPATH) surgical eligibility criteria and describe how patients with gender dysphoria benefit from GCS, provide an overview of genital and non-genital feminizing gender-confirmation procedures, and review vaginoplasty techniques, preoperative considerations, complications, and outcomes.\n\n\nMETHODS\nA review of relevant literature through April 2017 was performed using PubMed.\n\n\nMAIN OUTCOME MEASURES\nReview of literature related to surgical eligibility criteria for GCS, benefits of GCS, and surgical considerations for feminizing genitoplasty.\n\n\nRESULTS\nMost transgender men and women who satisfy WPATH eligibility criteria experience improved quality of life, overall happiness, and sexual function after GCS; regret is rare. Penile inversion vaginoplasty is the preferred technique for feminizing genital GCS according to most surgeons, including the authors whose surgical technique is described. Intestinal vaginoplasty is reserved for certain scenarios. After vaginoplasty most patients report overall high satisfaction with their sexual function even when complications occur, because most are minor and easily treatable.\n\n\nCONCLUSION\nGCS alleviates gender dysphoria for appropriately selected transgender men and women. Preoperative, intraoperative, and postoperative considerations of feminizing genital gender-confirmation procedures were reviewed. Hadj-Moussa M, Ohl DA, Kuzon WM. Feminizing Genital Gender-Confirmation Surgery. Sex Med Rev 2018;6:457-468.",
"title": ""
},
{
"docid": "be426354d0338b2b5a17503d30c9665c",
"text": "0141-9331/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.micpro.2011.06.002 ⇑ Corresponding author. E-mail address: jmanikandan.nitt@gmail.com (J. M In this paper, Texas Instruments TMS320C6713 DSP based real-time speech recognition system using Modified One Against All Support Vector Machine (SVM) classifier is proposed. The major contributions of this paper are: the study and evaluation of the performance of the classifier using three feature extraction techniques and proposal for minimizing the computation time for the classifier. From this study, it is found that the recognition accuracies of 93.33%, 98.67% and 96.67% are achieved for the classifier using Mel Frequency Cepstral Coefficients (MFCC) features, zerocrossing (ZC) and zerocrossing with peak amplitude (ZCPA) features respectively. To reduce the computation time required for the systems, two techniques – one using optimum threshold technique for the SVM classifier and another using linear assembly are proposed. The ZC based system requires the least computation time and the above techniques reduce the execution time by a factor of 6.56 and 5.95 respectively. For the purpose of comparison, the speech recognition system is also implemented using Altera Cyclone II FPGA with Nios II soft processor and custom instructions. Of the two approaches, the DSP approach requires 87.40% less number of clock cycles. Custom design of the recognition system on the FPGA without using the soft-core processor would have resulted in less computational complexity. The proposed classifier is also found to reduce the number of support vectors by a factor of 1.12–3.73 when applied to speaker identification and isolated letter recognition problems. The techniques proposed here can be adapted for various other SVM based pattern recognition systems. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c425efc86d67fdd6cbeee1dcf0e10ad5",
"text": "Traffic scene perception (TSP) aims to extract accurate real-time on-road environment information, which involves three phases: detection of objects of interest, recognition of detected objects, and tracking of objects in motion. Since recognition and tracking often rely on the results from detection, the ability to detect objects of interest effectively plays a crucial role in TSP. In this paper, we focus on three important classes of objects: traffic signs, cars, and cyclists. We propose to detect all the three important objects in a single learning-based detection framework. The proposed framework consists of a dense feature extractor and detectors of three important classes. Once the dense features have been extracted, these features are shared with all detectors. The advantage of using one common framework is that the detection speed is much faster, since all dense features need only to be evaluated once in the testing phase. In contrast, most previous works have designed specific detectors using different features for each of these three classes. To enhance the feature robustness to noises and image deformations, we introduce spatially pooled features as a part of aggregated channel features. In order to further improve the generalization performance, we propose an object subcategorization method as a means of capturing the intraclass variation of objects. We experimentally demonstrate the effectiveness and efficiency of the proposed framework in three detection applications: traffic sign detection, car detection, and cyclist detection. The proposed framework achieves the competitive performance with state-of-the-art approaches on several benchmark data sets.",
"title": ""
}
] |
scidocsrr
|
35ee39bf4c59a43bdd421c8c938c0de3
|
Geometrical description and structural analysis of a modular timber structure
|
[
{
"docid": "8d9fef4de18e4b84db3ae0ae684a3a1d",
"text": "Seven form-finding methods for tensegrity structures are reviewed and classified. The three kinematical methods include an analytical approach, a non-linear optimisation, and a pseudo-dynamic iteration. The four statical methods include an analytical method, the formulation of linear equations of equilibrium in terms of force densities, an energy minimisation, and a search for the equilibrium configurations of the struts of the structure connected by cables whose lengths are to be determined, using a reduced set of equilibrium equations. It is concluded that the kinematical methods are best suited to obtaining only configuration details of structures that are already essentially known, the force density method is best suited to searching for new configurations, but affords no control over the lengths of the elements of the structure. The reduced coordinates method offers a greater control on elements lengths, but requires more extensive symbolic manipulations.",
"title": ""
}
] |
[
{
"docid": "2fd06457db3dfb09af108d22607a923d",
"text": "An analysis of an on-chip buck converter is presented in this paper. A high switching frequency is the key design parameter that simultaneously permits monolithic integration and high efficiency. A model of the parasitic impedances of a buck converter is developed. With this model, a design space is determined that allows integration of active and passive devices on the same die for a target technology. An efficiency of 88.4% at a switching frequency of 477 MHz is demonstrated for a voltage conversion from 1.2–0.9 volts while supplying 9.5 A average current. The area occupied by the buck converter is 12.6 mm assuming an 80-nm CMOS technology. An estimate of the efficiency is shown to be within 2.4% of simulation at the target design point. Full integration of a high-efficiency buck converter on the same die with a dualmicroprocessor is demonstrated to be feasible.",
"title": ""
},
{
"docid": "a9ad415524996446ea1204ad5ff11d89",
"text": "Crime against women is increasing at an alarming rate in almost all parts of India. Women in the Indian society have been victims of humiliation, torture and exploitation. It has even existed in the past but only in the recent years the issues have been brought to the open for concern. According to the latest data released by the National Crime Records Bureau (NCRB), crime against women have increased more than doubled over the past ten years. While a number of analyses have been done in the field of crime pattern detection, none have done an extensive study on the crime against women in India. The present paper describes a behavioural analysis of crime against women in India from the year 2001 to 2014. The study evaluates the efficacy of Infomap clustering algorithm for detecting communities of states and union territories in India based on crimes. As it is a graph based clustering approach, all the states of India along with the union territories have been considered as nodes of the graph and similarity among the nodes have been measured based on different types of crimes. Each community is a group of states and / or union territories which are similar based on crime trends. Initially, the method finds the communities based on current year crime data, subsequently at the end of a year when new crime data for the next year is available, the graph is modified and new communities are formed. The process is repeated year wise that helps to predict how crime against women has significantly increased in various states of India over the past years. It also helps in rapid visualisation and identification of states which are densely affected with crimes. This approach proves to be quite effective and can also be used for analysing the global crime scenario.",
"title": ""
},
{
"docid": "3e1ff2ac72da8525d358c5dcf160c4b4",
"text": "Esthetic management of extensively decayed primary maxillary anterior teeth requiring full coronal coverage restoration is usually challenging to the pediatric dentists especially in very young children. Many esthetic options have been tried over the years each having its own advantages, disadvantages and associated technical, functional or esthetic limitations. Zirconia crowns have provided a treatment alternative to address the esthetic concerns and ease of placement of extra-coronal restorations on primary anterior teeth. The present article presents a case where grossly decayed maxillary primary incisors were restored esthetically and functionally with ready made zirconia crowns (ZIRKIZ, HASS Corp; Korea). After endodontic treatment the decayed teeth were restored with zirconia crowns. Over a 30 months period, the crowns have demonstrated good retention and esthetic results. Dealing with esthetic needs in children with extensive loss of tooth structure, using Zirconia crowns would be practical and successful. The treatment described is simple and effective and represents a promising alternative for rehabilitation of decayed primary teeth.",
"title": ""
},
{
"docid": "1397a3996f2283ff718512af5b9a6294",
"text": "Two experiments showed that framing an athletic task as diagnostic of negative racial stereotypes about Black or White athletes can impede their performance in sports. In Experiment 1, Black participants performed significantly worse than did control participants when performance on a golf task was framed as diagnostic of \"sports intelligence.\" In comparison, White participants performed worse than did control participants when the golf task was framed as diagnostic of \"natural athletic ability.\" Experiment 2 observed the effect of stereotype threat on the athletic performance of White participants for whom performance in sports represented a significant measure of their self-worth. The implications of the findings for the theory of stereotype threat (C. M. Steele, 1997) and for participation in sports are discussed.",
"title": ""
},
{
"docid": "7e422bc9e691d552543c245e7c154cbf",
"text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.",
"title": ""
},
{
"docid": "0c19ca429652a17dac44940a0f769595",
"text": "To inform theory and to investigate the practical application of prediction markets in a setting where the distribution of information across agents is critical, we conducted markets designed to forecast post-IPO valuations before a particularly unique IPO: Google. Because prediction markets allow us to infer the distribution of information before the IPO, the combination of results from our markets and the unique features of the IPO help us distinguish between underpricing theories. The evidence leans against theories which require large payments to buyers to overcome problems of asymmetric information between issuers and buyers. It is most consistent with theories where underpricing is in exchange for future benefits. This is but one of many potential applications for prediction markets in testing information-based theories. JEL Classification Codes: C53, C93, G10, G14, G24, G32",
"title": ""
},
{
"docid": "14c786d87fc06ab85ad41f6f6c30db21",
"text": "When an attacker tries to penetrate the network, there are many defensive systems, including intrusion detection systems (IDSs). Most IDSs are capable of detecting many attacks, but can not provide a clear idea to the analyst because of the huge number of false alerts generated by these systems. This weakness in the IDS has led to the emergence of many methods in which to deal with these alerts, minimize them and highlight the real attacks. It has come to a stage to take a stock of the research results a comprehensive view so that further research in this area will be motivated objectively to fulfill the gaps",
"title": ""
},
{
"docid": "0ce57a66924192a50728fb67023e0ed2",
"text": "Most studies on TCP over multi-hop wireless ad hoc networks have only addressed the issue of performance degradation due to temporarily broken routes, which results in TCP inability to distinguish between losses due to link failures or congestion. This problem tends to become more serious as network mobility increases. In this work, we tackle the equally important capture problem to which there has been little or no solution, and is present mostly in static and low mobility multihop wireless networks. This is a result of the interplay between the MAC layer and TCP backoff policies, which causes nodes to unfairly capture the wireless shared medium, hence preventing neighboring nodes to access the channel. This has been shown to have major negative effects on TCP performance comparable to the impact of mobility. We propose a novel algorithm, called COPAS (COntention-based PAth Selection), which incorporates two mechanisms to enhance TCP performance by avoiding capture conditions. First, it uses disjoint forward (sender to receiver for TCP data) and reverse (receiver to sender for TCP ACKs) paths in order to minimize the conflicts of TCP data and ACK packets. Second, COPAS employs a dynamic contentionbalancing scheme where it continuously monitors and changes forward and reverse paths according to the level of MAC layer contention, hence minimizing the likelihood of capture. Through extensive simulation, COPAS is shown to improve TCP throughput by up to 90% while keeping routing overhead low.",
"title": ""
},
{
"docid": "13ae9c0f1c802de86b80906558b27713",
"text": "Anaerobic saccharolytic bacteria thriving at high pH values were studied in a cellulose-degrading enrichment culture originating from the alkaline lake, Verkhneye Beloye (Central Asia). In situ hybridization of the enrichment culture with 16S rRNA-targeted probes revealed that abundant, long, thin, rod-shaped cells were related to Cytophaga. Bacteria of this type were isolated with cellobiose and five isolates were characterized. Isolates were thin, flexible, gliding rods. They formed a spherical cyst-like structure at one cell end during the late growth phase. The pH range for growth was 7.5–10.2, with an optimum around pH 8.5. Cultures produced a pinkish pigment tentatively identified as a carotenoid. Isolates did not degrade cellulose, indicating that they utilized soluble products formed by so far uncultured hydrolytic cellulose degraders. Besides cellobiose, the isolates utilized other carbohydrates, including xylose, maltose, xylan, starch, and pectin. The main organic fermentation products were propionate, acetate, and succinate. Oxygen, which was not used as electron acceptor, impaired growth. A representative isolate, strain Z-7010, with Marinilabilia salmonicolor as the closest relative, is described as a new genus and species, Alkaliflexus imshenetskii. This is the first cultivated alkaliphilic anaerobic member of the Cytophaga/Flavobacterium/Bacteroides phylum.",
"title": ""
},
{
"docid": "224bacc72ba9785d158f506eea68e4c9",
"text": "A model of commumcations protocols based on finite-state machines is investigated. The problem addressed is how to ensure certain generally desirable properties, which make protocols \"wellformed,\" that is, specify a response to those and only those events that can actually occur. It is determined to what extent the problem is solvable, and one approach to solving it ts described. Categories and SubJect Descriptors' C 2 2 [Computer-Conununication Networks]: Network Protocols-protocol verification; F 1 1 [Computation by Abstract Devices] Models of Computation--automata; G.2.2 [Discrete Mathematics] Graph Theory--graph algoruhms; trees General Terms: Reliability, Verification Additional",
"title": ""
},
{
"docid": "ec4b7c50f3277bb107961c9953fe3fc4",
"text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview",
"title": ""
},
{
"docid": "03b08a01be48aaa76684411b73e5396c",
"text": "The goal of TREC 2015 Clinical Decision Support Track was to retrieve biomedical articles relevant for answering three kinds of generic clinical questions, namely diagnosis, test, and treatment. In order to achieve this purpose, we investigated three approaches to improve the retrieval of relevant articles: modifying queries, improving indexes, and ranking with ensembles. Our final submissions were a combination of several different configurations of these approaches. Our system mainly focused on the summary fields of medical reports. We built two different kinds of indexes – an inverted index on the free text and a second kind of indexes on the Unified Medical Language System (UMLS) concepts within the entire articles that were recognized by MetaMap. We studied the variations of including UMLS concepts at paragraph and sentence level and experimented with different thresholds of MetaMap matching scores to filter UMLS concepts. The query modification process in our system involved automatic query construction, pseudo relevance feedback, and manual inputs from domain experts. Furthermore, we trained a re-ranking sub-system based on the results of TREC 2014 Clinical Decision Support track using Indri’s Learning to Rank package, RankLib. Our experiments showed that the ensemble approach could improve the overall results by boosting the ranking of articles that are near the top of several single ranked lists.",
"title": ""
},
{
"docid": "58119c2fc5e4b9d57d1f1e8f0e525e06",
"text": "OBJECTIVES\nDetecting hints to public health threats as early as possible is crucial to prevent harm from the population. However, many disease surveillance strategies rely upon data whose collection requires explicit reporting (data transmitted from hospitals, laboratories or physicians). Collecting reports takes time so that the reaction time grows. Moreover, context information on individual cases is often lost in the collection process. This paper describes a system that tries to address these limitations by processing social media for identifying information on public health threats. The primary objective is to study the usefulness of the approach for supporting the monitoring of a population's health status.\n\n\nMETHODS\nThe developed system works in three main steps: Data from Twitter, blogs, and forums as well as from TV and radio channels are continuously collected and filtered by means of keyword lists. Sentences of relevant texts are classified relevant or irrelevant using a binary classifier based on support vector machines. By means of statistical methods known from biosurveillance, the relevant sentences are further analyzed and signals are generated automatically when unexpected behavior is detected. From the generated signals a subset is selected for presentation to a user by matching with user queries or profiles. In a set of evaluation experiments, public health experts assessed the generated signals with respect to correctness and relevancy. In particular, it was assessed how many relevant and irrelevant signals are generated during a specific time period.\n\n\nRESULTS\nThe experiments show that the system provides information on health events identified in social media. Signals are mainly generated from Twitter messages posted by news agencies. Personal tweets, i.e. tweets from persons observing some symptoms, only play a minor role for signal generation given a limited volume of relevant messages. Relevant signals referring to real world outbreaks were generated by the system and monitored by epidemiologists for example during the European football championship. But, the number of relevant signals among generated signals is still very small: The different experiments yielded a proportion between 5 and 20% of signals regarded as \"relevant\" by the users. Vaccination or education campaigns communicated via Twitter as well as use of medical terms in other contexts than for outbreak reporting led to the generation of irrelevant signals.\n\n\nCONCLUSIONS\nThe aggregation of information into signals results in a reduction of monitoring effort compared to other existing systems. Against expectations, only few messages are of personal nature, reporting on personal symptoms. Instead, media reports are distributed over social media channels. Despite the high percentage of irrelevant signals generated by the system, the users reported that the effort in monitoring aggregated information in form of signals is less demanding than monitoring huge social-media data streams manually. It remains for the future to develop strategies for reducing false alarms.",
"title": ""
},
{
"docid": "7fbabad906acad0cda82776b313a1fdf",
"text": "The mineralogy of terrestrial planets evolves as a consequence of a range of physical, chemical, and biological processes. In pre-stellar molecular clouds, widely dispersed microscopic dust particles contain approximately a dozen refractory minerals that represent the starting point of planetary mineral evolution. Gravitational clumping into a protoplanetary disk, star formation, and the resultant heating in the stellar nebula produce primary refractory constituents of chondritic meteorites, including chondrules and calcium-aluminum inclusions, with ~60 different mineral phases. Subsequent aqueous and thermal alteration of chondrites, asteroidal accretion and differentiation, and the consequent formation of achondrites results in a mineralogical repertoire limited to ~250 different minerals found in unweathered meteorite samples. Following planetary accretion and differentiation, the initial mineral evolution of Earth’s crust depended on a sequence of geochemical and petrologic processes, including volcanism and degassing, fractional crystallization, crystal settling, assimilation reactions, regional and contact metamorphism, plate tectonics, and associated large-scale fluid-rock interactions. These processes produced the first continents with their associated granitoids and pegmatites, hydrothermal ore deposits, metamorphic terrains, evaporites, and zones of surface weathering, and resulted in an estimated 1500 different mineral species. According to some origin-of-life scenarios, a planet must progress through at least some of these stages of chemical processing as a prerequisite for life. Biological processes began to affect Earth’s surface mineralogy by the Eoarchean Era (~3.85–3.6 Ga), when large-scale surface mineral deposits, including banded iron formations, were precipitated under the influences of changing atmospheric and ocean chemistry. The Paleoproterozoic “Great Oxidation Event” (~2.2 to 2.0 Ga), when atmospheric oxygen may have risen to >1% of modern levels, and the Neoproterozoic increase in atmospheric oxygen, which followed several major glaciation events, ultimately gave rise to multicellular life and skeletal biomineralization and irreversibly transformed Earth’s surface mineralogy. Biochemical processes may thus be responsible, directly or indirectly, for most of Earth’s 4300 known mineral species. The stages of mineral evolution arise from three primary mechanisms: (1) the progressive separation and concentration of the elements from their original relatively uniform distribution in the pre-solar nebula; (2) an increase in range of intensive variables such as pressure, temperature, and the activities of H2O, CO2, and O2; and (3) the generation of far-from-equilibrium conditions by living systems. The sequential evolution of Earth’s mineralogy from chondritic simplicity to Phanerozoic complexity introduces the dimension of geologic time to mineralogy and thus provides a dynamic alternate approach to framing, and to teaching, the mineral sciences.",
"title": ""
},
{
"docid": "6fbd64c7b38493c432bb140c544f3235",
"text": "It is well-known that people love food. However, an insane diet can cause problems in the general health of the people. Since health is strictly linked to the diet, advanced computer vision tools to recognize food images (e.g. acquired with mobile/wearable cameras), as well as their properties (e.g., calories), can help the diet monitoring by providing useful information to the experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). The food recognition is a challenging task since the food is intrinsically deformable and presents high variability in appearance. Image representation plays a fundamental role. To properly study the peculiarities of the image representation in the food application context, a benchmark dataset is needed. These facts motivate the work presented in this paper. In this work we introduce the UNICT-FD889 dataset. It is the first food image dataset composed by over 800 distinct plates of food which can be used as benchmark to design and compare representation models of food images. We exploit the UNICT-FD889 dataset for Near Duplicate Image Retrieval (NDIR) purposes by comparing three standard state-of-the-art image descriptors: Bag of Textons, PRICoLBP and SIFT. Results confirm that both textures and colors are fundamental properties in food representation. Moreover the experiments point out that the Bag of Textons representation obtained considering the color domain is more accurate than the other two approaches for NDIR.",
"title": ""
},
{
"docid": "0521fe73626d12a3962934cf2b2ee2e9",
"text": "General as well as the MSW management in Thailand is reviewed in this paper. Topics include the MSW generation, sources, composition, and trends. The review, then, moves to sustainable solutions for MSW management, sustainable alternative approaches with an emphasis on an integrated MSW management. Information of waste in Thailand is also given at the beginning of this paper for better understanding of later contents. It is clear that no one single method of MSW disposal can deal with all materials in an environmentally sustainable way. As such, a suitable approach in MSW management should be an integrated approach that could deliver both environmental and economic sustainability. With increasing environmental concerns, the integrated MSW management system has a potential to maximize the useable waste materials as well as produce energy as a by-product. In Thailand, the compositions of waste (86%) are mainly organic waste, paper, plastic, glass, and metal. As a result, the waste in Thailand is suitable for an integrated MSW management. Currently, the Thai national waste management policy starts to encourage the local administrations to gather into clusters to establish central MSW disposal facilities with suitable technologies and reducing the disposal cost based on the amount of MSW generated. Keywords— MSW, management, sustainable, Thailand",
"title": ""
},
{
"docid": "eb5f3e139422ae4bb2ca73467ac1287d",
"text": "OBJECTIVES\nAutism spectrum disorders (ASD) are diagnosed based on early-manifesting clinical symptoms, including markedly impaired social communication. We assessed the viability of resting-state functional MRI (rs-fMRI) connectivity measures as diagnostic biomarkers for ASD and investigated which connectivity features are predictive of a diagnosis.\n\n\nMETHODS\nRs-fMRI scans from 59 high functioning males with ASD and 59 age- and IQ-matched typically developing (TD) males were used to build a series of machine learning classifiers. Classification features were obtained using 3 sets of brain regions. Another set of classifiers was built from participants' scores on behavioral metrics. An additional age and IQ-matched cohort of 178 individuals (89 ASD; 89 TD) from the Autism Brain Imaging Data Exchange (ABIDE) open-access dataset (http://fcon_1000.projects.nitrc.org/indi/abide/) were included for replication.\n\n\nRESULTS\nHigh classification accuracy was achieved through several rs-fMRI methods (peak accuracy 76.67%). However, classification via behavioral measures consistently surpassed rs-fMRI classifiers (peak accuracy 95.19%). The class probability estimates, P(ASD|fMRI data), from brain-based classifiers significantly correlated with scores on a measure of social functioning, the Social Responsiveness Scale (SRS), as did the most informative features from 2 of the 3 sets of brain-based features. The most informative connections predominantly originated from regions strongly associated with social functioning.\n\n\nCONCLUSIONS\nWhile individuals can be classified as having ASD with statistically significant accuracy from their rs-fMRI scans alone, this method falls short of biomarker standards. Classification methods provided further evidence that ASD functional connectivity is characterized by dysfunction of large-scale functional networks, particularly those involved in social information processing.",
"title": ""
},
{
"docid": "318a4af201ed3563443dcbe89c90b6b4",
"text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation",
"title": ""
},
{
"docid": "f0e3ee75e00ce2504a66523a7b304098",
"text": "In this paper, we describe a novel imagebased person identification task. Traditional facebased person identification methods have a low tolerance for occluded situation, such as overlapping of people in an image. We focus on an image from an overhead camera. Using the overhead camera reduces a restriction of the installation location of a camera and solves the problem of occluded images. First, our method identifies the person’s area in a captured image by using background subtraction. Then, it extracts four features from the area; (1) body size, (2) hair color, (3) hairstyle and (4) hair whorl. We apply the four features into the AdaBoost algorithm. Experimental result shows the effectiveness of our method.",
"title": ""
},
{
"docid": "0ee37f981c8967fa9376f43add592d35",
"text": "In this paper, we show how a 3D Morphable Model (i.e. a statistical model of the 3D shape of a class of objects such as faces) can be used to spatially transform input data as a module (a 3DMM-STN) within a convolutional neural network. This is an extension of the original spatial transformer network in that we are able to interpret and normalise 3D pose changes and self-occlusions. The trained localisation part of the network is independently useful since it learns to fit a 3D morphable model to a single image. We show that the localiser can be trained using only simple geometric loss functions on a relatively small dataset yet is able to perform robust normalisation on highly uncontrolled images including occlusion, self-occlusion and large pose changes.",
"title": ""
}
] |
scidocsrr
|
f3965f9c66c57f297199d82c30c1cf3c
|
Data analysis of Li-Ion and lead acid batteries discharge parameters with Simulink-MATLAB
|
[
{
"docid": "5208762a8142de095c21824b0a395b52",
"text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.",
"title": ""
}
] |
[
{
"docid": "c355dc8d0ec6b673cea3f2ab39d13701",
"text": "Errors in estimating and forecasting often result from the failure to collect and consider enough relevant information. We examine whether attributes associated with persistence in information acquisition can predict performance in an estimation task. We focus on actively open-minded thinking (AOT), need for cognition, grit, and the tendency to maximize or satisfice when making decisions. In three studies, participants made estimates and predictions of uncertain quantities, with varying levels of control over the amount of information they could collect before estimating. Only AOT predicted performance. This relationship was mediated by information acquisition: AOT predicted the tendency to collect information, and information acquisition predicted performance. To the extent that available information is predictive of future outcomes, actively open-minded thinkers are more likely than others to make accurate forecasts.",
"title": ""
},
{
"docid": "d0c8e58e06037d065944fc59b0bd7a74",
"text": "We propose a new discrete choice model that generalizes the random utility model (RUM). We show that this model, called the Generalized Stochastic Preference (GSP) model can explain several choice phenomena that can’t be represented by a RUM. In particular, the model can easily (and also exactly) replicate some well known examples that are not RUM, as well as controlled choice experiments carried out since 1980’s that possess strong regularity violations. One of such regularity violation is the decoy effect in which the probability of choosing a product increases when a similar, but inferior product is added to the choice set. An appealing feature of the GSP is that it is non-parametric and therefore it has very high flexibility. The model has also a simple description and interpretation: it builds upon the well known representation of RUM as a stochastic preference, by allowing some additional consumer types to be non-rational.",
"title": ""
},
{
"docid": "3a31192482674f400e6230f35c7bfe38",
"text": "This paper introduces Parsing to Programs, a framework that combines ideas from parsing and probabilistic programming for situated question answering. As a case study, we build a system that solves pre-university level Newtonian physics questions. Our approach represents domain knowledge of Newtonian physics as programs. When presented with a novel question, the system learns a formal representation of the question by combining interpretations from the question text and any associated diagram. Finally, the system uses this formal representation to solve the questions using the domain knowledge. We collect a new dataset of Newtonian physics questions from a number of textbooks and use it to train our system. The system achieves near human performance on held-out textbook questions and section 1 of AP Physics C mechanics - both on practice questions as well as on freely available actual exams held in 1998 and 2012.",
"title": ""
},
{
"docid": "b912b32d9f1f4e7a5067450b98870a71",
"text": "As of May 2013, 56 percent of American adults had a smartphone, and most of them used it to access the Internet. One-third of smartphone users report that their phone is the primary way they go online. Just as the Internet changed retailing in the late 1990s, many argue that the transition to mobile, sometimes referred to as “Web 3.0,” will have a similarly disruptive effect (Brynjolfsson et al. 2013). In this paper, we aim to document some early effects of how mobile devices might change Internet and retail commerce. We present three main findings based on an analysis of eBay’s mobile shopping application and core Internet platform. First, and not surprisingly, the early adopters of mobile e-commerce applications appear",
"title": ""
},
{
"docid": "e42192f9d4d33f92939a04361e1bb706",
"text": "Today bone fractures are very common in our country because of road accidents or through other injuries. The X-Ray images are the most common accessibility of peoples during the accidents. But the minute fracture detection in X-Ray image is not possible due to low resolution and quality of the original X-Ray image. The complexity of bone structure and the difference in visual characteristics of fracture by their location. So it is difficult to accurately detect and locate the fractures also determine the severity of the injury. The automatic detection of fractures in X-Ray images is a significant contribution for assisting the physicians in making faster and more accurate patient diagnostic decisions and treatment planning. In this paper, an automatic hierarchical algorithm for detecting bone fracture in X-Ray image is proposed. It uses the Gray level cooccurrence matrix for detecting the fracture. The results are promising, demonstrating that the proposed method is capable of automatically detecting both major and minor fractures accurately, and shows potential for clinical application. Statistical results also indicate the superiority of the proposed methods compared to other techniques. This paper examines the development of such a system, for the detection of long-bone fractures. This project fully employed MATLAB 7.8.0 (.r2009a) as the programming tool for loading image, image processing and user interface development. Results obtained demonstrate the performance of the pelvic bone fracture detection system with some limitations.",
"title": ""
},
{
"docid": "84d4d99ad90c4d05b827f4dde7f07d52",
"text": "Diffusions of new products and technologies through social networks can be formalized as spreading of infectious diseases. However, while epidemiological models describe infection in terms of transmissibility, we propose a diffusion model that explicitly includes consumer decision-making affected by social influences and word-of-mouth processes. In our agent-based model consumers’ probability of adoption depends on the external marketing effort and on the internal influence that each consumer perceives in his/her personal networks. Maintaining a given marketing effort and assuming its effect on the probability of adoption as linear, we can study how social processes affect diffusion dynamics and how the speed of the diffusion depends on the network structure and on consumer heterogeneity. First, we show that the speed of diffusion changes with the degree of randomness in the network. In markets with high social influence and in which consumers have a sufficiently large local network, the speed is low in regular networks, it increases in small-world networks and, contrarily to what epidemic models suggest, it becomes very low again in random networks. Second, we show that heterogeneity helps the diffusion. Ceteris paribus and varying the degree of heterogeneity in the population of agents simulation results show that the S. A. Delre ( ) . W. Jager Faculty of Management and Organization, Department of Marketing, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands e-mail: s.a.delre@rug.nl W. Jager e-mail: w.jager@rug.nl M. A. Janssen School of Human Evolution and Social Change & Department of Computer Science and Engineering, Arizona State University, Box 872402, Tempe, AZ 85287-2402 e-mail: Marco.Janssen@asu.edu",
"title": ""
},
{
"docid": "903b68096d2559f0e50c38387260b9c8",
"text": "Vitamin C in humans must be ingested for survival. Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.",
"title": ""
},
{
"docid": "34e2eafd055e097e167afe7cb244f99b",
"text": "This paper describes the functional verification effort during a specific hardware development program that included three of the largest ASICs designed at Nortel. These devices marked a transition point in methodology as verification took front and centre on the critical path of the ASIC schedule. Both the simulation and emulation strategies are presented. The simulation methodology introduced new techniques such as ASIC sub-system level behavioural modeling, large multi-chip simulations, and random pattern simulations. The emulation strategy was based on a plan that consisted of integrating parts of the real software on the emulated system. This paper describes how these technologies were deployed, analyzes the bugs that were found and highlights the bottlenecks in functional verification as systems become more complex.",
"title": ""
},
{
"docid": "19ff822c54e6aee920a4a63243d07839",
"text": "Noma is an opportunistic infection promoted by extreme poverty. It evolves rapidly from a gingival inflammation to grotesque orofacial gangrene. It occurs worldwide, but is most common in sub-Saharan Africa. The peak incidence of acute noma is at ages 1-4 years, coinciding with the period of linear growth retardation in deprived children. Noma is a scourge in communities with poor environmental sanitation. It results from complex interactions between malnutrition, infections, and compromised immunity. Diseases that commonly precede noma include measles, malaria, severe diarrhoea, and necrotising ulcerative gingivitis. The acute stage responds readily to antibiotic treatment. The sequelae after healing include variable functional and aesthetic impairments, which require reconstructive surgery. Noma can be prevented through promotion of national awareness of the disease, poverty reduction, improved nutrition, promotion of exclusive breastfeeding in the first 3-6 months of life, optimum prenatal care, and timely immunisations against the common childhood diseases.",
"title": ""
},
{
"docid": "86fd3a2dd99b85f6de59dca495375565",
"text": "To help elderly and physically disabled people to become self-reliant in daily life such as at home or a health clinic, we have developed a network-type brain machine interface (BMI) system called “network BMI” to control real-world actuators like wheelchairs based on human intention measured by a portable brain measurement system. In this paper, we introduce the technologies for achieving the network BMI system to support activities of daily living. key words: brain machine interface, smart house, data analysis, network agent",
"title": ""
},
{
"docid": "48019a3106c6d74e4cfcc5ac596d4617",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "a78782e389313600620bfb68fc57a81f",
"text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.",
"title": ""
},
{
"docid": "6f99c3fe7d99aa7f00a3e3eb8856db97",
"text": "The 3-D modeling technique presented in this paper, predicts, with high accuracy, electromagnetic fields and corresponding dynamic effects in conducting regions for rotating machines with slotless windings, e.g., self-supporting windings. The presented modeling approach can be applied to a wide variety of slotless winding configurations, including skewing and/or different winding shapes. It is capable to account for induced eddy currents in the conductive rotor parts, e.g., permanent-magnet (PM) eddy-current losses, albeit not iron, and winding ac losses. The specific focus of this paper is to provide the reader with the complete implementation and assumptions details of such a 3-D semianalytical approach, which allows model validations with relatively short calculation times. This model can be used to improve future design optimizations for machines with 3-D slotless windings. It has been applied, in this paper, to calculate fixed parameter Faulhaber, rhombic, and diamond slotless PM machines to illustrate accuracy and applicability.",
"title": ""
},
{
"docid": "1cbf4840e09a950a5adfcbbfbd476d6a",
"text": "We introduce an online neural sequence to sequence model that learns to alternate between encoding and decoding segments of the input as it is read. By independently tracking the encoding and decoding representations our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders that have to read and memorize the entire input sequence in their fixedlength hidden states before producing any output. It is different from previous attentive models in that, instead of treating the attention weights as output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.",
"title": ""
},
{
"docid": "d2401987609efcb5a7fe420d48dfec1b",
"text": "Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets. The Fully Independent Training Conditional (FITC) and the Variational Free Energy (VFE) approximations are two recent popular methods. Despite superficial similarities, these approximations have surprisingly different theoretical properties and behave differently in practice. We thoroughly investigate the two methods for regression both analytically and through illustrative examples, and draw conclusions to guide practical application.",
"title": ""
},
{
"docid": "31b449b209beaadbbcc36c485517c3cf",
"text": "While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. To facilitate the creation of novel visualizations we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, but at the same time providing appropriate abstractions which allow developers to create prototypes quickly which can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically-linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system to add behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework we look briefly at three different projects all of which required novel visualizations in different domains, and all of which worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.",
"title": ""
},
{
"docid": "b37064e74a2c88507eacb9062996a911",
"text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.",
"title": ""
},
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "5ccb3ab32054741928b8b93eea7a9ce2",
"text": "A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, identification of time constraints and more. The goal of this paper is to address an important issue in workflows modelling and specification, which is data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflows specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment may prevent the process from correct execution, execute process on inconsistent data or even lead to process suspension. A discussion on essential requirements of the workflow data model in order to support data validation is also given.",
"title": ""
}
] |
scidocsrr
|
be7e75456e2e60a4f6ea8ffead56a717
|
A Combined Wye-Delta Connection to Increase the Performance of Axial-Flux PM Machines With Concentrated Windings
|
[
{
"docid": "97fa48d92c4a1b9d2bab250d5383173c",
"text": "This paper presents a new type of axial flux motor, the yokeless and segmented armature (YASA) topology. The YASA motor has no stator yoke, a high fill factor and short end windings which all increase torque density and efficiency of the machine. Thus, the topology is highly suited for high performance applications. The LIFEcar project is aimed at producing the world's first hydrogen sports car, and the first YASA motors have been developed specifically for the vehicle. The stator segments have been made using powdered iron material which enables the machine to be run up to 300 Hz. The iron in the stator of the YASA motor is dramatically reduced when compared to other axial flux motors, typically by 50%, causing an overall increase in torque density of around 20%. A detailed Finite Element analysis (FEA) analysis of the YASA machine is presented and it is shown that the motor has a peak efficiency of over 95%.",
"title": ""
}
] |
[
{
"docid": "c82c32d057557903184e55f0f76c7a4e",
"text": "An experimental program of steel panel shear walls is outlined and some results are presented. The tested specimens utilized low yield strength (LYS) steel infill panels and reduced beam sections (RBS) at the beam-ends. Two specimens make allowances for penetration of the panel by utilities, which would exist in a retrofit situation. The first, consisting of multiple holes, or perforations, in the steel panel, also has the characteristic of further reducing the corresponding solid panel strength (as compared with the use of traditional steel). The second such specimen utilizes quarter-circle cutouts in the panel corners, which are reinforced to transfer the panel forces to the adjacent framing.",
"title": ""
},
{
"docid": "f74aa960091bef1701dbc616657facb3",
"text": "Adverse reactions and unintended effects can occasionally occur with toxins for cosmetic use, even although they generally have an outstanding safety profile. As the use of fillers becomes increasingly more common, adverse events can be expected to increase as well. This article discusses complication avoidance, addressing appropriate training and proper injection techniques, along with patient selection and patient considerations. In addition to complications, avoidance or amelioration of common adverse events is discussed.",
"title": ""
},
{
"docid": "1f40d62c69146766b47b5683e8819751",
"text": "Studies on protein production using filamentous fungi have mostly focused on improvement of the protein yields by genetic modifications such as overexpression. Recent genome sequencing in several filamentous fungal species now enables more systematic approaches based on reverse genetics and molecular biology of the secretion pathway. In this review, we summarize recent molecular-based advances in our understanding of vesicular trafficking in filamentous fungi, and discuss insights into their high secretion ability and application for protein production.",
"title": ""
},
{
"docid": "cc3b36d8026396a7a931f07ef9d3bcfb",
"text": "Planning an itinerary before traveling to a city is one of the most important travel preparation activities. In this paper, we propose a novel framework called TripPlanner, leveraging a combination of location-based social network (i.e., LBSN) and taxi GPS digital footprints to achieve personalized, interactive, and traffic-aware trip planning. First, we construct a dynamic point-of-interest network model by extracting relevant information from crowdsourced LBSN and taxi GPS traces. Then, we propose a two-phase approach for personalized trip planning. In the route search phase, TripPlanner works interactively with users to generate candidate routes with specified venues. In the route augmentation phase, TripPlanner applies heuristic algorithms to add user's preferred venues iteratively to the candidate routes, with the objective of maximizing the route score while satisfying both the venue visiting time and total travel time constraints. To validate the efficiency and effectiveness of the proposed approach, extensive empirical studies were performed on two real-world data sets from the city of San Francisco, which contain more than 391 900 passenger delivery trips generated by 536 taxis in a month and 110 214 check-ins left by 15 680 Foursquare users in six months.",
"title": ""
},
{
"docid": "0447990a97b8f58a643e7af51678e29c",
"text": "The ultimate aim of realistic graphics is the creation of images that provoke the same responses that a viewer would have to a real scene. This STAR addresses two related key problem areas in this effort which are located at opposite ends of the rendering pipeline, namely the data structures used to describe light during the actual rendering process, and the issue of displaying such radiant intensities in a meaningful way. The interest in the first of these subproblems stems from the fact that it is common industry practice to use RGB colour values to describe light intensity and surface reflectancy. While viable in the context of methods that do not strive to achieve true realism, this approach has to be replaced by more physically accurate techniques if a prediction of nature is intended. The second subproblem is that while research into ways of rendering images provides us with better and faster methods, we do not necessarily see their full effect due to limitations of the display hardware. The low dynamic range of a standard computer monitor requires some form of mapping to produce images that are perceptually accurate. Tone reproduction operators attempt to replicate the effect of real-world luminance intensities. This STAR report will review the work to date on spectral rendering and tone reproduction techniques. It will include an investigation into the need for spectral imagery synthesis methods and accurate tone reproduction, and a discussion of major approaches to physically correct rendering and key tone mapping algorithms. The future of both spectral rendering and tone reproduction techniques will be considered, together with the implications of advances in display hardware.",
"title": ""
},
{
"docid": "fa6e0549b1b41a2a134675404301b9bd",
"text": "To a significant degree, multimedia applications derive their effectiveness from the use of color graphics, images, and videos. In these applications, human visual system (HVS) often gives the final evaluation of the processed results. In this paper, we first propose a novel color image enhancement method, which is named HVS Controlled Color Image Enhancement and Evaluation algorithm (HCCIEE algorithm). We then applied the HCCIEE to color image by considering natural image quality metrics. This HCCIEE algorithm is base on multiscale representation of pattern, luminance, and color processing in the HVS. Experiments illustrated that the HCCIEE algorithm can produce distinguished details without ringing or halo artifacts. (These two problems often occur in conventional multiscale enhancement techniques.) As a result, the experimental results appear as similar as possible to the viewers’ perception of the actual scenes. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "eea4f0555cdf4050bdb4681c7a50c01d",
"text": "In this paper, a review on condition monitoring of induction motors is first presented. Then, an ensemble of hybrid intelligent models that is useful for condition monitoring of induction motors is proposed. The review covers two parts, i.e., (i) a total of nine commonly used condition monitoring methods of induction motors; and (ii) intelligent learning models for condition monitoring of induction motors subject to single and multiple input signals. Based on the review findings, the Motor Current Signature Analysis (MCSA) method is selected for this study owing to its online, non-invasive properties and its requirement of only single input source; therefore leading to a cost-effective condition monitoring method. A hybrid intelligent model that consists of the Fuzzy Min-Max (FMM) neural network and the Random Forest (RF) model comprising an ensemble of Classification and Regression Trees is developed. The majority voting scheme is used to combine the predictions produced by the resulting FMM-RF ensemble (or FMM-RFE) members. A benchmark problem is first deployed to evaluate the usefulness of the FMM-RFE model. Then, the model is applied to condition monitoring of induction motors using a set of real data samples. Specifically, the stator current signals of induction motors are obtained using the MCSA method. The signals are processed to produce a set of harmonic-based features for classification using the FMM-RFE model. The experimental results show good performances in both noise-free and noisy environments. More importantly, a set of explanatory rules in the form of a decision tree can be extracted from the FMM-RFE model to justify its predictions. The outcomes ascertain the effectiveness of the proposed FMM-RFE model in undertaking condition monitoring tasks, especially for induction motors, under different environments.",
"title": ""
},
{
"docid": "532463ff1e5e91a2f9054cb86dcfa654",
"text": "During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-di↵erence time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied for various application contexts including those requiring to model light/matter interactions on the nanoscale. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.",
"title": ""
},
{
"docid": "d9f7d78b6e1802a17225db13edd033f6",
"text": "The edit distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations we admit are deleting, inserting and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit distances. We describe an algorithm for computing the edit distance between two strings of length n and m, n > m, which requires O(n * max( 1, m/log n)) steps whenever the costs of edit operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.",
"title": ""
},
{
"docid": "76d5bb6cd7e6ee374a958100adb4b1b1",
"text": "Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.",
"title": ""
},
{
"docid": "61165fc9e404ef0fdf3c2525845cf032",
"text": "The automated comparison of points of view between two politicians is a very challenging task, due not only to the lack of annotated resources, but also to the different dimensions participating to the definition of agreement and disagreement. In order to shed light on this complex task, we first carry out a pilot study to manually annotate the components involved in detecting agreement and disagreement. Then, based on these findings, we implement different features to capture them automatically via supervised classification. We do not focus on debates in dialogical form, but we rather consider sets of documents, in which politicians may express their position with respect to different topics in an implicit or explicit way, like during an electoral campaign. We create and make available three different datasets.",
"title": ""
},
{
"docid": "72780ac77edf6ee582a1825a9bee8aab",
"text": "Current methods for formation of detected chess-board vertices into a grid structure tend to be weak in situations with a warped grid, and false and missing vertex-features. In this paper we present a highly robust, yet efficient, scheme suitable for inference of regular 2D square mesh structure from vertices recorded both during projection of a chess-board pattern onto 3D objects, and in the more simple case of camera calibration. Examples of the method's performance in a lung function measuring application, observing chess-boards projected on to patients' chests, are given. The method presented is resilient to significant surface deformation, and tolerates inexact vertex-feature detection. This robustness results from the scheme's novel exploitation of feature orientation information.",
"title": ""
},
{
"docid": "4e2fbac1742c7afe9136e274150d6ee9",
"text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel generative model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.",
"title": ""
},
{
"docid": "518090ef17c65c643287c65660eed699",
"text": "AbstructThis paper presents solutions to the entropyconstrained scalar quantizer (ECSQ) design problem for two sources commonly encountered in image and speech compression applications: sources having the exponential and Laplacian probability density functions. We use the memoryless property of the exponential distribution to develop a new noniterative algorithm for obtaining the optimal quantizer design. We show how to obtain the optimal ECSQ either with or without an additional constraint on the number of levels in the quantizer. In contrast to prior methods, which require multidimensional iterative solution of a large number of nonlinear equations, the new method needs only a single sequence of solutions to one-dimensional nonlinear equations (in some Laplacian cases, one additional two-dimensional solution is needed). As a result, the new method is orders of magnitude faster than prior ones. We show that as the constraint on the number of levels in the quantizer is relaxed, the optimal ECSQ becomes a uniform threshold quantizer (UTQ) for exponential, but not for Laplacian sources. We then further examine the performance of the UTQ and optimal ECSQ, and also investigate some interesting alternatives to the UTQ, including a uniform-reconstruction quantizer (URQ) and a constant dead-zone ratio quantizer (CDZRQ).",
"title": ""
},
{
"docid": "2bc456232359ac45a4c23aad36712e3b",
"text": "Augmented Reality (AR) offers many opportunities as an interactive tool to improve learning and teaching processes. This paper presents a pilot study conducted on third graders where a series of AR contents about Natural and Social Sciences have been used as a teaching tool. The main objective of the application is to help the students to learn complex concepts that present a difficult understanding. A user centered approach was followed to create the AR contents, where several teachers collaborated on its definition. The AR application combines 3D models and animations, mini games and quizzes. An evaluation of the educational contents was made from the point of view efficiency (academic achievement), usability and motivation. Results confirm that the use of desktop AR has had a positive impact on the learning and teaching process and reinforce the vision of AR as an affordable and feasible technological tool to support the students' learning activities.",
"title": ""
},
{
"docid": "0f421a4ee46535f01390e04fa24b5502",
"text": "Wireless sensor networks (WSNs) are autonomous networks of spatially distributed sensor nodes that are capable of wirelessly communicating with each other in a multihop fashion. Among different metrics, network lifetime and utility, and energy consumption in terms of carbon footprint are key parameters that determine the performance of such a network and entail a sophisticated design at different abstraction levels. In this paper, wireless energy harvesting (WEH), wake-up radio (WUR) scheme, and error control coding (ECC) are investigated as enabling solutions to enhance the performance of WSNs while reducing its carbon footprint. Specifically, a utility-lifetime maximization problem incorporating WEH, WUR, and ECC, is formulated and solved using distributed dual subgradient algorithm based on the Lagrange multiplier method. Discussion and verification through simulation results show how the proposed solutions improve network utility, prolong the lifetime, and pave the way for a greener WSN by reducing its carbon footprint.",
"title": ""
},
{
"docid": "ece5a86fef126ae5224c4a9f56fe787c",
"text": "Security places an important role in communication applications for secure data transfers. Image Steganography is one of the most reliable technique in encryption and decryption of an image (hidden) inside other image (cover) such way that only cover image is visible. In this paper frequency domain Image Steganography using DWT and Modified LSB technique is proposed. The proposed approach uses DWT to convert spatial domain information to frequency domain information. The LL band is used for further Image Steganographic process. The image is decoded using inverse LSB. Since the LL band is used for encoding and decoding purpose, memory requirement of the design is less for hardware implementation. Also this will increase the operating frequency of the architecture. The proposed technique obtains high PSNR for both stegano and recovered hidden image.",
"title": ""
},
{
"docid": "0e459d7e3ffbf23c973d4843f701a727",
"text": "The role of psychological flexibility in mental health stigma and psychological distress for the stigmatizer.",
"title": ""
},
{
"docid": "e6e74971af2576ff119d277927727659",
"text": "In Germany there is limited information available about the distribution of the tropical rat mite (Ornithonyssus bacoti) in rodents. A few case reports show that this hematophagous mite species may also cause dermatitis in man. Having close body contact to small rodents is an important question for patients with pruritic dermatoses. The definitive diagnosis of this ectoparasitosis requires the detection of the parasite, which is more likely to be found in the environment of its host (in the cages, in the litter or in corners or cracks of the living area) than on the hosts' skin itself. A case of infestation with tropical rat mites in a family is reported here. Three mice that had been removed from the home two months before were the reservoir. The mites were detected in a room where the cage with the mice had been placed months ago. Treatment requires the eradication of the parasites on its hosts (by a veterinarian) and in the environment (by an exterminator) with adequate acaricides such as permethrin.",
"title": ""
},
{
"docid": "4c8cff1e750c8f4c9fc42df7113e0212",
"text": "Misdiagnosis is frequent in scabies of infants and children because of a low index of suspicion, secondary eczematous changes, and inappropriate therapy. Topical or systemic corticosteroids may modify the clinical presentation of scabies and that situation is referred to as scabies incognito. We describe a 10-month-old infant with scabies incognito mimicking urticaria pigmentosa.",
"title": ""
}
] |
scidocsrr
|
ddda67328d0fad84adf27ae539be4204
|
AN EXHAUSTIVE STUDY ON ASSOCIATION RULE MINING
|
[
{
"docid": "0d5ca0e11363cae0b4d7f335cf832e24",
"text": "This paper presents an investigation into two fuzzy association rule mining models for enhancing prediction performance. The first model (the FCM-Apriori model) integrates Fuzzy C-Means (FCM) and the Apriori approach for road traffic performance prediction. FCM is used to define the membership functions of fuzzy sets and the Apriori approach is employed to identify the Fuzzy Association Rules (FARs). The proposed model extracts knowledge from a database for a Fuzzy Inference System (FIS) that can be used in prediction of a future value. The knowledge extraction process and the performance of the model are demonstrated through two case studies of road traffic data sets with different sizes. The experimental results show the merits and capability of the proposed KD model in FARs based knowledge extraction. The second model (the FCM-MSapriori model) integrates FCM and a Multiple Support Apriori (MSapriori) approach to extract the FARs. These FARs provide the knowledge base to be utilized within the FIS for prediction evaluation. Experimental results have shown that the FCM-MSapriori model predicted the future values effectively and outperformed the FCM-Apriori model and other models reported in the literature.",
"title": ""
},
{
"docid": "a79424d0ec38c2355b288364f45f90de",
"text": "This paper mainly deals with various classification algorithms namely, Bayes. NaiveBayes, Bayes. BayesNet, Bayes. NaiveBayesUpdatable, J48, Randomforest, and Multi Layer Perceptron. It analyzes the hepatitis patients from the UC Irvine machine learning repository. The results of the classification model are accuracy and time. Finally, it concludes that the Naive Bayes performance is better than other classification techniques for hepatitis patients.",
"title": ""
}
] |
[
{
"docid": "05e754e0567bf6859d7a68446fc81bad",
"text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. How can doctors improve the presentation of statistical information so that patients can make well informed decisions?",
"title": ""
},
{
"docid": "daba02e791922ea8c20ebd22f5e592db",
"text": "For intrinsically diverse tasks, in which collecting extensive information from different aspects of a topic is required, searchers often have difficulty formulating queries to explore diverse aspects and deciding when to stop searching. With the goal of helping searchers discover unexplored aspects and find the appropriate timing for search stopping in intrinsically diverse tasks, we propose ScentBar, a query suggestion interface visualizing the amount of important information that a user potentially misses collecting from the search results of individual queries. We define the amount of missed information for a query as the additional gain that can be obtained from unclicked search results of the query, where gain is formalized as a set-wise metric based on aspect importance, aspect novelty, and per-aspect document relevance and is estimated by using a state-of-the-art algorithm for subtopic mining and search result diversification. Results of a user study involving 24 participants showed that the proposed interface had the following advantages when the gain estimation algorithm worked reasonably: (1) ScentBar users stopped examining search results after collecting a greater amount of relevant information; (2) they issued queries whose search results contained more missed information; (3) they obtained higher gain, particularly at the late stage of their sessions; and (4) they obtained higher gain per unit time. These results suggest that the simple query visualization helps make the search process of intrinsically diverse tasks more efficient, unless inaccurate estimates of missed information are visualized.",
"title": ""
},
{
"docid": "6e4dcb451292cc38cb72300a24135c1b",
"text": "This survey gives state-of-the-art of genetic algorithm (GA) based clustering techniques. Clustering is a fundamental and widely applied method in understanding and exploring a data set. Interest in clustering has increased recently due to the emergence of several new areas of applications including data mining, bioinformatics, web use data analysis, image analysis etc. To enhance the performance of clustering algorithms, Genetic Algorithms (GAs) is applied to the clustering algorithm. GAs are the best-known evolutionary techniques. The capability of GAs is applied to evolve the proper number of clusters and to provide appropriate clustering. This paper present some existing GA based clustering algorithms and their application to different problems and domains.",
"title": ""
},
{
"docid": "a8688afaad32401c6827d48e25750c43",
"text": "We study how to improve the accuracy and running time of top-N recommendation with collaborative filtering (CF). Unlike existing works that use mostly rated items (which is only a small fraction in a rating matrix), we propose the notion of pre-use preferences of users toward a vast amount of unrated items. Using this novel notion, we effectively identify uninteresting items that were not rated yet but are likely to receive very low ratings from users, and impute them as zero. This simple-yet-novel zero-injection method applied to a set of carefully-chosen uninteresting items not only addresses the sparsity problem by enriching a rating matrix but also completely prevents uninteresting items from being recommended as top-N items, thereby improving accuracy greatly. As our proposed idea is method-agnostic, it can be easily applied to a wide variety of popular CF methods. Through comprehensive experiments using the Movielens dataset and MyMediaLite implementation, we successfully demonstrate that our solution consistently and universally improves the accuracies of popular CF methods (e.g., item-based CF, SVD-based CF, and SVD++) by two to five orders of magnitude on average. Furthermore, our approach reduces the running time of those CF methods by 1.2 to 2.3 times when its setting produces the best accuracy. The datasets and codes that we used in experiments are available at: https://goo.gl/KUrmip.",
"title": ""
},
{
"docid": "15518edc9bde13f55df3192262c3a9bf",
"text": "Under the framework of the argumentation scheme theory (Walton, 1996), we developed annotation protocols for an argumentative writing task to support identification and classification of the arguments being made in essays. Each annotation protocol defined argumentation schemes (i.e., reasoning patterns) in a given writing prompt and listed questions to help evaluate an argument based on these schemes, to make the argument structure in a text explicit and classifiable. We report findings based on an annotation of 600 essays. Most annotation categories were applied reliably by human annotators, and some categories significantly contributed to essay score. An NLP system to identify sentences containing scheme-relevant critical questions was developed based on the human annotations.",
"title": ""
},
{
"docid": "1892f3624fa411622440f5ec7914343e",
"text": "Understanding the evolution of research topics is crucial to detect emerging trends in science. This paper proposes a new approach and a framework to discover the evolution of topics based on dynamic co-word networks and communities within them. The NEViewer software was developed according to this approach and framework, as compared to the existing studies and science mapping software tools, our work is innovative in three aspects: (a) the design of a longitudinal framework based on the dynamics of co-word communities; (b) it proposes a community labelling algorithm and community evolution verification algorithms; (c) and visualizes the evolution of topics at the macro and micro level respectively using alluvial diagrams and coloring networks. A case study in computer science and a careful assessment was implemented and demonstrating that the new method and the software NEViewer is feasible and effective.",
"title": ""
},
{
"docid": "1350f4e274947881f4562ab6596da6fd",
"text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.",
"title": ""
},
{
"docid": "934ca8aa2798afd6e7cd4acceeed839a",
"text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.",
"title": ""
},
{
"docid": "457a662fd9928cdb1353ce460cb63422",
"text": "Learning and generating Chinese poems is a charming yet challenging task. Traditional approaches involve various language modeling and machine translation techniques, however, they perform not as well when generating poems with complex pattern constraints, for example Song iambics, a famous type of poems that involve variable-length sentences and strict rhythmic patterns. This paper applies the attention-based sequence-tosequence model to generate Chinese Song iambics. Specifically, we encode the cue sentences by a bi-directional Long-Short Term Memory (LSTM) model and then predict the entire iambic with the information provided by the encoder, in the form of an attention-based LSTM that can regularize the generation process by the fine structure of the input cues. Several techniques are investigated to improve the model, including global context integration, hybrid style training, character vector initialization and adaptation. Both the automatic and subjective evaluation results show that our model indeed can learn the complex structural and rhythmic patterns of Song iambics, and the generation is rather successful.",
"title": ""
},
{
"docid": "b5b45aa1badbda386b12830c78909693",
"text": "BACKGROUND\nThe healthcare industry has become increasingly dependent on using information technology (IT) to manage its daily operations. Unexpected downtime of health IT systems could therefore wreak havoc and result in catastrophic consequences. Little is known, however, regarding the nature of failures of health IT.\n\n\nOBJECTIVE\nTo analyze historical health IT outage incidents as a means to better understand health IT vulnerabilities and inform more effective prevention and emergency response strategies.\n\n\nMETHODS\nWe studied news articles and incident reports publicly available on the internet describing health IT outage events that occurred in China. The data were qualitatively analyzed using a deductive grounded theory approach based on a synthesized IT risk model developed in the domain of information systems.\n\n\nRESULTS\nA total of 116 distinct health IT incidents were identified. A majority of them (69.8%) occurred in the morning; over 50% caused disruptions to the patient registration and payment collection functions of the affected healthcare facilities. The outpatient practices in tertiary hospitals seem to be particularly vulnerable to IT failures. Software defects and overcapacity issues, followed by malfunctioning hardware, were among the principal causes.\n\n\nCONCLUSIONS\nUnexpected health IT downtime occurs more and more often with the widespread adoption of electronic systems in healthcare. Risk identification and risk assessments are essential steps to developing preventive measures. Equally important is institutionalization of contingency plans as our data show that not all failures of health IT can be predicted and thus effectively prevented. The results of this study also suggest significant future work is needed to systematize the reporting of health IT outage incidents in order to promote transparency and accountability.",
"title": ""
},
{
"docid": "ec492f3ca84546c84a9ee8e1992b1baf",
"text": "Sketch is an important media for human to communicate ideas, which reflects the superiority of human intelligence. Studies on sketch can be roughly summarized into recognition and generation. Existing models on image recognition failed to obtain satisfying performance on sketch classification. But for sketch generation, a recent study proposed a sequence-to-sequence variational-auto-encoder (VAE) model called sketch-rnn which was able to generate sketches based on human inputs. The model achieved amazing results when asked to learn one category of object, such as an animal or a vehicle. However, the performance dropped when multiple categories were fed into the model. Here, we proposed a model called sketch-pix2seq which could learn and draw multiple categories of sketches. Two modifications were made to improve the sketch-rnn model: one is to replace the bidirectional recurrent neural network (BRNN) encoder with a convolutional neural network(CNN); the other is to remove the Kullback-Leibler divergence from the objective function of VAE. Experimental results showed that models with CNN encoders outperformed those with RNN encoders in generating human-style sketches. Visualization of the latent space illustrated that the removal of KL-divergence made the encoder learn a posterior of latent space that reflected the features of different categories. Moreover, the combination of CNN encoder and removal of KL-divergence, i.e., the sketchpix2seq model, had better performance in learning and generating sketches of multiple categories and showed promising results in creativity tasks.",
"title": ""
},
{
"docid": "f698eb36fb75c6eae220cf02e41bdc44",
"text": "In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "7d024e9ccf20923ade005970ddef1bcc",
"text": "Mamdani Fuzzy Model is an important technique in Computational Intelligence (CI) study. This paper presents an implementation of a supervised learning method based on membership function training in the context of Mamdani fuzzy models. Specifically, auto zoom function of a digital camera is modelled using Mamdani technique. The performance of control method is verified through a series of simulation and numerical results are provided as illustrations. Keywords-component: Mamdani fuzzy model, fuzzy logic, auto zoom, digital camera",
"title": ""
},
{
"docid": "2a61df18f9d3340d47073cda41da5822",
"text": "Link prediction is one of the fundamental problems in network analysis. In many applications, notably in genetics, a partially observed network may not contain any negative examples of absent edges, which creates a difficulty for many existing supervised learning approaches. We develop a new method which treats the observed network as a sample of the true network with different sampling rates for positive and negative examples. We obtain a relative ranking of potential links by their probabilities, utilizing information on node covariates as well as on network topology. Empirically, the method performs well under many settings, including when the observed network is sparse. We apply the method to a protein-protein interaction network and a school friendship network.",
"title": ""
},
{
"docid": "6bacccbba6bbb4a8d0b6c1de25399fef",
"text": "We propose a novel method to estimate a unique and repeatable reference frame in the context of 3D object recognition from a single viewpoint based on global descriptors. We show that the ability of defining a robust reference frame on both model and scene views allows creating descriptive global representations of the object view, with the beneficial effect of enhancing the spatial descriptiveness of the feature and its ability to recognize objects by means of a simple nearest neighbor classifier computed on the descriptor space. Moreover, the definition of repeatable directions can be deployed to efficiently retrieve the 6DOF pose of the objects in a scene. We experimentally demonstrate the effectiveness of the proposed method on a dataset including 23 scenes acquired with the Microsoft Kinect sensor and 25 full-3D models by comparing the proposed approach with state-of-the-art global descriptors. A substantial improvement is presented regarding accuracy in recognition and 6DOF pose estimation, as well as in terms of computational performance.",
"title": ""
},
{
"docid": "e28438e023fbcbb1c1a7bd2cda3213e1",
"text": "Recent studies provide evidence that Quality of Service (QoS) routing can provide increased network utilization compared to routing that is not sensitive to QoS requirements of traffic. However, there are still strong concerns about the increased cost of QoS routing, both in terms of more complex and frequent computations and increased routing protocol overhead. The main goals of this paper are to study these two cost components, and propose solutions that achieve good routing performance with reduced processing cost. First, we identify the parameters that determine the protocol traffic overhead, namely (a) policy for triggering updates, (b) sensitivity of this policy, and (c) clamp down timers that limit the rate of updates. Using simulation, we study the relative significance of these factors and investigate the relationship between routing performance and the amount of update traffic. In addition, we explore a range of design options to reduce the processing cost of QoS routing algorithms, and study their effect on routing performance. Based on the conclusions of these studies, we develop extensions to the basic QoS routing, that can achieve good routing performance with limited update generation rates. The paper also addresses the impact on the results of a number of secondary factors such as topology, high level admission control, and characteristics of network traffic.",
"title": ""
},
{
"docid": "8474b5b3ed5838e1d038e73579168f40",
"text": "For the first time to the best of our knowledge, this paper provides an overview of millimeter-wave (mmWave) 5G antennas for cellular handsets. Practical design considerations and solutions related to the integration of mmWave phased-array antennas with beam switching capabilities are investigated in detail. To experimentally examine the proposed methodologies, two types of mesh-grid phased-array antennas featuring reconfigurable horizontal and vertical polarizations are designed, fabricated, and measured at the 60 GHz spectrum. Afterward the antennas are integrated with the rest of the 60 GHz RF and digital architecture to create integrated mmWave antenna modules and implemented within fully operating cellular handsets under plausible user scenarios. The effectiveness, current limitations, and required future research areas regarding the presented mmWave 5G antenna design technologies are studied using mmWave 5G system benchmarks.",
"title": ""
},
{
"docid": "a56edeae4520c745003d5cd0baae7708",
"text": "A random access memory (RAM) uses n bits to randomly address N=2(n) distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(logN) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially less gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.",
"title": ""
},
{
"docid": "7848e4ab59f5789e3290c3ddc32eb4e2",
"text": "We present 3C-GAN: a novel multiple generators structures, that contains one conditional generator that generates a semantic part of an image conditional on its input label, and one context generator generates the rest of an image. Compared to original GAN model, this model has multiple generators and gives control over what its generators should generate. Unlike previous multi-generator models use a subsequent generation process, that one layer is generated given the previous layer, our model uses a process of generating different part of the images together. This way the model contains fewer parameters and the generation speed is faster. Specifically, the model leverages the label information to separate the object from the image correctly. Since the model conditional on the label information does not restrict to generate other parts of an image, we proposed a cost function that encourages the model to generate only the succinct part of an image in terms of label discrimination. We also found an exclusive prior on the mask of the model help separate the object. The experiments on MNIST, SVHN, and CelebA datasets show 3C-GAN can generate different objects with different generators simultaneously, according to the labels given to each generator.",
"title": ""
},
{
"docid": "2f307e10caab050596bc7c081ae95605",
"text": "Motion planning is a fundamental tool in robotics, used to generate collision-free, smooth, trajectories, while satisfying task-dependent constraints. In this paper, we present a novel approach to motion planning using Gaussian processes. In contrast to most existing trajectory optimization algorithms, which rely on a discrete state parameterization in practice, we represent the continuous-time trajectory as a sample from a Gaussian process (GP) generated by a linear time-varying stochastic differential equation. We then provide a gradient-based optimization technique that optimizes continuous-time trajectories with respect to a cost functional. By exploiting GP interpolation, we develop the Gaussian Process Motion Planner (GPMP), that finds optimal trajectories parameterized by a small number of states. We benchmark our algorithm against recent trajectory optimization algorithms by solving 7-DOF robotic arm planning problems in simulation and validate our approach on a real 7-DOF WAM arm.",
"title": ""
}
] |
scidocsrr
|
d87f406d133744ede250d3eb2a722164
|
On the Impact of Touch ID on iPhone Passcodes
|
[
{
"docid": "20563a2f75e074fe2a62a5681167bc01",
"text": "The introduction of a new generation of attractive touch screen-based devices raises many basic usability questions whose answers may influence future design and market direction. With a set of current mobile devices, we conducted three experiments focusing on one of the most basic interaction actions on touch screens: the operation of soft buttons. Issues investigated in this set of experiments include: a comparison of soft button and hard button performance; the impact of audio and vibrato-tactile feedback; the impact of different types of touch sensors on use, behavior, and performance; a quantitative comparison of finger and stylus operation; and an assessment of the impact of soft button sizes below the traditional 22 mm recommendation as well as below finger width.",
"title": ""
},
{
"docid": "b56b90d98b4b1b136e283111e9acf732",
"text": "Mobile phones are widely used nowadays and during the last years developed from simple phones to small computers with an increasing number of features. These result in a wide variety of data stored on the devices which could be a high security risk in case of unauthorized access. A comprehensive user survey was conducted to get information about what data is really stored on the mobile devices, how it is currently protected and if biometric authentication methods could improve the current state. This paper states the results from about 550 users of mobile devices. The analysis revealed a very low securtiy level of the devices. This is partly due to a low security awareness of their owners and partly due to the low acceptance of the offered authentication method based on PIN. Further results like the experiences with mobile thefts and the willingness to use biometric authentication methods as alternative to PIN authentication are also stated.",
"title": ""
}
] |
[
{
"docid": "30bad49dc45651010b49e78951827f6a",
"text": "In this paper we present a case study of frequent surges of unusually high rail-to-earth potential values at Taipei Rapid Transit System. The rail potential values observed and the resulting stray current flow associated with the diode-ground DC traction system during operation are contradictory to the moderate values on which the grounding of the DC traction system design was based. Thus we conducted both theoretical study and field measurements to obtain better understanding of the phenomenon, and to develop a more accurate algorithm for computing the rail-to-earth potential of the diode-ground DC traction systems.",
"title": ""
},
{
"docid": "733e3b25a53a7dc537df94a4cb5e473f",
"text": "Brain activity associated with attention sustained on the task of safe driving has received considerable attention recently in many neurophysiological studies. Those investigations have also accurately estimated shifts in drivers' levels of arousal, fatigue, and vigilance, as evidenced by variations in their task performance, by evaluating electroencephalographic (EEG) changes. However, monitoring the neurophysiological activities of automobile drivers poses a major measurement challenge when using a laboratory-oriented biosensor technology. This work presents a novel dry EEG sensor based mobile wireless EEG system (referred to herein as Mindo) to monitor in real time a driver's vigilance status in order to link the fluctuation of driving performance with changes in brain activities. The proposed Mindo system incorporates the use of a wireless and wearable EEG device to record EEG signals from hairy regions of the driver conveniently. Additionally, the proposed system can process EEG recordings and translate them into the vigilance level. The study compares the system performance between different regression models. Moreover, the proposed system is implemented using JAVA programming language as a mobile application for online analysis. A case study involving 15 study participants assigned a 90 min sustained-attention driving task in an immersive virtual driving environment demonstrates the reliability of the proposed system. Consistent with previous studies, power spectral analysis results confirm that the EEG activities correlate well with the variations in vigilance. Furthermore, the proposed system demonstrated the feasibility of predicting the driver's vigilance in real time.",
"title": ""
},
{
"docid": "fc6e5b83900d87fd5d6eec6d84d47939",
"text": "In this letter, we propose a low complexity linear precoding scheme for downlink multiuser MIMO precoding systems where there is no limit on the number of multiple antennas employed at both the base station and the users. In the proposed algorithm, we can achieve the precoder in two steps. In the first step, we balance the multiuser interference (MUI) and noise by carrying out a novel channel extension approach. In the second step, we further optimize the system performance assuming parallel SU MIMO channels. Simulation results show that the proposed algorithm can achieve elaborate performance while offering lower computational complexity.",
"title": ""
},
{
"docid": "c8b36dd0f892c750f17bc714d177f3d1",
"text": "A scheme for controlling parallel connected inverters in a stand-alone AC supply system is presented. A key feature of this scheme is that it uses only those variables which can be measured locally at the inverter, and does not need communication of control signals between the inverters. This feature is important in high reliability uninterruptible power supply (UPS) systems, and in large DC power sources connected to an AC distribution system. Real and reactive power sharing between inverters can be achieved by controlling two independent quantities at the inverter: the power angle and the fundamental inverter voltage magnitude.<<ETX>>",
"title": ""
},
{
"docid": "ad6bb165620dafb7dcadaca91c9de6b0",
"text": "This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.",
"title": ""
},
{
"docid": "85e6c9bc6f86560e45276df947db48aa",
"text": "Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire DGX-1 to learn successful strategies in Atari games in mere minutes, using both synchronous and asynchronous algorithms.",
"title": ""
},
{
"docid": "964437f82fc71cd9b3de4d2b70301f85",
"text": "We describe WordSeer, a tool whose goal is to help scholars and analysts discover patterns and formulate and test hypotheses about the contents of text collections, midway between what humanities scholars call a traditional \"close read'' and the new \"distant read\" or \"culturomics\" approach. To this end, WordSeer allows for highly flexible \"slicing and dicing\" (hence \"sliding\") across a text collection. The tool allows users to view text from different angles by selecting subsets of data, viewing those as visualizations, moving laterally to view other subsets of data, slicing into another view, expanding the viewed data by relaxing constraints, and so on. We illustrate the text sliding capabilities of the tool with examples from a case study in the field of humanities and social sciences -- an analysis of how U.S. perceptions of China and Japan changed over the last 30 years.",
"title": ""
},
{
"docid": "cebac1ab25aac9dab853be592cfaa214",
"text": "Enterprise Architecture (EA) is an area within Information Management that deals with the alignment of IT and business in an organization. It is very recent and new discipline emerged in the new millennium as a result of the lack of comprehensive architecture that can describe the relationships among elements of the enterprise encompassing People, Processes, Business and Technology. The main objective of this study is to assess the level of implementation of EA in the designated organization. This study focuses on the four architecture domains listed in The Open Group Architecture Framework (TOGAF) namely: (1)Business Architecture; (2)Data Architecture; (3)Application Architecture; and (4)Technology Architecture. The outcome of this study is a set of guideline of an EA which should help the organization in aligning its business and IT strategy. This study should also benefit those who want to understand more on TOGAF and the implementation of EA.",
"title": ""
},
{
"docid": "5d379223a7204a4074638f0d135ec59a",
"text": "Photovoltaic (PV) is one of the most promising renewable energy sources. To ensure secure operation and economic integration of PV in smart grids, accurate forecasting of PV power is an important issue. In this paper, we propose the use of long short-term memory recurrent neural network (LSTM-RNN) to accurately forecast the output power of PV systems. The LSTM networks can model the temporal changes in PV output power because of their recurrent architecture and memory units. The proposed method is evaluated using hourly datasets of different sites for a year. We compare the proposed method with three PV forecasting methods. The use of LSTM offers a further reduction in the forecasting error compared with the other methods. The proposed forecasting method can be a helpful tool for planning and controlling smart grids.",
"title": ""
},
{
"docid": "615dbb03f31acfce971a383fa54d7d12",
"text": "Objectives\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTarget Audience\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nScope\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains.",
"title": ""
},
{
"docid": "7208a2b257c7ba7122fd2e278dd1bf4a",
"text": "Abstract—This paper shows in detail the mathematical model of direct and inverse kinematics for a robot manipulator (welding type) with four degrees of freedom. Using the D-H parameters, screw theory, numerical, geometric and interpolation methods, the theoretical and practical values of the position of robot were determined using an optimized algorithm for inverse kinematics obtaining the values of the particular joints in order to determine the virtual paths in a relatively short time.",
"title": ""
},
{
"docid": "709c06739d20fe0a5ba079b21e5ad86d",
"text": "Bug triaging refers to the process of assigning a bug to the most appropriate developer to fix. It becomes more and more difficult and complicated as the size of software and the number of developers increase. In this paper, we propose a new framework for bug triaging, which maps the words in the bug reports (i.e., the term space) to their corresponding topics (i.e., the topic space). We propose a specialized topic modeling algorithm named <italic> multi-feature topic model (MTM)</italic> which extends Latent Dirichlet Allocation (LDA) for bug triaging. <italic>MTM </italic> considers product and component information of bug reports to map the term space to the topic space. Finally, we propose an incremental learning method named <italic>TopicMiner</italic> which considers the topic distribution of a new bug report to assign an appropriate fixer based on the affinity of the fixer to the topics. We pair <italic> TopicMiner</italic> with MTM (<italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math> <alternatives><inline-graphic xlink:href=\"xia-ieq1-2576454.gif\"/></alternatives></inline-formula></italic>). We have evaluated our solution on 5 large bug report datasets including GCC, OpenOffice, Mozilla, Netbeans, and Eclipse containing a total of 227,278 bug reports. We show that <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\"> $^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq2-2576454.gif\"/></alternatives></inline-formula> </italic> can achieve top-1 and top-5 prediction accuracies of 0.4831-0.6868, and 0.7686-0.9084, respectively. We also compare <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives> <inline-graphic xlink:href=\"xia-ieq3-2576454.gif\"/></alternatives></inline-formula></italic> with Bugzie, LDA-KL, SVM-LDA, LDA-Activity, and Yang et al.'s approach. The results show that <italic>TopicMiner<inline-formula> <tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq4-2576454.gif\"/> </alternatives></inline-formula></italic> on average improves top-1 and top-5 prediction accuracies of Bugzie by 128.48 and 53.22 percent, LDA-KL by 262.91 and 105.97 percent, SVM-LDA by 205.89 and 110.48 percent, LDA-Activity by 377.60 and 176.32 percent, and Yang et al.'s approach by 59.88 and 13.70 percent, respectively.",
"title": ""
},
{
"docid": "b94e461c6ac7883b9cf7123e58d04ae0",
"text": "a r t i c l e i n f o We introduce the term \" enclothed cognition \" to describe the systematic influence that clothes have on the wearer's psychological processes. We offer a potentially unifying framework to integrate past findings and capture the diverse impact that clothes can have on the wearer by proposing that enclothed cognition involves the co-occurrence of two independent factors—the symbolic meaning of the clothes and the physical experience of wearing them. As a first test of our enclothed cognition perspective, the current research explored the effects of wearing a lab coat. A pretest found that a lab coat is generally associated with atten-tiveness and carefulness. We therefore predicted that wearing a lab coat would increase performance on attention-related tasks. In Experiment 1, physically wearing a lab coat increased selective attention compared to not wearing a lab coat. In Experiments 2 and 3, wearing a lab coat described as a doctor's coat increased sustained attention compared to wearing a lab coat described as a painter's coat, and compared to simply seeing or even identifying with a lab coat described as a doctor's coat. Thus, the current research suggests a basic principle of enclothed cognition—it depends on both the symbolic meaning and the physical experience of wearing the clothes. \" What a strange power there is in clothing. \" ~Isaac Bashevis Singer Nobel Prize winning author Isaac Bashevis Singer asserts that the clothes we wear hold considerable power and sway. In line with this assertion, bestselling books such as Dress for Success by John T. Molloy and TV shows like TLC's What Not to Wear emphasize the power that clothes can have over others by creating favorable impressions. Indeed, a host of research has documented the effects that people's clothes have on the perceptions and reactions of others. High school students' clothing styles influence perceptions of academic prowess among peers and teachers (Behling & Williams, 1991). Teaching assistants who wear formal clothes are perceived as more intelligent, but as less interesting than teaching assistants who wear less formal clothes (Morris, Gorham, Cohen, & Huffman, 1996). When women dress in a masculine fashion during a recruitment interview, they are more likely to be hired (Forsythe, 1990), and when they dress sexily in prestigious jobs, they are perceived as less competent (Glick, Larsen, Johnson, & Branstiter, 2005). Clients are more likely to return to formally dressed therapists …",
"title": ""
},
{
"docid": "277e738fde3fea142ff93497d0065b10",
"text": "To construct a diversified search test collection, a set of possible subtopics (or intents) needs to be determined for each topic, in one way or another, and perintent relevance assessments need to be obtained. In the TREC Web Track Diversity Task, subtopics are manually developed at NIST, based on results of automatic click log analysis; in the NTCIR INTENT Task, intents are determined by manually clustering 'subtopics strings' returned by participating systems. In this study, we address the following research question: Does the choice of intents for a test collection affect relative performances of diversified search systems? To this end, we use the TREC 2012 Web Track Diversity Task data and the NTCIR-10 INTENT-2 Task data, which share a set of 50 topics but have different intent sets. Our initial results suggest that the choice of intents may affect relative performances, and that this choice may be far more important than how many intents are selected for each topic",
"title": ""
},
{
"docid": "003d004f57d613ff78bf39a35e788bf9",
"text": "Breast cancer is one of the most common cancer in women worldwide. It is typically diagnosed via histopathological microscopy imaging, for which image analysis can aid physicians for more effective diagnosis. Given a large variability in tissue appearance, to better capture discriminative traits, images can be acquired at different optical magnifications. In this paper, we propose an approach which utilizes joint colour-texture features and a classifier ensemble for classifying breast histopathology images. While we demonstrate the effectiveness of the proposed framework, an important objective of this work is to study the image classification across different optical magnification levels. We provide interesting experimental results and related discussions, demonstrating a visible classification invariance with cross-magnification training-testing. Along with magnification-specific model, we also evaluate the magnification independent model, and compare the two to gain some insights.",
"title": ""
},
{
"docid": "0bcc5beb8bada39446c1dd32d0a65dec",
"text": "Clustering is a powerful tool in data analysis, but it is often difficult to find a grouping that aligns with a user’s needs. To address this, several methods incorporate constraints obtained from users into clustering algorithms, but unfortunately do not apply to hierarchical clustering. We design an interactive Bayesian algorithm that incorporates user interaction into hierarchical clustering while still utilizing the geometry of the data by sampling a constrained posterior distribution over hierarchies. We also suggest several ways to intelligently query a user. The algorithm, along with the querying schemes, shows promising results on real data.",
"title": ""
},
{
"docid": "3fd9fd52be3153fe84f2ea6319665711",
"text": "The theories of supermodular optimization and games provide a framework for the analysis of systems marked by complementarity. We summarize the principal results of these theories and indicate their usefulness by applying them to study the shift to 'modern manufacturing'. We also use them to analyze the characteristic features of the Lincoln Electric Company's strategy and structure.",
"title": ""
},
{
"docid": "a45c93e89cc3df3ebec59eb0c81192ec",
"text": "We study a variant of the capacitated vehicle routing problem where the cost over each arc is defined as the product of the arc length and the weight of the vehicle when it traverses that arc. We propose two new mixed integer linear programming formulations for the problem: an arc-load formulation and a set partitioning formulation based on q-routes with additional constraints. A family of cycle elimination constraints are derived for the arc-load formulation. We then compare the linear programming (LP) relaxations of these formulations with the twoindex one-commodity flow formulation proposed in the literature. In particular, we show that the arc-load formulation with the new cycle elimination constraints gives the same LP bound as the set partitioning formulation based on 2-cycle-free q-routes, which is stronger than the LP bound given by the two-index one-commodity flow formulation. We propose a branchand-cut algorithm for the arc-load formulation, and a branch-cut-and-price algorithm for the set partitioning formulation strengthened by additional constraints. Computational results on instances from the literature demonstrate that a significant improvement can be achieved by the branch-cut-and-price algorithm over other methods.",
"title": ""
},
{
"docid": "97968acf486f3f4bcdbccdfcd116dabb",
"text": "Disruption of electric power operations can be catastrophic on national security and the economy. Due to the complexity of widely dispersed assets and the interdependences among computer, communication, and power infrastructures, the requirement to meet security and quality compliance on operations is a challenging issue. In recent years, the North American Electric Reliability Corporation (NERC) established a cybersecurity standard that requires utilities' compliance on cybersecurity of control systems. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). In this paper, a comprehensive survey on cybersecurity of critical infrastructures is reported. A supervisory control and data acquisition security framework with the following four major components is proposed: (1) real-time monitoring; (2) anomaly detection; (3) impact analysis; and (4) mitigation strategies. In addition, an attack-tree-based methodology for impact analysis is developed. The attack-tree formulation based on power system control networks is used to evaluate system-, scenario -, and leaf-level vulnerabilities by identifying the system's adversary objectives. The leaf vulnerability is fundamental to the methodology that involves port auditing or password strength evaluation. The measure of vulnerabilities in the power system control framework is determined based on existing cybersecurity conditions, and then, the vulnerability indices are evaluated.",
"title": ""
},
{
"docid": "60da71841669948e0a57ba4673693791",
"text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.",
"title": ""
}
] |
scidocsrr
|
7499476ab60378a53aa9ef1585b520c4
|
From Temporary Competitive Advantage to Sustainable Competitive Advantage
|
[
{
"docid": "7a9c163d5efbe1bf1d7178bb5d7116a0",
"text": "This paper examines interfirm knowledge transfers within strategic alliances. Using a new measure of changes in alliance partners' technological capabilities, based on the citation patterns of their patent portfolios. we analyze changes in the extent to which partner firms' technological resources 'overlap' as a result of alliance participation. This measure allows us to test hypothesesfrom the literature on interfirm knowledge transfer in alliances, with interesting results: we find support for some elements of this 'received wisdom'-equity arrangements promote greater knowledge transfer, and 'absorptive capacity' helps explain the extent of technological capability transfer, at least in some alliances. But the results also suggest limits to the 'capabilities acquisition' view of strategic alliances. Consistent with the argument that alliance activity can promote increased specialization, we find that the capabilities of partner firms become more divergent in a substantial subset of alliances.",
"title": ""
}
] |
[
{
"docid": "007a42bdf781074a2d00d792d32df312",
"text": "This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions.",
"title": ""
},
{
"docid": "485484dcbb0113e9971ad4d37802cf59",
"text": "Due to the rise of businesses utilizing the sharing economy concept, it is important to better understand the motivational factors that drive and hinder collaborative consumption in the travel and tourism marketplace. Based on responses from 754 adult travellers residing in the US, drivers and deterrents of the use of peer-to-peer accommodation rental services were identified. Factors that deter the use of peer-to-peer accommodation rental services include lack of trust, lack of efficacy with regards to technology, and lack of economic benefits. The motivations that drive the use of peer-to-peer accommodation include the societal aspects of sustainability and community, as well as economic benefits. Based on the empirical evidence, this study suggests several propositions for future studies and implications for tourism destinations and hospitality businesses on how to manage collaborative consumption.",
"title": ""
},
{
"docid": "7a7b8d92cea993b3d2794f43eb8e448d",
"text": "This article investigates the impact of user homophily on the social process of information diffusion in online social media. Over several decades, social scientists have been interested in the idea that similarity breeds connection—precisely known as “homophily”. “Homophily”, has been extensively studied in the social sciences and refers to the idea that users in a social system tend to bond more with ones who are “similar” to them than to ones who are dissimilar. The key observation is that homophily structures the ego-networks of individuals and impacts their communication behavior. It is therefore likely to effect the mechanisms in which information propagates among them. To this effect, we investigate the interplay between homophily along diverse user attributes and the information diffusion process on social media. Our approach has three steps. First we extract several diffusion characteristics along categories such as user-based (volume, number of seeds), topology-based (reach, spread) and time (rate)—corresponding to the baseline social graph as well as graphs filtered on different user attributes (e.g. location, activity behavior). Second, we propose a Dynamic Bayesian Network based framework to predict diffusion characteristics at a future time slice. Third, the impact of attribute homophily is quantified by the ability of the predicted characteristics in explaining actual diffusion, and external temporal variables, including trends in search and news. Experimental results on a large Twitter dataset are promising and demonstrate that the choice of the homophilous attribute can impact the prediction of information diffusion, given a specific metric and a topic. In most cases, attribute homophily is able to explain the actual diffusion and external trends by ∼ 15 − 25% over cases when homophily is not considered. Our method also outperforms baseline techniques in predicting diffusion characteristics subject to homophily, by ∼ 13 − 50%. ∗School of Computing, Informatics & Decision Systems Engineering, Arizona State University, Tempe, Arizona, USA. (munmun@asu.edu). †School of Arts, Media & Engineering, Arizona State University, Tempe, Arizona, USA. (hari.sundaram@asu.edu). ‡Collaborative Applications Research, Avaya Labs Research, Basking Ridge, New Jersey, USA. (ajita@avaya.com). §Collaborative Applications Research, Avaya Labs Research, Basking Ridge, New Jersey, USA. (doree@avaya.com). ¶School of Arts, Media & Engineering, Arizona State University, Tempe, Arizona, USA. (aisling.kelliher@asu.edu). 1 ar X iv :1 00 6. 17 02 v1 [ cs .C Y ] 9 J un 2 01 0",
"title": ""
},
{
"docid": "be7d32aeffecc53c5d844a8f90cd5ce0",
"text": "Wordnets play a central role in many natural language processing tasks. This paper introduces a multilingual editing system for the Open Multilingual Wordnet (OMW: Bond and Foster, 2013). Wordnet development, like most lexicographic tasks, is slow and expensive. Moving away from the original Princeton Wordnet (Fellbaum, 1998) development workflow, wordnet creation and expansion has increasingly been shifting towards an automated and/or interactive system facilitated task. In the particular case of human edition/expansion of wordnets, a few systems have been developed to aid the lexicographers’ work. Unfortunately, most of these tools have either restricted licenses, or have been designed with a particular language in mind. We present a webbased system that is capable of multilingual browsing and editing for any of the hundreds of languages made available by the OMW. All tools and guidelines are freely available under an open license.",
"title": ""
},
{
"docid": "8ee0764d45e512bfc6b0273f7e90d2c1",
"text": "This work introduces a new dataset and framework for the exploration of topological data analysis (TDA) techniques applied to time-series data. We examine the end-toend TDA processing pipeline for persistent homology applied to time-delay embeddings of time series – embeddings that capture the underlying system dynamics from which time series data is acquired. In particular, we consider stability with respect to time series length, the approximation accuracy of sparse filtration methods, and the discriminating ability of persistence diagrams as a feature for learning. We explore these properties across a wide range of time-series datasets spanning multiple domains for single source multi-segment signals as well as multi-source single segment signals. Our analysis and dataset captures the entire TDA processing pipeline and includes time-delay embeddings, persistence diagrams, topological distance measures, as well as kernels for similarity learning and classification tasks for a broad set of time-series data sources. We outline the TDA framework and rationale behind the dataset and provide insights into the role of TDA for time-series analysis as well as opportunities for new work.",
"title": ""
},
{
"docid": "025953bb13772965bd757216f58d2bed",
"text": "Designers use third-party intellectual property (IP) cores and outsource various steps in their integrated circuit (IC) design flow, including fabrication. As a result, security vulnerabilities have been emerging, forcing IC designers and end-users to reevaluate their trust in hardware. If an attacker gets hold of an unprotected design, attacks such as reverse engineering, insertion of malicious circuits, and IP piracy are possible. In this paper, we shed light on the vulnerabilities in very large scale integration (VLSI) design and fabrication flow, and survey design-for-trust (DfTr) techniques that aim at regaining trust in IC design. We elaborate on four DfTr techniques: logic encryption, split manufacturing, IC camouflaging, and Trojan activation. These techniques have been developed by reusing VLSI test principles.",
"title": ""
},
{
"docid": "a90909570959ade87dd46186a0990a9e",
"text": "DNA methylation is among the best studied epigenetic modifications and is essential to mammalian development. Although the methylation status of most CpG dinucleotides in the genome is stably propagated through mitosis, improvements to methods for measuring methylation have identified numerous regions in which it is dynamically regulated. In this Review, we discuss key concepts in the function of DNA methylation in mammals, stemming from more than two decades of research, including many recent studies that have elucidated when and where DNA methylation has a regulatory role in the genome. We include insights from early development, embryonic stem cells and adult lineages, particularly haematopoiesis, to highlight the general features of this modification as it participates in both global and localized epigenetic regulation.",
"title": ""
},
{
"docid": "43ac7e674624615c9906b2bd58b72b7b",
"text": "OBJECTIVE\nTo develop a method enabling human-like, flexible supervisory control via delegation to automation.\n\n\nBACKGROUND\nReal-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance.\n\n\nMETHOD\nWe review problems with static and adaptive (as opposed to \"adaptable\") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a \"level of automation\" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described.\n\n\nRESULTS\nOn the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's \"playbook.\" Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles.\n\n\nCONCLUSION\nDelegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level.\n\n\nAPPLICATION\nMost applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.",
"title": ""
},
{
"docid": "4f5e3933100a8dcec75ceb058faaa481",
"text": "Reinforced Concrete Frames are the most commonly adopted buildings construction practices in India. With growing economy, urbanisation and unavailability of horizontal space increasing cost of land and need for agricultural land, high-rise sprawling structures have become highly preferable in Indian buildings scenario, especially in urban. With high-rise structures, not only the building has to take up gravity loads, but as well as lateral forces. Many important Indian cities fall under high risk seismic zones, hence strengthening of buildings for lateral forces is a prerequisite. In this study the aim is to analyze the response of a high-rise structure to ground motion using Response Spectrum Analysis. Different models, that is, bare frame, brace frame and shear wall frame are considered in Staad Pro. and change in the time period, stiffness, base shear, storey drifts and top-storey deflection of the building is observed and compared.",
"title": ""
},
{
"docid": "b7dfec026a9fe18eb2cd8bdfd6cfa416",
"text": "Based on the hypothesis that frame-semantic parsing and event extraction are structurally identical tasks, we retrain SEMAFOR, a stateof-the-art frame-semantic parsing system to predict event triggers and arguments. We describe how we change SEMAFOR to be better suited for the new task and show that it performs comparable to one of the best systems in event extraction. We also describe a bias in one of its models and propose a feature factorization which is better suited for this model.",
"title": ""
},
{
"docid": "a252ec33139d9489133b91c2551a694f",
"text": "The lucrative rewards of security penetrations into large organizations have motivated the development and use of many sophisticated rootkit techniques to maintain an attacker's presence on a compromised system. Due to the evasive nature of such infections, detecting these rootkit infestations is a problem facing modern organizations. While many approaches to this problem have been proposed, various drawbacks that range from signature generation issues, to coverage, to performance, prevent these approaches from being ideal solutions.\n In this paper, we present Blacksheep, a distributed system for detecting a rootkit infestation among groups of similar machines. This approach was motivated by the homogenous natures of many corporate networks. Taking advantage of the similarity amongst the machines that it analyses, Blacksheep is able to efficiently and effectively detect both existing and new infestations by comparing the memory dumps collected from each host.\n We evaluate Blacksheep on two sets of memory dumps. One set is taken from virtual machines using virtual machine introspection, mimicking the deployment of Blacksheep on a cloud computing provider's network. The other set is taken from Windows XP machines via a memory acquisition driver, demonstrating Blacksheep's usage under more challenging image acquisition conditions. The results of the evaluation show that by leveraging the homogeneous nature of groups of computers, it is possible to detect rootkit infestations.",
"title": ""
},
{
"docid": "6f9be23e33910d44551b5befa219e557",
"text": "The Lecture Notes are used for the a short course on the theory and applications of the lattice Boltzmann methods for computational uid dynamics taugh by the author at Institut f ur Computeranwendungen im Bauingenieurwesen (CAB), Technischen Universitat Braunschweig, during August 7 { 12, 2003. The lectures cover the basic theory of the lattice Boltzmann equation and its applications to hydrodynamics. Lecture One brie y reviews the history of the lattice gas automata and the lattice Boltzmann equation and their connections. Lecture Two provides an a priori derivation of the lattice Boltzmann equation, which connects the lattice Boltzmann equation to the continuous Boltzmann equation and demonstrates that the lattice Boltzmann equation is indeed a special nite di erence form of the Boltzmann equation. Lecture Two also includes the derivation of the lattice Boltzmann model for nonideal gases from the Enskog equation for dense gases. Lecture Three studies the generalized lattice Boltzmann equation with multiple relaxation times. A summary is provided at the end of each Lecture. Lecture Four discusses the uid-solid boundary conditions in the lattice Boltzmann methods. Applications of the lattice Boltzmann mehod to particulate suspensions, turbulence ows, and other ows are also shown. An Epilogue on the rationale of the lattice Boltzmann method is given. Some key references in the literature is also provided.",
"title": ""
},
{
"docid": "068381a40679de50f0a8cdb4be50a2a2",
"text": "The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.",
"title": ""
},
{
"docid": "830abfc28745f469cd24bb730111afcb",
"text": "User interface (UI) is point of interaction between user and computer software. The success and failure of a software application depends on User Interface Design (UID). Possibility of using a software, easily using and learning are issues influenced by UID. The UI is significant in designing of educational software (e-Learning). Principles and concepts of learning should be considered in addition to UID principles in UID for e-learning. In this regard, to specify the logical relationship between education, learning, UID and multimedia at first we readdress the issues raised in previous studies. It is followed by examining the principle concepts of e-learning and UID. Then, we will see how UID contributes to e-learning through the educational software built by authors. Also we show the way of using UI to improve learning and motivating the learners and to improve the time efficiency of using e-learning software. Keywords—e-Learning, User Interface Design, Self learning, Educational Multimedia",
"title": ""
},
{
"docid": "7c5f1b12f540c8320587ead7ed863ee5",
"text": "This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The randomly occurring controller gain fluctuation phenomenon is investigated for non-fragile strategy. Moreover, the mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.",
"title": ""
},
{
"docid": "8d08a464c75a8da6de159c0f0e46d447",
"text": "A License plate recognition (LPR) system can be divided into the following steps: preprocessing, plate region extraction, plate region thresholding, character segmentation, character recognition and post-processing. For step 2, a combination of color and shape information of plate is used and a satisfactory extraction result is achieved. For step 3, first channel is selected, then threshold is computed and finally the region is thresholded. For step 4, the character is segmented along vertical, horizontal direction and some tentative optimizations are applied. For step 5, minimum Euclidean distance based template matching is used. And for those confusing characters such as '8' & 'B' and '0' & 'D', a special processing is necessary. And for the final step, validity is checked by machine and manual. The experiment performed by program based on aforementioned algorithms indicates that our LPR system based on color image processing is quite quick and accurate.",
"title": ""
},
{
"docid": "a6499aad878777373006742778145ddb",
"text": "The very term 'Biotechnology' elicits a range of emotions, from wonder and awe to downright fear and hostility. This is especially true among non-scientists, particularly in respect of agricultural and food biotechnology. These emotions indicate just how poorly understood agricultural biotechnology is and the need for accurate, dispassionate information in the public sphere to allow a rational public debate on the actual, as opposed to the perceived, risks and benefits of agricultural biotechnology. This review considers first the current state of public knowledge on agricultural biotechnology, and then explores some of the popular misperceptions and logical inconsistencies in both Europe and North America. I then consider the problem of widespread scientific illiteracy, and the role of the popular media in instilling and perpetuating misperceptions. The impact of inappropriate efforts to provide 'balance' in a news story, and of belief systems and faith also impinges on public scientific illiteracy. Getting away from the abstract, we explore a more concrete example of the contrasting approach to agricultural biotechnology adoption between Europe and North America, in considering divergent approaches to enabling coexistence in farming practices. I then question who benefits from agricultural biotechnology. Is it only the big companies, or is it society at large--and the environment--also deriving some benefit? Finally, a crucial aspect in such a technologically complex issue, ordinary and intelligent non-scientifically trained consumers cannot be expected to learn the intricacies of the technology to enable a personal choice to support or reject biotechnology products. The only reasonable and pragmatic alternative is to place trust in someone to provide honest advice. But who, working in the public interest, is best suited to provide informed and accessible, but objective, advice to wary consumers?",
"title": ""
},
{
"docid": "6522a164502dbefa1e915dacc53e8a94",
"text": "Whilst the future for social media in chronic disease management appears to be optimistic, there is limited concrete evidence indicating whether and how social media use significantly improves patient outcomes. This review examines the health outcomes and related effects of using social media, while also exploring the unique affordances underpinning these effects. Few studies have investigated social media's potential in chronic disease, but those we found indicate impact on health status and other effects are positive, with none indicating adverse events. Benefits have been reported for psychosocial management via the ability to foster support and share information; however, there is less evidence of benefits for physical condition management. We found that studies covered a very limited range of social media platforms and that there is an ongoing propensity towards reporting investigations of earlier social platforms, such as online support groups (OSG), discussion forums and message boards. Finally, it is hypothesized that for social media to form a more meaningful part of effective chronic disease management, interventions need to be tailored to the individualized needs of sufferers. The particular affordances of social media that appear salient in this regard from analysis of the literature include: identity, flexibility, structure, narration and adaptation. This review suggests further research of high methodological quality is required to investigate the affordances of social media and how these can best serve chronic disease sufferers. Evidence-based practice (EBP) using social media may then be considered.",
"title": ""
},
{
"docid": "706bf586392b754863060542cbd77fa3",
"text": "SAX (Symbolic Aggregate approXimation) is one of the main symbolization technique for time series. A well-known limitation of SAX is that trends are not taken into account in the symbolization. This paper proposes 1d-SAX a method to represent a time series as a sequence of symbols that contain each an information about the average and the trend of the series on a segment. We compare the efficiency of SAX and 1d-SAX in terms of i) goodness-of-fit and ii) retrieval performance for querying a time series database with an asymmetric scheme. The results show that 1d-SAX improves retrieval performance using equal quantity of information, especially when the compression rate increases.",
"title": ""
},
{
"docid": "29199ac45d4aa8035fd03e675406c2cb",
"text": "This work presents an autonomous mobile robot in order to cover an unknown terrain “randomly”, namely entirely, unpredictably and evenly. This aim is very important, especially in military missions, such as the surveillance of terrains, the terrain exploration for explosives and the patrolling for intrusion in military facilities. The “heart” of the proposed robot is a chaotic motion controller, which is based on a chaotic true random bit generator. This generator has been implemented with a microcontroller, which converts the produced chaotic bit sequence, to the robot's motion. Experimental results confirm that this approach, with an appropriate sensor for obstacle avoidance, can obtain very satisfactory results in regard to the fast scanning of the robot’s workspace with unpredictable way. Key-Words: Autonomous mobile robot, terrain coverage, microcontroller, random bit generator, nonlinear system, chaos, Logistic map.",
"title": ""
}
] |
scidocsrr
|
f400e12bd55177a08fd01b717a6787f1
|
Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning
|
[
{
"docid": "775e3aa5bd4991f227d239e01faf7fad",
"text": "We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machineproduced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality. We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-bysegment correlation, and show that METEOR gets an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement on using simply unigramprecision, unigram-recall and their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules.",
"title": ""
},
{
"docid": "c879ee3945592f2e39bb3306602bb46a",
"text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"title": ""
},
{
"docid": "707f4e77afa200a38a5db593e5069689",
"text": "Bidirectional recurrent neural networks (RNN) are trained to predict both in the positive and negative time directions simultaneously. They have not been used commonly in unsupervised tasks, because a probabilistic interpretation of the model has been difficult. Recently, two different frameworks, GSN and NADE, provide a connection between reconstruction and probabilistic modeling, which makes the interpretation possible. As far as we know, neither GSN or NADE have been studied in the context of time series before. As an example of an unsupervised task, we study the problem of filling in gaps in high-dimensional time series with complex dynamics. Although unidirectional RNNs have recently been trained successfully to model such time series, inference in the negative time direction is non-trivial. We propose two probabilistic interpretations of bidirectional RNNs that can be used to reconstruct missing gaps efficiently. Our experiments on text data show that both proposed methods are much more accurate than unidirectional reconstructions, although a bit less accurate than a computationally complex bidirectional Bayesian inference on the unidirectional RNN. We also provide results on music data for which the Bayesian inference is computationally infeasible, demonstrating the scalability of the proposed methods.",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
}
] |
[
{
"docid": "78ffcec1e3d5164d7360aa8a93848fc4",
"text": "During a long period of time we are combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from over-fitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel serves as the first work which adds noises on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.",
"title": ""
},
{
"docid": "1a154992369fc30c36613fc811df53ac",
"text": "Speech recognition is a subjective phenomenon. Despite being a huge research in this field, this process still faces a lot of problem. Different techniques are used for different purposes. This paper gives an overview of speech recognition process. Various progresses have been done in this field. In this work of project, it is shown that how the speech signals are recognized using back propagation algorithm in neural network. Voices of different persons of various ages",
"title": ""
},
{
"docid": "d038c7b29701654f8ee908aad395fe8c",
"text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.",
"title": ""
},
{
"docid": "ffede4ad022d6ea4006c2e123807e89f",
"text": "Awareness about the energy consumption of appliances can help to save energy in households. Non-intrusive Load Monitoring (NILM) is a feasible approach to provide consumption feedback at appliance level. In this paper, we evaluate a broad set of features for electrical appliance recognition, extracted from high frequency start-up events. These evaluations were applied on several existing high frequency energy datasets. To examine clean signatures, we ran all experiments on two datasets that are based on isolated appliance events; more realistic results were retrieved from two real household datasets. Our feature set consists of 36 signatures from related work including novel approaches, and from other research fields. The results of this work include a stand-alone feature ranking, promising feature combinations for appliance recognition in general and appliance-wise performances.",
"title": ""
},
{
"docid": "364124b0bc3a2af0e1a7a837a4344f55",
"text": "We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of Adrian E. Raftery is Professor of Statistics and Sociology, David Madigan is Assistant Professor of Statistics, both at the Department of Statistics,University of Washington, Box 354322, Seattle, WA 98195-4322. Jennifer Hoeting is Assistant Professor of Statistics at the Department of Statistics, Colorado State University, Fort Collins, CO 80523. The research of Raftery and Hoeting was partially supported by ONR Contract N-00014-91-J-1074. Madigan's research was partially supported by NSF grant no. DMS 92111627. The authors are grateful to Danika Lew for research assistance and the Editor, the Associate Editor, two anonymous referees and David Draper for very helpful comments that greatly improved the article.",
"title": ""
},
{
"docid": "ce32d9fb88faa8730ebec0811c625a35",
"text": "With the explosive growth of audio music everywhere over the Internet, it is becoming more important to be able to classify or retrieve audio music based on their key components, such as vocal pitch for common popular music. This paper proposes a novel and effective two-stage approach to singing pitch extraction, which involves singing voice separation and pitch tracking for monaural polyphonic audio music. The first stage extracts singing voice from the songs by using deep neural networks in a supervised setting. Then the second stage estimates the pitch based on the extracted singing voice in a robust manner. Experimental results based on MIR-1K showed that the proposed approach outperforms a previous state-of-the-art approach in raw-pitch accuracy. Moreover, the proposed approach has been submitted to the singing voice separation and audio melody extraction tasks of Music Information Retrieval Evaluation eXchange (MIREX) in 2015. The results of the competition shows that the proposed approach is superior to other submitted algorithms, which demonstrates the feasibility of the method for further applications in music processing.",
"title": ""
},
{
"docid": "12fa7a50132468598cf20ac79f51b540",
"text": "As medical organizations modernize their operations, they are increasingly adopting electronic health records (EHRs) and deploying new health information technology systems that create, gather, and manage their information. As a result, the amount of data available to clinicians, administrators, and researchers in the healthcare system continues to grow at an unprecedented rate. However, despite the substantial evidence showing the benefits of EHR adoption, e-prescriptions, and other components of health information exchanges, healthcare providers often report only modest improvements in their ability to make better decisions by using more comprehensive clinical information. The large volume of clinical data now being captured for each patient poses many challenges to (a) clinicians trying to combine data from different disparate systems and make sense of the patient’s condition within the context of the patient’s medical history, (b) administrators trying to make decisions grounded in data, (c) researchers trying to understand differences in population outcomes, and (d) patients trying to make use of their own medical data. In fact, despite the many hopes that access to more information would lead to more informed decisions, access to comprehensive and large-scale clinical data resources has instead made some analytical processes even more difficult. Visual analytics is an emerging discipline that has shown significant promise in addressing many of these information overload challenges. Visual analytics is the science of analytical reasoning facilitated by advanced interactive visual interfaces. In order to facilitate reasoning over, and interpretation of, complex data, visual analytics techniques combine concepts from data mining, machine learning, human computing interaction, and human cognition. As the volume of healthrelated data continues to grow at unprecedented rates and new information systems are deployed to those already overrun with too much data, there is a need for exploring how visual analytics methods can be used to avoid information overload. Information overload is the problem that arises when individuals try to analyze a number of variables that surpass the limits of human cognition. Information overload often leads to users ignoring, overlooking, or misinterpreting crucial information. The information overload problem is widespread in the healthcare domain and can result in incorrect interpretations of data, wrong diagnoses, and missed warning signs of impending changes to patient conditions. The multi-modal and heterogeneous properties of EHR data together with the frequency of redundant, irrelevant, and subjective measures pose significant challenges to users trying to synthesize the information and obtain actionable insights. Yet despite these challenges, the promise of big data in healthcare remains. There is a critical need to support research and pilot projects to study effective ways of using visual analytics to support the analysis of large amounts of medical data. Currently new interactive interfaces are being developed to unlock the value of large-scale clinical databases for a wide variety of different tasks. For instance, visual analytics could help provide clinicians with more effective ways to combine the longitudinal clinical data with the patient-generated health data to better understand patient progression. Patients could be supported in understanding personalized wellness plans and comparing their health measurements against similar patients. 
Researchers could use visual analytics tools to help perform population-based analysis and obtain insights from large amounts of clinical data. Hospital administrators could use visual analytics to better understand the productivity of an organization, gaps in care, outcomes measurements, and patient satisfaction. Visual analytics systems—by combining advanced interactive visualization methods with statistical inference and correlation models—have the potential to support intuitive analysis for all of these user populations while masking the underlying complexity of the data. This special focus issue of JAMIA is dedicated to new research, applications, case studies, and approaches that use visual analytics to support the analysis of complex clinical data.",
"title": ""
},
{
"docid": "4290b4ba8000aeaf24cd7fb8640b4570",
"text": "Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.",
"title": ""
},
{
"docid": "332db7a0d5bf73f65e55c6f2e97dd22c",
"text": "The knowledge of surface electromyography (SEMG) and the number of applications have increased considerably during the past ten years. However, most methodological developments have taken place locally, resulting in different methodologies among the different groups of users.A specific objective of the European concerted action SENIAM (surface EMG for a non-invasive assessment of muscles) was, besides creating more collaboration among the various European groups, to develop recommendations on sensors, sensor placement, signal processing and modeling. This paper will present the process and the results of the development of the recommendations for the SEMG sensors and sensor placement procedures. Execution of the SENIAM sensor tasks, in the period 1996-1999, has been handled in a number of partly parallel and partly sequential activities. A literature scan was carried out on the use of sensors and sensor placement procedures in European laboratories. In total, 144 peer-reviewed papers were scanned on the applied SEMG sensor properties and sensor placement procedures. This showed a large variability of methodology as well as a rather insufficient description. A special workshop provided an overview on the scientific and clinical knowledge of the effects of sensor properties and sensor placement procedures on the SEMG characteristics. Based on the inventory, the results of the topical workshop and generally accepted state-of-the-art knowledge, a first proposal for sensors and sensor placement procedures was defined. Besides containing a general procedure and recommendations for sensor placement, this was worked out in detail for 27 different muscles. This proposal was evaluated in several European laboratories with respect to technical and practical aspects and also sent to all members of the SENIAM club (>100 members) together with a questionnaire to obtain their comments. Based on this evaluation the final recommendations of SENIAM were made and published (SENIAM 8: European recommendations for surface electromyography, 1999), both as a booklet and as a CD-ROM. In this way a common body of knowledge has been created on SEMG sensors and sensor placement properties as well as practical guidelines for the proper use of SEMG.",
"title": ""
},
{
"docid": "d004de75764e87fe246617cb7e3259a6",
"text": "OBJECTIVE\nClinical decision-making regarding the prevention of depression is complex for pregnant women with histories of depression and their health care providers. Pregnant women with histories of depression report preference for nonpharmacological care, but few evidence-based options exist. Mindfulness-based cognitive therapy has strong evidence in the prevention of depressive relapse/recurrence among general populations and indications of promise as adapted for perinatal depression (MBCT-PD). With a pilot randomized clinical trial, our aim was to evaluate treatment acceptability and efficacy of MBCT-PD relative to treatment as usual (TAU).\n\n\nMETHOD\nPregnant adult women with depression histories were recruited from obstetric clinics at 2 sites and randomized to MBCT-PD (N = 43) or TAU (N = 43). Treatment acceptability was measured by assessing completion of sessions, at-home practice, and satisfaction. Clinical outcomes were interview-based depression relapse/recurrence status and self-reported depressive symptoms through 6 months postpartum.\n\n\nRESULTS\nConsistent with predictions, MBCT-PD for at-risk pregnant women was acceptable based on rates of completion of sessions and at-home practice assignments, and satisfaction with services was significantly higher for MBCT-PD than TAU. Moreover, at-risk women randomly assigned to MBCT-PD reported significantly improved depressive outcomes compared with participants receiving TAU, including significantly lower rates of depressive relapse/recurrence and lower depressive symptom severity during the course of the study.\n\n\nCONCLUSIONS\nMBCT-PD is an acceptable and clinically beneficial program for pregnant women with histories of depression; teaching the skills and practices of mindfulness meditation and cognitive-behavioral therapy during pregnancy may help to reduce the risk of depression during an important transition in many women's lives.",
"title": ""
},
{
"docid": "c2b1644546c7adec2ff9cab9fec846fe",
"text": "H.264/AVC, the result of the collaboration between the ISO/IEC Moving Picture Experts Group and the ITU-T Video Coding Experts Group, is the latest standard for video coding. The goals of this standardization effort were enhanced compression efficiency, network friendly video representation for interactive (video telephony) and non-interactive applications (broadcast, streaming, storage, video on demand). H.264/AVC provides gains in compression efficiency of up to 50% over a wide range of bit rates and video resolutions compared to previous standards. Compared to previous standards, the decoder complexity is about four times that of MPEG-2 and two times that of MPEG-4 Visual Simple Profile. This paper provides an overview of the new tools, features and complexity of H.264/AVC.",
"title": ""
},
{
"docid": "cefcf529227d2d29780b09bb87b2c66c",
"text": "This paper presents a simple method o f trajectory generation of robot manipulators based on an optimal control problem formulation. It was found recently that the jerk, the third derivative of position, of the desired trajectory, adversely affects the efficiency of the control algorithms and therefore should be minimized. Assuming joint position, velocity and acceleration t o be constrained a cost criterion containing jerk is considered. Initially. the simple environment without obstacles and constrained by the physical l imitat ions o f the jo in t angles only i s examined. For practical reasons, the free execution t ime has been used t o handle the velocity and acceleration constraints instead of the complete bounded state variable formulation. The problem o f minimizing the jerk along an arbitrary Cartesian trajectory i s formulated and given analytical solution, making this method useful for real world environments containing obstacles.",
"title": ""
},
{
"docid": "6f8e1de02845febc1a42e9600437a2fc",
"text": "A key challenge in information retrieval is that of on-line ranker evaluation: determining which one of a finite set of rankers performs the best in expectation on the basis of user clicks on presented document lists. When the presented lists are constructed using interleaved comparison methods, which interleave lists proposed by two different candidate rankers, then the problem of minimizing the total regret accumulated while evaluating the rankers can be formalized as a K-armed dueling bandit problem. In the setting of web search, the number of rankers under consideration may be large. Scaling effectively in the presence of so many rankers is a key challenge not adequately addressed by existing algorithms.\n We propose a new method, which we call mergeRUCB, that uses \"localized\" comparisons to provide the first provably scalable K-armed dueling bandit algorithm. Empirical comparisons on several large learning to rank datasets show that mergeRUCB can substantially outperform the state of the art K-armed dueling bandit algorithms when many rankers must be compared. Moreover, we provide theoretical guarantees demonstrating the soundness of our algorithm.",
"title": ""
},
{
"docid": "17130d2f31980978e3316b800b450ddd",
"text": "Automatic question-answering is a classical problem in natural language processing, which aims at designing systems that can automatically answer a question, in the same way as human does. In this work, we propose a deep learning based model for automatic question-answering. First the questions and answers are embedded using neural probabilistic modeling. Then a deep similarity neural network is trained to find the similarity score of a pair of answer and question. Then for each question, the best answer is found as the one with the highest similarity score. We first train this model on a large-scale public question-answering database, and then fine-tune it to transfer to the customer-care chat data. We have also tested our framework on a public question-answering database and achieved very good performance.",
"title": ""
},
{
"docid": "1852d9b0fab03cfc3abe5e0448198299",
"text": "Efficient exploration in high-dimensional environments remains a key challenge in reinforcement learning (RL). Deep reinforcement learning methods have demonstrated the ability to learn with highly general policy classes for complex tasks with high-dimensional inputs, such as raw images. However, many of the most effective exploration techniques rely on tabular representations, or on the ability to construct a generative model over states and actions. Both are exceptionally difficult when these inputs are complex and high dimensional. On the other hand, it is comparatively easy to build discriminative models on top of complex states such as images using standard deep neural networks. This paper introduces a novel approach, EX, which approximates state visitation densities by training an ensemble of discriminators, and assigns reward bonuses to rarely visited states. We demonstrate that EX achieves comparable performance to the state-of-the-art methods on lowdimensional tasks, and its effectiveness scales into high-dimensional state spaces such as visual domains without hand-designing features or density models.",
"title": ""
},
{
"docid": "02d8ad18b07d08084764d124dc74a94c",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "8cc28165debbb8cc430dc78098c0cd87",
"text": "Aaron Kravitz, for their help with the data collection. We are grateful to Ole-Kristian Hope, Jan Mahrt-Smith, and seminar participants at the University of Toronto for useful comments. Abstract Managers make different decisions in countries with poor protection of investor rights and poor financial development. One possible explanation is that shareholder-wealth maximizing managers face different tradeoffs in such countries (the tradeoff theory). Alternatively, firms in such countries are less likely to be managed for the benefit of shareholders because the poor protection of investor rights makes it easier for management and controlling shareholders to appropriate corporate resources for their own benefit (the agency costs theory). Holdings of liquid assets by firms across countries are consistent with Keynes' transaction and precautionary demand for money theories. Firms in countries with greater GDP per capita hold more cash as predicted. Controlling for economic development, firms in countries with more risk and with poor protection of investor rights hold more cash. The tradeoff theory and the agency costs theory can both explain holdings of liquid assets across countries. However, the fact that a dollar of cash is worth less than $0.65 to the minority shareholders of firms in such countries but worth approximately $1 in countries with good protection of investor rights and high financial development is only consistent with the agency costs theory. 2 1. Introduction Recent work shows that countries where institutions that protect investor rights are weak perform poorly along a number of dimensions. In particular, these countries have lower growth, less well-developed financial markets, and more macroeconomic volatility. 1 To measure the quality of institutions, authors have used, for instance, indices of the risk of expropriation, the level of corruption, and the rule of law. Since poor institutions could result from poor economic performance rather than cause it, authors have also used the origin of a country's legal system (La 2003) as instruments for the quality of institutions. For the quality of institutions to matter for economic performance, it has to affect the actions of firms and individuals. Recent papers examine how dividend, investment, asset composition, and capital structure policies are related to the quality of institutions. 2 In this paper, we focus more directly on why firm policies depend on the quality of institutions. The quality of institutions can affect firm policies for two different reasons. First, a country's protection of investor rights may influence the relative prices or …",
"title": ""
},
{
"docid": "1f3159097ddf38968e8fe03b7391fce5",
"text": "Participants presented with auditory, visual, or bi-sensory audio–visual stimuli in a speeded discrimination task, fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component—a ‘visual dominance’ effect. The current study investigated further the sensory dominance phenomenon in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect also in bi-sensory trials of combined haptic–visual stimuli, but no bias towards either sensory modality in bi-sensory trials of haptic–auditory stimuli. When presented with tri-sensory trials of combined auditory–visual–haptic stimuli, participants made more errors of responding only to two corresponding sensory signals than errors of responding only to a single sensory modality, however, there were no biases towards either sensory modality (or sensory pairs) in the distribution of both types of errors (i.e. responding only to a single stimulus or to pairs of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, it is limited to bi-sensory combinations in which the visual signal is combined with another single stimulus. However, in a tri-sensory combination when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than of missing only one signal and therefore the visual dominance disappears.",
"title": ""
},
{
"docid": "113cf957b47a8b8e3bbd031aa9a28ff2",
"text": "We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use nonpropositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.",
"title": ""
}
] |
scidocsrr
|
2e55b9e280c82ad6d994acd2bbf7b280
|
Wheat grass juice reduces transfusion requirement in patients with thalassemia major: a pilot study.
|
[
{
"docid": "242746fd37b45c83d8f4d8a03c1079d3",
"text": "BACKGROUND\nThe use of wheat grass (Triticum aestivum) juice for treatment of various gastrointestinal and other conditions had been suggested by its proponents for more than 30 years, but was never clinically assessed in a controlled trial. A preliminary unpublished pilot study suggested efficacy of wheat grass juice in the treatment of ulcerative colitis (UC).\n\n\nMETHODS\nA randomized, double-blind, placebo-controlled study. One gastroenterology unit in a tertiary hospital and three study coordinating centers in three major cities in Israel. Twenty-three patients diagnosed clinically and sigmoidoscopically with active distal UC were randomly allocated to receive either 100 cc of wheat grass juice, or a matching placebo, daily for 1 month. Efficacy of treatment was assessed by a 4-fold disease activity index that included rectal bleeding and number of bowel movements as determined from patient diary records, a sigmoidoscopic evaluation, and global assessment by a physician.\n\n\nRESULTS\nTwenty-one patients completed the study, and full information was available on 19 of them. Treatment with wheat grass juice was associated with significant reductions in the overall disease activity index (P=0.031) and in the severity of rectal bleeding (P = 0.025). No serious side effects were found. Fresh extract of wheat grass demonstrated a prominent tracing in cyclic voltammetry methodology, presumably corresponding to four groups of compounds that exhibit anti-oxidative properties.\n\n\nCONCLUSION\nWheat grass juice appeared effective and safe as a single or adjuvant treatment of active distal UC.",
"title": ""
}
] |
[
{
"docid": "e870d5f8daac0d13bdcffcaec4ba04c1",
"text": "In this paper the design, fabrication and test of X-band and 2-18 GHz wideband high power SPDT MMIC switches in microstrip GaN technology are presented. Such switches have demonstrated state-of-the-art performances. In particular the X-band switch exhibits 1 dB insertion loss, better than 37 dB isolation and a power handling capability at 9 GHz of better than 39 dBm at 1 dB insertion loss compression point; the wideband switch has an insertion loss lower than 2.2 dB, better than 25 dB isolation and a power handling capability of better than 38 dBm in the entire bandwidth.",
"title": ""
},
{
"docid": "7ccbb730f1ce8eca687875c632520545",
"text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macroand micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, potassium, phosphorous, zinc-solubilizing V.S. Meena (*) Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India Indian Council of Agricultural Research – Vivekananda Institute of Hill Agriculture, Almora 263601, Uttarakhand, India e-mail: vijayssac.bhu@gmail.com; vijay.meena@icar.gov.in I. Bahadur • B.R. Maurya Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India A. Kumar Department of Botany, MMV, Banaras Hindu University, Varanasi 221005, India R.K. Meena Department of Plant Sciences, School of Life Sciences, University of Hyderabad, Hyderabad 500046, TG, India S.K. Meena Division of Soil Science and Agricultural Chemistry, Indian Agriculture Research Institute, New Delhi 110012, India J.P. Verma Institute of Environment and Sustainable Development, Banaras Hindu University, Varanasi 22100, Uttar Pradesh, India # Springer India 2016 V.S. Meena et al. (eds.), Potassium Solubilizing Microorganisms for Sustainable Agriculture, DOI 10.1007/978-81-322-2776-2_1 1 microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.",
"title": ""
},
{
"docid": "17ba29c670e744d6e4f9e93ceb109410",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "94638dc3bac02be0317599cbc02b5cdc",
"text": "Discussion thread classification plays an important role for Massive Open Online Courses (MOOCs) forum. Most existing methods in this filed focus on extracting text features (e.g. key words) from the content of discussions using NLP methods. However, diversity of languages used in MOOC forums results in poor expansibility of these methods. To tackle this problem, in this paper, we artificially design 23 language independent features related to structure, popularity and underlying social network of thread. Furthermore, a hybrid model which combine Gradient Boosting Decision Tree (GBDT) with Linear Regression (LR) (GBDT + LR) is employed to reduce the traditional cost of feature learning for discussion threads classification manually. Experiments are carried out on the datasets contributed by Coursera with nearly 100, 000 discussion threads of 60 courses taught in 4 different languages. Results demonstrate that our method can significantly improve the performance of discussion threads classification. It is worth drawing that the average AUC of our model is 0.832, outperforming baseline by 15%.",
"title": ""
},
{
"docid": "4aa6103dca92cf8663139baf93f78a80",
"text": "We propose a unified approach for summarization based on the analysis of video structures and video highlights. Our approach emphasizes both the content balance and perceptual quality of a summary. Normalized cut algorithm is employed to globally and optimally partition a video into clusters. A motion attention model based on human perception is employed to compute the perceptual quality of shots and clusters. The clusters, together with the computed attention values, form a temporal graph similar to Markov chain that inherently describes the evolution and perceptual importance of video clusters. In our application, the flow of a temporal graph is utilized to group similar clusters into scenes, while the attention values are used as guidelines to select appropriate sub-shots in scenes for summarization.",
"title": ""
},
{
"docid": "6793ec9b73add6514f842c2899b4ecc8",
"text": "In recent decades, the ad hoc network for vehicles has been a core network technology to provide comfort and security to drivers in vehicle environments. However, emerging applications and services require major changes in underlying network models and computing that require new road network planning. Meanwhile, blockchain widely known as one of the disruptive technologies has emerged in recent years, is experiencing rapid development and has the potential to revolutionize intelligent transport systems. Blockchain can be used to build an intelligent, secure, distributed and autonomous transport system. It allows better utilization of the infrastructure and resources of intelligent transport systems, particularly effective for crowdsourcing technology. In this paper, we proposes a vehicle network architecture based on blockchain in the smart city (Block-VN). Block-VN is a reliable and secure architecture that operates in a distributed way to build the new distributed transport management system. We are considering a new network system of vehicles, Block-VN, above them. In addition, we examine how the network of vehicles evolves with paradigms focused on networking and vehicular information. Finally, we discuss service scenarios and design principles for Block-VN.",
"title": ""
},
{
"docid": "38a0f56e760b0e7a2979c90a8fbcca68",
"text": "The Rubik’s Cube is perhaps the world’s most famous and iconic puzzle, well-known to have a rich underlying mathematical structure (group theory). In this paper, we show that the Rubik’s Cube also has a rich underlying algorithmic structure. Specifically, we show that the n×n×n Rubik’s Cube, as well as the n×n×1 variant, has a “God’s Number” (diameter of the configuration space) of Θ(n/ logn). The upper bound comes from effectively parallelizing standard Θ(n) solution algorithms, while the lower bound follows from a counting argument. The upper bound gives an asymptotically optimal algorithm for solving a general Rubik’s Cube in the worst case. Given a specific starting state, we show how to find the shortest solution in an n×O(1)×O(1) Rubik’s Cube. Finally, we show that finding this optimal solution becomes NPhard in an n×n×1 Rubik’s Cube when the positions and colors of some cubies are ignored (not used in determining whether the cube is solved).",
"title": ""
},
{
"docid": "8cd52cdc44c18214c471716745e3c00f",
"text": "The design of electric vehicles require a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.",
"title": ""
},
{
"docid": "c983e94a5334353ec0e2dabb0e95d92a",
"text": "Digital family calendars have the potential to help families coordinate, yet they must be designed to easily fit within existing routines or they will simply not be used. To understand the critical factors affecting digital family calendar design, we extended LINC, an inkable family calendar to include ubiquitous access, and then conducted a month-long field study with four families. Adoption and use of LINC during the study demonstrated that LINC successfully supported the families' existing calendaring routines without disrupting existing successful social practices. Families also valued the additional features enabled by LINC. For example, several primary schedulers felt that ubiquitous access positively increased involvement by additional family members in the calendaring routine. The field trials also revealed some unexpected findings, including the importance of mobility---both within and outside the home---for the Tablet PC running LINC.",
"title": ""
},
{
"docid": "fef66948f4f647f88cc3921366f45e49",
"text": "Acoustic correlates of stress [duration, fundamental frequency (Fo), and intensity] were investigated in a language (Thai) in which both duration and Fo are employed to signal lexical contrasts. Stimuli consisted of 25 pairs of segmentally/tonally identical, syntactically ambiguous sentences. The first member of each sentence pair contained a two-syllable noun-verb sequence exhibiting a strong-strong (--) stress pattern, the second member a two-syllable noun compound exhibiting a weak-strong (--) stress pattern. Measures were taken of five prosodic dimensions of the rhyme portion of the target syllable: duration, average Fo, Fo standard deviation, average intensity, and intensity standard deviation. Results of linear regression indicated that duration is the predominant cue in signaling the distinction between stressed and unstressed syllables in Thai. Discriminant analysis showed a stress classification accuracy rate of over 99%. Findings are discussed in relation to the varying roles that Fo, intensity, and duration have in different languages given their phonological structure.",
"title": ""
},
{
"docid": "f153ee3853f40018ed0ae8b289b1efcf",
"text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.",
"title": ""
},
{
"docid": "eaf30f31b332869bc45ff1288c41da71",
"text": "Search Engines: Information Retrieval In Practice is writen by Bruce Croft in English language. Release on 2009-02-16, this book has 552 page count that consist of helpful information with easy reading experience. The book was publish by Addison-Wesley, it is one of best subjects book genre that gave you everything love about reading. You can find Search Engines: Information Retrieval In Practice book with ISBN 0136072240.",
"title": ""
},
{
"docid": "1dbe74730ec8b780d1391827491b7b45",
"text": "Collaborative filtering (CF) and contentbased filtering (CBF) have widely been used in information filtering applications, both approaches having their individual strengths and weaknesses. This paper proposes a novel probabilistic framework to unify CF and CBF, named collaborative ensemble learning. Based on content based probabilistic models for each user’s preferences (the CBF idea), it combines a society of users’ preferences to predict an active user’s preferences (the CF idea). While retaining an intuitive explanation, the combination scheme can be interpreted as a hierarchical Bayesian approach in which a common prior distribution is learned from related experiments. It does not require a global training stage and thus can incrementally incorporate new data. We report results based on two data sets, the Reuters-21578 text data set and a data base of user opionions on art images. For both data sets, collaborative ensemble achieved excellent performance in terms of recommendation accuracy. In addition to recommendation engines, collaborative ensemble learning is applicable to problems typically solved via classical hierarchical Bayes, like multisensor fusion and multitask learning.",
"title": ""
},
{
"docid": "9aee53ac010545e963f4e4697bf04ec2",
"text": "For financial institutions, the ability to predict or forecast business failures is crucial, as incorrect decisions can have direct financial consequences. Bankruptcy prediction and credit scoring are the two major research problems in the accounting and finance domain. In the literature, a number of models have been developed to predict whether borrowers are in danger of bankruptcy and whether they should be considered a good or bad credit risk. Since the 1990s, machine-learning techniques, such as neural networks and decision trees, have been studied extensively as tools for bankruptcy prediction and credit score modeling. This paper reviews 130 related journal papers from the period between 1995 and 2010, focusing on the development of state-of-the-art machine-learning techniques, including hybrid and ensemble classifiers. Related studies are compared in terms of classifier design, datasets, baselines, and other experimental factors. This paper presents the current achievements and limitations associated with the development of bankruptcy-prediction and credit-scoring models employing machine learning. We also provide suggestions for future research.",
"title": ""
},
{
"docid": "5f4235a8f9095afe6697c9fdb00e0a43",
"text": "Typically, firms decide whether or not to develop a new product based on their resources, capabilities and the return on investment that the product is estimated to generate. We propose that firms adopt a broader heuristic for making new product development choices. Our heuristic approach requires moving beyond traditional finance-based thinking, and suggests that firms concentrate on technological trajectories by combining technology roadmapping, information technology (IT) and supply chain management to make more sustainable new product development decisions. Using the proposed holistic heuristic methods, versus relying on traditional finance-based decision-making tools (e.g., emphasizing net present value or internal rate of return projections), enables firms to plan beyond the short-term and immediate set of technologies at hand. Our proposed heuristic approach enables firms to forecast technologies and markets, and hence, new product priorities in the longer term. Investments in new products should, as a result, generate returns over a longer period than traditionally expected, giving firms more sustainable investments. New products are costly and need to have a 0040-1625/$ – see front matter D 2003 Elsevier Inc. All rights reserved. doi:10.1016/S0040-1625(03)00064-7 * Corresponding author. Tel.: +1-814-863-7133. E-mail addresses: ijpetrick@psu.edu (I.J. Petrick), aie1@psu.edu (A.E. Echols). 1 Tel.: +1-814-863-0642. I.J. Petrick, A.E. Echols / Technological Forecasting & Social Change 71 (2004) 81–100 82 durable presence in the market. Transaction costs and resources will be saved, as firms make new product development decisions less frequently. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "7f662aa8c1bab3add687755dd37f52a1",
"text": "Although researchers have discovered that Minnie G. had nearly 50 years of progression-free survival, the absence of her original surgical records have precluded anything more than speculation as to the etiology of her symptoms or the details of her admission. Following IRB approval, and through the courtesy of the Alan Mason Chesney Archives, the microfilm surgical records from the Johns Hopkins Hospital, 1896–1912 were reviewed. Using the surgical number provided in Cushing’s publications, the record for Minnie G. was recovered for further review. Cushing’s diagnosis relied largely on history and physical findings. Minnie G. presented with stigmata associated with classic Cushings Syndrome: abdominal stria, supraclavicular fat pads, and a rounded face. However, she also presented with unusual physical findings: exophthalmos, and irregular pigmentation of the extremities, face, and eyelids. A note in the chart indicates Minnie G. spoke very little English, implying the history-taking was fraught with opportunities for error. Although there remains no definitive etiology for Minnie G.’s symptoms, this report contributes additional information about her diagnosis and treatment.",
"title": ""
},
{
"docid": "b3cf36dc0536d3518f1bef31c290328f",
"text": "BACKGROUND\nHospital-acquired pressure ulcers are a serious patient safety concern, associated with poor patient outcomes and high healthcare costs. They are also viewed as an indicator of nursing care quality.\n\n\nOBJECTIVE\nTo evaluate the effectiveness of a pressure ulcer prevention care bundle in preventing hospital-acquired pressure ulcers among at risk patients.\n\n\nDESIGN\nPragmatic cluster randomised trial.\n\n\nSETTING\nEight tertiary referral hospitals with >200 beds each in three Australian states.\n\n\nPARTICIPANTS\n1600 patients (200/hospital) were recruited. Patients were eligible if they were: ≥18 years old; at risk of pressure ulcer because of limited mobility; expected to stay in hospital ≥48h and able to read English.\n\n\nMETHODS\nHospitals (clusters) were stratified in two groups by recent pressure ulcer rates and randomised within strata to either a pressure ulcer prevention care bundle or standard care. The care bundle was theoretically and empirically based on patient participation and clinical practice guidelines. It was multi-component, with three messages for patients' participation in pressure ulcer prevention care: keep moving; look after your skin; and eat a healthy diet. Training aids for patients included a DVD, brochure and poster. Nurses in intervention hospitals were trained in partnering with patients in their pressure ulcer prevention care. The statistician, recruiters, and outcome assessors were blinded to group allocation and interventionists blinded to the study hypotheses, tested at both the cluster and patient level. The primary outcome, incidence of hospital-acquired pressure ulcers, which applied to both the cluster and individual participant level, was measured by daily skin inspection.\n\n\nRESULTS\nFour clusters were randomised to each group and 799 patients per group analysed. The intraclass correlation coefficient was 0.035. After adjusting for clustering and pre-specified covariates (age, pressure ulcer present at baseline, body mass index, reason for admission, residence and number of comorbidities on admission), the hazard ratio for new pressure ulcers developed (pressure ulcer prevention care bundle relative to standard care) was 0.58 (95% CI: 0.25, 1.33; p=0.198). No adverse events or harms were reported.\n\n\nCONCLUSIONS\nAlthough the pressure ulcer prevention care bundle was associated with a large reduction in the hazard of ulceration, there was a high degree of uncertainty around this estimate and the difference was not statistically significant. Possible explanations for this non-significant finding include that the pressure ulcer prevention care bundle was effective but the sample size too small to detect this.",
"title": ""
},
{
"docid": "36bdd8eefd2f72d06a4cefe68127ce04",
"text": "Dantzig, Fulkerson, and Johnson (1954) introduced the cutting-plane method as a means of attacking the traveling salesman problem; this method has been applied to broad classes of problems in combinatorial optimization and integer programming. In this paper we discuss an implementation of Dantzig et al.'s method that is suitable for TSP instances having 1,000,000 or more cities. Our aim is to use the study of the TSP as a step towards understanding the applicability and limits of the general cutting-plane method in large-scale applications. 1. The Cutting-Plane Method The symmetric traveling salesman problem, or TSP for short, is this: given a nite number of \\cities\" along with the cost of travel between each pair of them, nd the cheapest way of visiting all of the cities and returning to your starting point. The travel costs are symmetric in the sense that traveling from city X to city Y costs just as much as traveling from Y to X; the \\way of visiting all of the cities\" is simply the order in which the cities are visited. The prominence of the TSP in the combinatorial optimization literature is to a large extent due to its success as an engine-of-discovery for techniques that have application far beyond the narrow con nes of the TSP itself. Foremost among the TSP-inspired discoveries is Dantzig, Fulkerson, and Johnson's (1954) cutting-plane method, which can be used to attack any problem minimize cx subject to x 2 S (1) such that S is a nite subset of some R and such that an eÆcient algorithm to recognize points of S is available. This method is iterative; each of its D. Applegate: Algorithms and Optimization Department, AT&T Labs { Research, Florham Park, NJ 07932, USA R. Bixby: Computational and Applied Mathematics, Rice University, Houston, TX 77005, USA V. Chv atal: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA W. Cook: Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA ? Supported by ONR Grant N00014-03-1-0040 2 David Applegate et al. iterations begins with a linear programming (LP) relaxation of (1), meaning a problem minimize cx subject to Ax b (2) such that the polyhedron P de ned as fx : Ax bg contains S and is bounded. Since P is bounded, we can nd an optimal solution x of (2) such that x is an extreme point of P . If x belongs to S, then it constitutes an optimal solution of (1); otherwise some linear inequality is satis ed by all the points in S and violated by x ; such an inequality is called a cutting plane or simply a cut . In the latter case, we nd a nonempty family of cuts, add them to the system Ax b, and use the resulting tighter relaxation of (1) in the next iteration of the procedure. Dantzig et al. demonstrated the power of their cutting-plane method by solving a 49-city instance of the TSP, which was an impressive size in 1954. The TSP is a special case of (1) with m = n(n 1)=2, where n is the number of the cities, and with S consisting of the set of the incidence vectors of all the Hamiltonian cycles through the set V of the n cities; in this context, Hamiltonian cycles are commonly called tours. In Dantzig et al.'s attack, the initial P consists of all vectors x, with components subscripted by edges of the complete graph on V , that satisfy 0 xe 1 for all edges e (3) and P (xe : v 2 e) = 2 for all cities v. 
(4) (Throughout this paper, we treat the edges of a graph as two-point subsets of its vertex-set: v ∈ e means that vertex v is an endpoint of edge e; e ∩ Q ≠ ∅ means that edge e has an endpoint in set Q; e − Q ≠ ∅ means that edge e has an endpoint outside set Q; and so on.) All but two of their cuts have the form Σ(xe : e ∩ Q ≠ ∅, e − Q ≠ ∅) ≥ 2 such that Q is a nonempty proper subset of V. Dantzig et al. called such inequalities \"loop constraints\"; nowadays, they are commonly referred to as subtour elimination inequalities; we are going to call them simply subtour inequalities. (As for the two exceptional cuts, Dantzig et al. give ad hoc combinatorial arguments to show that these inequalities are satisfied by incidence vectors of all tours through the 49 cities and, in a footnote, they say \"We are indebted to I. Glicksberg of Rand for pointing out relations of this kind to us.\") The original TSP algorithm of Dantzig et al. has been extended and improved by many researchers, led by the fundamental contributions of M. Grötschel and M. Padberg; surveys of this work can be found in Grötschel and Padberg (1985), Padberg and Grötschel (1985), Jünger et al. (1995, 1997), and Naddef (2002). The cutting-plane method is the core of nearly all successful approaches proposed to date for obtaining provably optimal solutions to the TSP, and it remains the only known technique for solving instances having more than several hundred cities. Beyond the TSP, the cutting-plane method has been applied to a host of NP-hard problems (see Jünger et al. (1995)), and is an important component of modern mixed-integer-programming codes (see Marchand et al. (1999) and Bixby et al. (2000, 2003)). In this paper we discuss an implementation of the Dantzig et al. algorithm designed for TSP instances having 1,000,000 or more cities; very large TSP instances arise in applications such as genome-sequencing (Agarwala et al. (2000)), but the primary aim of our work is to use the TSP as a means of studying issues that arise in the general application of cutting-plane algorithms for large-scale problems. Instances of this size are well beyond the reach of current (exact) solution techniques, but even in this case the cutting-plane method can be used to provide strong lower bounds on the optimal tour lengths. For example, we use cutting planes to show that the best known tour for a specific 1,000,000-city randomly generated Euclidean instance is no more than 0.05% from optimality. This instance was created by David S. Johnson in 1994, studied by Johnson and McGeoch (1997, 2002) and included in the DIMACS (2001) challenge test set under the name \"E1M.0\". Its cities are points with integer coordinates drawn uniformly from the 1,000,000 by 1,000,000 grid; the cost of an edge is the Euclidean distance between the corresponding points, rounded to the nearest integer. The paper is organized as follows. In Section 2 we present separation algorithms for subtour inequalities and in Section 3 we present simple methods for separating a further class of TSP inequalities known as \"blossoms\"; in these two sections we consider only methods that can be easily applied to large problem instances. In Section 4 we discuss methods for adjusting cutting planes to respond to changes in the optimal LP solution x*; again, we consider only procedures that perform well on large instances.
In Section 5 we discuss a linear-time implementation of the \"local cut\" technique for generating TSP inequalities by mapping the space of variables to a space of very low dimension. The core LP problem that needs to be solved in each iteration of the cutting-plane algorithm is discussed in Section 6. Data structures for storing cutting planes are treated in Section 7 and methods for handling the n(n−1)/2 edges are covered in Section 8. In Section 9 we report on computational results for a variety of test instances. The techniques developed in this paper are incorporated into the Concorde computer code of Applegate et al. (2003); the Concorde code is freely available for use in research studies. 2. Subtour Inequalities. A separation algorithm for a class C of linear inequalities is an algorithm that, given any x*, returns either an inequality in C that is violated by x* or a failure message. Separation algorithms that return a failure message only if all inequalities in C are satisfied by x* are called exact; separation algorithms that may return a failure message even when some inequality in C is violated by x* are called heuristic. We present below several fast heuristics for subtour separation, and discuss briefly the Padberg and Rinaldi (1990a) exact subtour separation procedure. 2.1. The x(S, T) notation. Let V be a finite set of cities, let E be the edge-set of the complete graph on V, and let w be a vector indexed by E. Given disjoint subsets S, T of V, we write w(S, T) to mean Σ(we : e ∈ E, e ∩ S ≠ ∅, e ∩ T ≠ ∅). This notation is adopted from Ford and Fulkerson (1962); using it, the subtour inequality corresponding to S can be written as",
"title": ""
},
{
"docid": "f4e7e0ea60d9697e8fea434990409c16",
"text": "Prognostics is very useful to predict the degradation trend of machinery and to provide an alarm before a fault reaches critical levels. This paper proposes an ARIMA approach to predict the future machine status with accuracy improvement by an improved forecasting strategy and an automatic prediction algorithm. Improved forecasting strategy increases the times of model building and creates datasets for modeling dynamically to avoid using the previous values predicted to forecast and generate the predictions only based on the true observations. Automatic prediction algorithm can satisfy the requirement of real-time prognostics by automates the whole process of ARIMA modeling and forecasting based on the Box-Jenkins's methodology and the improved forecasting strategy. The feasibility and effectiveness of the approach proposed is demonstrated through the prediction of the vibration characteristic in rotating machinery. The experimental results show that the approach can be applied successfully and effectively for prognostics of machine health condition.",
"title": ""
}
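As a rough illustration of the rolling re-fit idea described in the record above (this is not code from the cited paper), the sketch below rebuilds an ARIMA model at every forecasting step so that each prediction is conditioned only on true observations, never on earlier forecasts. The (2, 1, 2) order and the train/test split are placeholder assumptions, since the paper selects model orders automatically via the Box-Jenkins methodology.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rolling_one_step_forecast(series, order=(2, 1, 2), n_test=20):
    """Re-fit the ARIMA model at every step so each forecast uses only true observations."""
    history = list(series[:-n_test])
    predictions = []
    for t in range(n_test):
        model = ARIMA(history, order=order).fit()         # model is rebuilt on the latest data
        predictions.append(model.forecast(steps=1)[0])    # one-step-ahead prediction
        history.append(series[len(series) - n_test + t])  # append the true value, never the forecast
    return np.array(predictions)
```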
] |
scidocsrr
|
9b26ccdaafcfd71b7bad0623378094f7
|
Pendulum-balanced autonomous unicycle: Conceptual design and dynamics model
|
[
{
"docid": "730d5e6577936ef3b513d0a7f4fa3641",
"text": "In this research a computer simulation for implementing attitude controller of wheeled inverted pendulum is carried out. The wheeled inverted pendulum is a kind of an inverted pendulum that has two equivalent points. In order to keep the naturally unstable equivalent point, it should be controlling the wheels persistently. Dynamic equations of the wheeled inverted pendulum are derived with considering tilted road as one of various road conditions. A linear quadratic regulator is adopted for the attitude controller since it is easy to obtain full state variables from the sensors for that control scheme and based on controllable condition of the destination as well. Various computer simulation shows that the LQR controller is doing well not only flat road but also tilted road.",
"title": ""
},
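The record above applies a linear quadratic regulator to stabilize a wheeled inverted pendulum. As an illustration only, a minimal Python sketch of how such a gain can be obtained from a linearized model is given below; the state-space matrices A and B and the weights Q and R are hypothetical placeholder values, not parameters from the cited paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized model of a wheeled inverted pendulum.
# State x = [tilt angle, tilt rate, wheel position, wheel speed]; input u = wheel torque.
A = np.array([[0.0,  1.0, 0.0, 0.0],
              [24.5, 0.0, 0.0, 0.0],
              [0.0,  0.0, 0.0, 1.0],
              [-2.7, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [-5.8], [0.0], [1.9]])

Q = np.diag([10.0, 1.0, 1.0, 1.0])   # state weighting
R = np.array([[0.1]])                # input weighting

# Solve the continuous-time algebraic Riccati equation and form K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

def attitude_control(x):
    """Full-state feedback u = -K x keeps the pendulum near the unstable equilibrium."""
    return -(K @ x)
```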
{
"docid": "54120754dc82632e6642cbd08401d2dc",
"text": "In this paper we study the dynamic modeling of a unicycle robot composed of a wheel, a frame and a disk. The unicycle can reach longitudinal stability by appropriate control to the wheel and lateral stability by adjusting appropriate torque imposed by the disk. The dynamic modeling of the unicycle robot is derived by Euler-Lagrange method. The stability and controllability of the system are analyzed according to the mathematic model. Independent simulation using MATLAB and ODE methods are then proposed respectively. Through the simulation, we confirm the validity of the two obtained models of the unicycle robot system, and provide two experimental platforms for the designing of the balance controller.",
"title": ""
}
] |
[
{
"docid": "448d70d9f5f8e5fcb8d04d355a02c8f9",
"text": "Structural health monitoring (SHM) using wireless sensor networks (WSNs) has gained research interest due to its ability to reduce the costs associated with the installation and maintenance of SHM systems. SHM systems have been used to monitor critical infrastructure such as bridges, high-rise buildings, and stadiums and has the potential to improve structure lifespan and improve public safety. The high data collection rate of WSNs for SHM pose unique network design challenges. This paper presents a comprehensive survey of SHM using WSNs outlining the algorithms used in damage detection and localization, outlining network design challenges, and future research directions. Solutions to network design problems such as scalability, time synchronization, sensor placement, and data processing are compared and discussed. This survey also provides an overview of testbeds and real-world deployments of WSNs for SH.",
"title": ""
},
{
"docid": "52c7469ba9164280a9de841537e530d7",
"text": "Monitoring the “physics” of control systems to detect attacks is a growing area of research. In its basic form a security monitor creates time-series models of sensor readings for an industrial control system and identifies anomalies in these measurements in order to identify potentially false control commands or false sensor readings. In this paper, we review previous work based on a unified taxonomy that allows us to identify limitations, unexplored challenges, and new solutions. In particular, we propose a new adversary model and a way to compare previous work with a new evaluation metric based on the trade-off between false alarms and the negative impact of undetected attacks. We also show the advantages and disadvantages of three experimental scenarios to test the performance of attacks and defenses: real-world network data captured from a large-scale operational facility, a fully-functional testbed that can be used operationally for water treatment, and a simulation of frequency control in the power grid.",
"title": ""
},
{
"docid": "28c5fada2aab828af16ee5d7bffb4093",
"text": "Based on the notion of accumulators, we propose a new cryptog raphic scheme called universal accumulators. This scheme enables one to commit to a set of values using a short accumulator and to efficiently com pute a membership witness of any value that has been accumulated. Unlike tradi tional accumulators, this scheme also enables one to efficiently compute a nonmemb ership witness of any value that has not been accumulated. We give a construc tion for universal accumulators and prove its security based on the strong RSA a ssumption. We further present a construction for dynamic universal accumula tors; this construction allows one to dynamically add and delete inputs with constan t computational cost. Our construction directly builds upon Camenisch and L ysyanskaya’s dynamic accumulator scheme. Universal accumulators can be se en as an extension to dynamic accumulators with support of nonmembership witn ess. We also give an efficient zero-knowledge proof protocol for proving that a committed value is not in the accumulator. Our dynamic universal accumulator c onstruction enables efficient membership revocation in an anonymous fashion.",
"title": ""
},
{
"docid": "148d0709c58111c2f703f68d348c09af",
"text": "There has been tremendous growth in the use of mobile devices over the last few years. This growth has fueled the development of millions of software applications for these mobile devices often called as 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.",
"title": ""
},
{
"docid": "f0cabaa5dedadd65313af78c42a2df35",
"text": "In this paper, a quadrifilar spiral antenna (QSA) with an integrated module for UHF radio frequency identification (RFID) reader is presented. The proposed QSA consists of four spiral antennas with short stubs and a microstrip feed network. Also, the shielded module is integrated on the center of the ground inside the proposed QSA. In order to match the proposed QSA with the integrated module, we adopt a short stub connected from each spiral antenna to ground. Experimental result shows that the QSA of size 80 × 80 × 11.2 mm3 with the integrated module (40 × 40 × 3 mm3) has a peak gain of 3.5 dBic, an axial ratio under 2.5 dB and a 3-dB beamwidth of about 130o.",
"title": ""
},
{
"docid": "0ccfe04a4426e07dcbd0260d9af3a578",
"text": "We present an efficient algorithm to perform approximate offsetting operations on geometric models using GPUs. Our approach approximates the boundary of an object with point samples and computes the offset by merging the balls centered at these points. The underlying approach uses Layered Depth Images (LDI) to organize the samples into structured points and performs parallel computations using multiple cores. We use spatial hashing to accelerate intersection queries and balance the workload among various cores. Furthermore, the problem of offsetting with a large distance is decomposed into successive offsetting using smaller distances. We derive bounds on the accuracy of offset computation as a function of the sampling rate of LDI and offset distance. In practice, our GPU-based algorithm can accurately compute offsets of models represented using hundreds of thousands of points in a few seconds on GeForce GTX 580 GPU. We observe more than 100 times speedup over prior serial CPU-based approximate offset computation algorithms.",
"title": ""
},
{
"docid": "e72f8ad61a7927fee8b0a32152b0aa4b",
"text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratiobased approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.",
"title": ""
},
{
"docid": "d3682d2a9e11f80a51c53659c9b6623d",
"text": "Despite the considerable clinical impact of congenital human cytomegalovirus (HCMV) infection, the mechanisms of maternal–fetal transmission and the resultant placental and fetal damage are largely unknown. Here, we discuss animal models for the evaluation of CMV vaccines and virus-induced pathology and particularly explore surrogate human models for HCMV transmission and pathogenesis in the maternal–fetal interface. Studies in floating and anchoring placental villi and more recently, ex vivo modeling of HCMV infection in integral human decidual tissues, provide unique insights into patterns of viral tropism, spread, and injury, defining the outcome of congenital infection, and the effect of potential antiviral interventions.",
"title": ""
},
{
"docid": "5fba6770fef320c6e7dee2c848a0a503",
"text": "Person re-identification (Re-ID) aims at recognizing the same person from images taken across different cameras. To address this task, one typically requires a large amount labeled data for training an effective Re-ID model, which might not be practical for real-world applications. To alleviate this limitation, we choose to exploit a sufficient amount of pre-existing labeled data from a different (auxiliary) dataset. By jointly considering such an auxiliary dataset and the dataset of interest (but without label information), our proposed adaptation and re-identification network (ARN) performs unsupervised domain adaptation, which leverages information across datasets and derives domain-invariant features for Re-ID purposes. In our experiments, we verify that our network performs favorably against state-of-the-art unsupervised Re-ID approaches, and even outperforms a number of baseline Re-ID methods which require fully supervised data for training.",
"title": ""
},
{
"docid": "9dceccb7b171927a5cba5a16fd9d76c6",
"text": "This paper involved developing two (Type I and Type II) equal-split Wilkinson power dividers (WPDs). The Type I divider can use two short uniform-impedance transmission lines, one resistor, one capacitor, and two quarter-wavelength (λ/4) transformers in its circuit. Compared with the conventional equal-split WPD, the proposed Type I divider can relax the two λ/4 transformers and the output ports layout restrictions of the conventional WPD. To eliminate the number of impedance transformers, the proposed Type II divider requires only one impedance transformer attaining the optimal matching design and a compact size. A compact four-way equal-split WPD based on the proposed Type I and Type II dividers was also developed, facilitating a simple layout, and reducing the circuit size. Regarding the divider, to obtain favorable selectivity and isolation performance levels, two Butterworth filter transformers were integrated in the proposed Type I divider to perform filter response and power split functions. Finally, a single Butterworth filter transformer was integrated in the proposed Type II divider to demonstrate a compact filtering WPD.",
"title": ""
},
{
"docid": "39e30b2303342235780c7fff68cdc0aa",
"text": "The impact factor is only one of three standardized measures created by the Institute of Scientific Information (ISI), which can be used to measure the way a journal receives citations to its articles over time. The build-up of citations tends to follow a curve like that of Figure 1. Citations to articles published in a given year rise sharply to a peak between two and six years after publication. From this peak citations decline exponentially. The citation curve of any journal can be described by the relative size of the curve (in terms of area under the line), the extent to which the peak of the curve is close to the origin and the rate of decline of the curve. These characteristics form the basis of the ISI indicators impact factor, immediacy index and cited half-life . The impact factor is a measure of the relative size of the citation curve in years 2 and 3. It is calculated by dividing the number of current citations a journal receives to articles published in the two previous years by the number of articles published in those same years. So, for example, the 1999 impact factor is the citations in 1999 to articles published in 1997 and 1998 divided by the number of articles published in 1997 and 1998. The number that results can be thought of as the average number of citations the average article receives per annum in the two years after the publication year. The immediacy index gives a measure of the skewness of the curve, that is, the extent to which the peak of the curve lies near the origin of the graph. It is calculated by dividing the citations a journal receives in the current year by the number of articles it publishes in that year, i.e., the 1999 immediacy index is the average number of citations in 1999 to articles published in 1999. The number that results can be thought of as the initial gradient of the citation curve, a measure of how quickly items in that journal get cited upon publication. The cited half-life is a measure of the rate of decline of the citation curve. It is the number of years that the number of current citations takes to decline to 50% of its initial value; the cited half-life is 6 years in the example given in (Figure 1). It is a measure of how long articles in a journal continue to be cited after publication.",
"title": ""
},
{
"docid": "200ee6830f8b8f54ecb1c808c6712337",
"text": "DC power distribution systems for building application are gaining interest both in academic and industrial world, due to potential benefits in terms of energy efficiency and capital savings. These benefits are more evident were the end-use loads are natively DC (e.g., computers, solid-state lighting or variable speed drives for electric motors), like in data centers and commercial buildings, but also in houses. When considering the presence of onsite renewable generation, e.g. PV or micro-wind generators, storage systems and electric vehicles, DC-based building microgrids can bring additional benefits, allowing direct coupling of DC loads and DC Distributed energy Resources (DERs). A number of demonstrating installations have been built and operated around the world, and an effort is being made both in USA and Europe to study different aspects involved in the implementation of a DC distribution system (e.g. safety, protection, control) and to develop standards for DC building application. This paper discusses on the planning of an experimental DC microgrid with power hardware in the loop features at the University of Naples Federico II, Dept. of Electr. Engineering and Inf. Technologies. The microgrid consists of a 3-wire DC bus, with positive, negative and neutral poles, with a voltage range of +/-0÷400 V. The system integrates a number of DERs, like PV, Wind and Fuel Cell generators, battery and super capacitor based storage systems, EV chargers, standard loads and smart loads. It will include also a power-hardware-in-the-loop platform with the aim to enable the real time emulation of single components or parts of the microgrid, or of systems and sub-systems interacting with the microgrid, thus realizing a virtual extension of the scale of the system. Technical features and specifications of the power amplifier to be used as power interface of the PHIL platform will be discussed in detail.",
"title": ""
},
{
"docid": "92137a6f5fa3c5059bdb08db2fb5c39d",
"text": "Motivated by our ongoing efforts in the development of Refraction 2, a puzzle game targeting mathematics education, we realized that the quality of a puzzle is critically sensitive to the presence of alternative solutions with undesirable properties. Where, in our game, we seek a way to automatically synthesize puzzles that can only be solved if the player demonstrates specific concepts, concern for the possibility of undesirable play touches other interactive design domains. To frame this problem (and our solution to it) in a general context, we formalize the problem of generating solvable puzzles that admit no undesirable solutions as an NPcomplete search problem. By making two design-oriented extensions to answer set programming (a technology that has been recently applied to constrained game content generation problems) we offer a general way to declaratively pose and automatically solve the high-complexity problems coming from this formulation. Applying this technique to Refraction, we demonstrate a qualitative leap in the kind of puzzles we can reliably generate. This work opens up new possibilities for quality-focused content generators that guarantee properties over their entire combinatorial space of play.",
"title": ""
},
{
"docid": "584d2858178e4e33855103a71d7fdce4",
"text": "This paper presents 5G mm-wave phased-array antenna for 3D-hybrid beamforming. This uses MFC to steer beam for the elevation, and uses butler matrix network for the azimuth. In case of butler matrix network, this, using 180° ring hybrid coupler switch network, is proposed to get additional beam pattern and improved SRR in comparison with conventional structure. Also, it can be selected 15 of the azimuth beam pattern. When using the chip of proposed structure, it is possible to get variable kind of beam-forming over 1000. In addition, it is suitable 5G system or a satellite communication system that requires a beamforming.",
"title": ""
},
{
"docid": "292d7fbc9352dc1d2a84364d66dda308",
"text": "The ultrastructure of somatic cells present in gonadal tubules in male oyster Crassostrea gigas was investigated. These cells, named Intragonadal Somatic Cells (ISCs) have a great role in the organization of the germinal epithelium in the gonad. Immunological detection of α-tubulin tyrosine illustrates their association in columns from the basis to the lumen of the tubule, stabilized by numerous adhesive junctions. This somatic intragonadal organization delimited some different groups of germ cells along the tubule walls. In early stages of gonad development, numerous phagolysosomes were observed in the cytoplasm of ISCs indicating that these cells have in this species an essential role in the removal of waste sperm in the tubules. Variations of lipids droplets content in the cytoplasm of ISCs were also noticed along the spermatogenesis course. ISCs also present some mitochondria with tubullo-lamellar cristae.",
"title": ""
},
{
"docid": "5c31ed81a9c8d6463ce93890e38ad7b5",
"text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.",
"title": ""
},
{
"docid": "428069c804c035e028e9047d6c1f70f7",
"text": "We present a co-designed scheduling framework and platform architecture that together support compositional scheduling of real-time systems. The architecture is built on the Xen virtualization platform, and relies on compositional scheduling theory that uses periodic resource models as component interfaces. We implement resource models as periodic servers and consider enhancements to periodic server design that significantly improve response times of tasks and resource utilization in the system while preserving theoretical schedulability results. We present an extensive evaluation of our implementation using workloads from an avionics case study as well as synthetic ones.",
"title": ""
},
{
"docid": "ec9f793761ebd5199c6a2cc8c8215ac4",
"text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. Some prototypes have been fabricated and measured, displaying a very good performance.",
"title": ""
},
{
"docid": "d62ab0d9f243aebea62d782ec4163c69",
"text": "Recommender Systems (RS) serve online customers in identifying those items from a variety of choices that best match their needs and preferences. In this context explanations summarize the reasons why a specific item is proposed and strongly increase the users' trust in the system's results. In this paper we propose a framework for generating knowledgeable explanations that exploits domain knowledge to transparently argue why a recommended item matches the user's preferences. Furthermore, results of an online experiment on a real-world platform show that users' perception of the usability of a recommender system is positively influenced by knowledgeable explanations and that consequently users' experience in interacting with the system, their intention to use it repeatedly as well as their commitment to recommend it to others are increased.",
"title": ""
},
{
"docid": "cd9632f63fc5e3acf0ebb1039048f671",
"text": "The authors completed an 8-week practice placement at Thrive’s garden project in Battersea Park, London, as part of their occupational therapy degree programme. Thrive is a UK charity using social and therapeutic horticulture (STH) to enable disabled people to make positive changes to their own lives (Thrive 2008). STH is an emerging therapeutic movement, using horticulture-related activities to promote the health and wellbeing of disabled and vulnerable people (Sempik et al 2005, Fieldhouse and Sempik 2007). Within Battersea Park, Thrive has a main garden with available indoor facilities and two satellite gardens. All these gardens are publicly accessible. Thrive Battersea’s service users include people with learning disabilities, mental health challenges and physical disabilities. Thrive’s group facilitators (referred to as therapists) lead regular gardening groups, aiming to enable individual performance within the group and being mindful of health conditions and circumstances. The groups have three types of participant: Thrive’s therapists, service users (known as gardeners) and volunteers. The volunteers help Thrive’s therapists and gardeners to perform STH activities. The gardening groups comprise participants from various age groups and abilities. Thrive Battersea provides ongoing contact between the gardeners, volunteers and therapists. Integrating service users and non-service users is a method of tackling negative attitudes to disability and also promoting social inclusion (Sayce 2000). Thrive Battersea is an example of a ‘role-emerging’ practice placement, which is based outside either local authorities or the National Health Service (NHS) and does not have an on-site occupational therapist (College of Occupational Therapists 2006). The connection of occupational therapy theory to practice is essential on any placement (Alsop 2006). The roleemerging nature of this placement placed additional reflective onus on the authors to identify the links between theory and practice. The authors observed how Thrive’s gardeners connected to the spaces they worked and to the people they worked with. A sense of individual Gardening and belonging: reflections on how social and therapeutic horticulture may facilitate health, wellbeing and inclusion",
"title": ""
}
] |
scidocsrr
|
cab295fa3f02872eb2dd23a2e34aaf22
|
Automatic playtesting for game parameter tuning via active learning
|
[
{
"docid": "f672af55234d85a113e45fcb65a2149f",
"text": "In recent years, the fields of Interactive Storytelling and Player Modelling have independently enjoyed increased interest in both academia and the computer games industry. The combination of these technologies, however, remains largely unexplored. In this paper, we present PaSSAGE (PlayerSpecific Stories via Automatically Generated Events), an interactive storytelling system that uses player modelling to automatically learn a model of the player’s preferred style of play, and then uses that model to dynamically select the content of an interactive story. Results from a user study evaluating the entertainment value of adaptive stories created by our system as well as two fixed, pre-authored stories indicate that automatically adapting a story based on learned player preferences can increase the enjoyment of playing a computer role-playing game for certain types of players.",
"title": ""
},
{
"docid": "326493520ccb5c8db07362f412f57e62",
"text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.",
"title": ""
}
] |
[
{
"docid": "e07756fb1ae9046c3b8c29b85a00bf0f",
"text": "We present a clustering scheme that combines a mode-seeking phase with a cluster merging phase in the corresponding density map. While mode detection is done by a standard graph-based hill-climbing scheme, the novelty of our approach resides in its use of topological persistence to guide the merging of clusters. Our algorithm provides additional feedback in the form of a set of points in the plane, called a persistence diagram (PD), which provably reflects the prominences of the modes of the density. In practice, this feedback enables the user to choose relevant parameter values, so that under mild sampling conditions the algorithm will output the correct number of clusters, a notion that can be made formally sound within persistence theory. In addition, the output clusters have the property that their spatial locations are bound to the ones of the basins of attraction of the peaks of the density.\n The algorithm only requires rough estimates of the density at the data points, and knowledge of (approximate) pairwise distances between them. It is therefore applicable in any metric space. Meanwhile, its complexity remains practical: although the size of the input distance matrix may be up to quadratic in the number of data points, a careful implementation only uses a linear amount of memory and takes barely more time to run than to read through the input.",
"title": ""
},
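The record above describes mode-seeking clustering whose cluster merging is guided by topological persistence. The following Python sketch is illustrative only: it assumes the density estimates and points are given as NumPy arrays, and the function name, the merging threshold tau, and the use of scikit-learn's NearestNeighbors for the k-NN graph are choices made for this example rather than details taken from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def persistence_clustering(points, density, k=10, tau=0.1):
    """Mode-seeking on a k-NN graph followed by persistence-guided cluster merging."""
    n = len(points)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    parent = np.arange(n)        # union-find forest over points
    birth = density.copy()       # density value at each cluster's peak

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in np.argsort(-density):                # sweep points by decreasing density
        higher = [j for j in idx[i] if density[j] > density[i]]
        if not higher:
            continue                              # i is a local mode: it starts a new cluster
        ri = find(max(higher, key=lambda j: density[j]))
        parent[i] = ri                            # hill-climb to the densest neighbour's cluster
        for j in higher:                          # consider merging the other neighbouring clusters
            rj = find(j)
            if rj == ri:
                continue
            low, high = (rj, ri) if birth[rj] < birth[ri] else (ri, rj)
            if birth[low] - density[i] < tau:     # prominence below tau: merge the younger cluster
                parent[low] = high
                ri = high
    return np.array([find(i) for i in range(n)])
```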
{
"docid": "0019353f6d685f459516bccaa9d1f187",
"text": "Since the Global Positioning System (GPS) was launched, significant progress has been made in GPS receiver technology but the multipath error remains an unsolved problem. As solutions based on signal processing are not adequate, the most effective approach to discriminate between direct and multipath waves is to specify new and more restrictive criteria in the design of the receiving antenna. An innovative low profile, lightweight dual band (L1+L2) GPS radiator with a high multipath-rejection capability is presented. The proposed solution has been realized by two stacked shorted annular elliptical patch antennas. In what follows, a detailed account of the design process and antenna performances is given, presenting both simulated and experimental results.",
"title": ""
},
{
"docid": "c105fdde48fdcbab369dc9698dc9fce9",
"text": "Social link identification SIL, that is to identify accounts across different online social networks that belong to the same user, is an important task in social network applications. Most existing methods to solve this problem directly applied machine-learning classifiers on features extracted from user’s rich information. In practice, however, only some limited user information can be obtained because of privacy concerns. In addition, we observe the existing methods cannot handle huge amount of potential account pairs from different OSNs. In this paper, we propose an effective SIL method to address the above two challenges by expanding known anchor links (seed account pairs belonging to the same person). In particular, we leverage potentially useful information possessed by the existing anchor link, and then develop a local expansion model to identify new social links, which are taken as a generated anchor link to be used for iteratively identifying additional new social link. We evaluate our method on two most popular Chinese social networks. Experimental results show our proposed method achieves much better performance in terms of both the number of correct account pairs and efficiency.",
"title": ""
},
{
"docid": "7908e315d84cf916fb4a61a083be7fe6",
"text": "A base station antenna with dual-broadband and dual-polarization characteristics is presented in this letter. The proposed antenna contains four parts: a lower-band element, an upper-band element, arc-shaped baffle plates, and a box-shaped reflector. The lower-band element consists of two pairs of dipoles with additional branches for bandwidth enhancement. The upper-band element embraces two crossed hollow dipoles and is nested inside the lower-band element. Four arc-shaped baffle plates are symmetrically arranged on the reflector for isolating the lower- and upper-band elements and improving the radiation performance of upper-band element. As a result, the antenna can achieve a bandwidth of 50.6% for the lower band and 48.2% for the upper band when the return loss is larger than 15 dB, fully covering the frequency ranges 704–960 and 1710–2690 MHz for 2G/3G/4G applications. Measured port isolation larger than 27.5 dB in both the lower and upper bands is also obtained. At last, an array that consists of two lower-band elements and five upper-band elements is discussed for giving an insight into the future array design.",
"title": ""
},
{
"docid": "ec1e79530ef20e2d8610475d07ee140d",
"text": "a School of Social Sciences, Faculty of Health, Education and Social Sciences, University of the West of Scotland, High St., Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom b School of Computing, Faculty of Science and Technology, University of the West of Scotland, Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom c School of Psychological Sciences and Health, Faculty of Humanities and Social Science, University of Strathclyde, Glasgow, Scotland, United Kingdom",
"title": ""
},
{
"docid": "8b4e1dde6a9c004ae6095d3ff5232595",
"text": "The authors tested the effect of ambient scents in a shopping mall environment. Two competing models were used. The first model is derived from the environmental psychology research stream by Mehrabian and Russel (1974) and Donovan and Rossiter (1982) where atmospheric cues generate pleasure and arousal, and, in turn, an approach/avoidance behavior. The emotion–cognition model is supported by Zajonc and Markus (1984). The second model to be tested is based on Lazarus’ (1991) cognitive theory of emotions. In this latter model, shoppers’ perceptions of the retail environment and product quality mediate the effects of ambient scent cues on emotions and spending behaviors. Positive affect is enhanced from shoppers’ evaluations. Using structural equation modeling the authors conclude that the cognitive theory of emotions better explains the effect of ambient scent. Managerial implications are discussed. D 2003 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "4efa56d9c2c387608fe9ddfdafca0f9a",
"text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.",
"title": ""
},
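The record above introduces characteristic sets for RDF cardinality estimation. Below is a small, illustrative Python sketch of the basic idea, grouping subjects by the set of predicates they use and combining per-set occurrence counts to estimate the size of a star-shaped query; the data layout and function names are assumptions for this example, not the RDF-3X implementation.

```python
from collections import defaultdict

def build_characteristic_sets(triples):
    """Group subjects by the set of predicates they use; keep occurrence statistics."""
    preds_of, per_subject = defaultdict(set), defaultdict(lambda: defaultdict(int))
    for s, p, o in triples:
        preds_of[s].add(p)
        per_subject[s][p] += 1
    stats = defaultdict(lambda: [0, defaultdict(int)])  # characteristic set -> [#subjects, triples per predicate]
    for s, ps in preds_of.items():
        cs = frozenset(ps)
        stats[cs][0] += 1
        for p, c in per_subject[s].items():
            stats[cs][1][p] += c
    return stats

def estimate_star_query(stats, query_predicates):
    """Estimate how many bindings a star query (all predicates on one subject) returns."""
    qp = set(query_predicates)
    estimate = 0.0
    for cs, (n_subjects, occurrences) in stats.items():
        if qp <= cs:                                    # this characteristic set can contribute
            multiplicity = 1.0
            for p in qp:
                multiplicity *= occurrences[p] / n_subjects   # average occurrences per subject
            estimate += n_subjects * multiplicity
    return estimate

# Example usage with hypothetical triples:
# triples = [("s1", "name", "A"), ("s1", "knows", "s2"), ("s2", "name", "B")]
# estimate_star_query(build_characteristic_sets(triples), ["name", "knows"])
```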
{
"docid": "4d6a7fc4bf89fb576142f6f4a0559db9",
"text": "In this research, we propose a particular version of KNN (K Nearest Neighbor) where the similarity between feature vectors is computed considering the similarity among attributes or features as well as one among values. The task of text summarization is viewed into the binary classification task where each paragraph or sentence is classified into the essence or non-essence, and in previous works, improved results are obtained by the proposed version in the text classification and clustering. In this research, we define the similarity which considers both attributes and attribute values, modify the KNN into the version based on the similarity, and use the modified version as the approach to the text summarization task. As the benefits from this research, we may expect the more compact representation of data items and the better performance. Therefore, the goal of this research is to implement the text summarization algorithm which represents data items more compactly and provides the more reliability.",
"title": ""
},
{
"docid": "a089f48b99c192f385c287ae98f297ae",
"text": "Video object segmentation targets segmenting a specific object throughout a video sequence when given only an annotated first frame. Recent deep learning based approaches find it effective to fine-tune a general-purpose segmentation model on the annotated frame using hundreds of iterations of gradient descent. Despite the high accuracy that these methods achieve, the fine-tuning process is inefficient and fails to meet the requirements of real world applications. We propose a novel approach that uses a single forward pass to adapt the segmentation model to the appearance of a specific object. Specifically, a second meta neural network named modulator is trained to manipulate the intermediate layers of the segmentation network given limited visual and spatial information of the target object. The experiments show that our approach is 70× faster than fine-tuning approaches and achieves similar accuracy. Our model and code have been released at https://github.com/linjieyangsc/video_seg.",
"title": ""
},
{
"docid": "7b6640e2d964ef3ee2597df9eed52073",
"text": "Differential Fault Analysis (DFA), aided by sophisticated mathematical analysis techniques for ciphers and precise fault injection methodologies, has become a potent threat to cryptographic implementations. In this paper, we propose, to the best of the our knowledge, the first “DFA-aware” physical design automation methodology, that effectively mitigates the threat posed by DFA. We first develop a novel floorplan heuristic, which resists the simultaneous corruption of cipher states necessary for successful fault attack, by exploiting the fact that most fault injections are localized in practice. Our technique results in the computational complexity of the fault attack to shoot up to exhaustive search levels, making them practically infeasible. In the second part of the work, we develop a routing mechanism, which tackles more precise and costly fault injection techniques, like laser and electromagnetic guns. We propose a routing technique by integrating a specially designed ring oscillator based sensor circuit around the potential fault attack targets without incurring any performance overhead. We demonstrate the effectiveness of our technique by applying it on state of the art ciphers.",
"title": ""
},
{
"docid": "00b80ec74135b3190a50b4e0d83af17a",
"text": "Many organizations aspire to adopt agile processes to take advantage of the numerous benefits that they offer to an organization. Those benefits include, but are not limited to, quicker return on investment, better software quality, and higher customer satisfaction. To date, however, there is no structured process (at least that is published in the public domain) that guides organizations in adopting agile practices. To address this situation, we present the agile adoption framework and the innovative approach we have used to implement it. The framework consists of two components: an agile measurement index, and a four-stage process, that together guide and assist the agile adoption efforts of organizations. More specifically, the Sidky Agile Measurement Index (SAMI) encompasses five agile levels that are used to identify the agile potential of projects and organizations. The four-stage process, on the other hand, helps determine (a) whether or not organizations are ready for agile adoption, and (b) guided by their potential, what set of agile practices can and should be introduced. To help substantiate the “goodness” of the Agile Adoption Framework, we presented it to various members of the agile community, and elicited responses through questionnaires. The results of that substantiation effort are encouraging, and are also presented in this paper.",
"title": ""
},
{
"docid": "371ab18488da4e719eda8838d0d42ba8",
"text": "Research reveals dramatic differences in the ways that people from different cultures perceive the world around them. Individuals from Western cultures tend to focus on that which is object-based, categorically related, or self-relevant whereas people from Eastern cultures tend to focus more on contextual details, similarities, and group-relevant information. These different ways of perceiving the world suggest that culture operates as a lens that directs attention and filters the processing of the environment into memory. The present review describes the behavioral and neural studies exploring the contribution of culture to long-term memory and related processes. By reviewing the extant data on the role of various neural regions in memory and considering unifying frameworks such as a memory specificity approach, we identify some promising directions for future research.",
"title": ""
},
{
"docid": "eda40814ecaecbe5d15ccba49f8a0d43",
"text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem",
"title": ""
},
{
"docid": "d5017531ec03b489b565f3c517d4756e",
"text": "Layouts are important for graphic design and scene generation. We propose a novel generative adversarial network, named as LayoutGAN, that synthesizes graphic layouts by modeling semantic and geometric relations of 2D elements. The generator of LayoutGAN takes as input a set of randomly placed 2D graphic elements and uses self-attention modules to refine their semantic and geometric parameters jointly to produce a meaningful layout. Accurate alignment is critical for good layouts. We thus propose a novel differentiable wireframe rendering layer that maps the generated layout to a wireframe image, upon which a CNNbased discriminator is used to optimize the layouts in visual domain. We validate the effectiveness of LayoutGAN in various experiments including MNIST digit generation, document layout generation, clipart abstract scene generation and tangram graphic design.",
"title": ""
},
{
"docid": "74373dd009fc6285b8f43516d8e8bf2c",
"text": "Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, ∗Corresponding author Email address: hsharifzadeh@unitec.ac.nz (Hamid R. Sharifzadeh) Preprint submitted to Journal of Computers & Electrical Engineering February 15, 2016 and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria.",
"title": ""
},
{
"docid": "ed82ac5cf6cf4173fde52a25c17b86aa",
"text": "The biological process and molecular functions involved in the cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge the systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we firstly studied several typical mathematical modeling approaches of biological systems in different scales and deeply analyzed their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling were summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.",
"title": ""
},
{
"docid": "452eee7c8f199ce8ce6d89c14b08ac8f",
"text": "Interactional aerodynamics of multi-rotor flows has been studied for a quadcopter representing a generic quad tilt-rotor aircraft in hover. The objective of the present study is to investigate the effects of the separation distances between rotors, and also fuselage and wings on the performance and efficiency of multirotor systems. Three-dimensional unsteady Navier-Stokes equations are solved using a spatially 5 order accurate scheme, dual-time stepping, and the Detached Eddy Simulation turbulence model. The results show that the separation distances as well as the wings have significant effects on the vertical forces of quadroror systems in hover. Understanding interactions in multi-rotor flows would help improve the design of next generation multi-rotor drones.",
"title": ""
},
{
"docid": "86627f7ca48eda4985b979c9b137ba2a",
"text": "In this paper we present the TWitterBuonaScuola corpus (TW-BS), a novel Italian linguistic resource for Sentiment Analysis, developed with the main aim of analyzing the online debate on the controversial Italian political reform “Buona Scuola” (Good school), aimed at reorganizing the national educational and training systems. We describe the methodologies applied in the collection and annotation of data. The collection has been driven by the detection of the hashtags mainly used by the participants to the debate, while the annotation has been focused on sentiment polarity and irony, but also extended to mark the aspects of the reform that were mainly discussed in the debate. An in-depth study of the disagreement among annotators is included. We describe the collection and annotation stages, and the in-depth analysis of disagreement made with Crowdflower, a crowdsourcing annotation platform.",
"title": ""
},
{
"docid": "57d5db0feaa35543e15f2417cd4f2db5",
"text": "Images are static and lack important depth information about the underlying 3D scenes. We introduce interactive images in the context of man-made environments wherein objects are simple and regular, share various non-local relations (e.g., coplanarity, parallelism, etc.), and are often repeated. Our interactive framework creates partial scene reconstructions based on cuboid-proxies with minimal user interaction. It subsequently allows a range of intuitive image edits mimicking real-world behavior, which are otherwise difficult to achieve. Effectively, the user simply provides high-level semantic hints, while our system ensures plausible operations by conforming to the extracted non-local relations. We demonstrate our system on a range of real-world images and validate the plausibility of the results using a user study.",
"title": ""
},
{
"docid": "c0484f3055d7e7db8dfea9d4483e1e06",
"text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.",
"title": ""
}
] |
scidocsrr
|
239ab6b297dd8979ccc47661ab1c35d1
|
A Petri Nets Model for Blockchain Analysis
|
[
{
"docid": "820e40862c9caff8f041ec34a4d0e4a4",
"text": "Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin's promise of anonymity is broken as recent work shows how Bitcoin's blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user's Bitcoins with other users' coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or small anonymity sets and missing deniability. In this paper, we propose CoinParty a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By the application of threshold signatures to Bitcoin mixing, CoinParty achieves anonymity by orders of magnitude higher than related work as we quantify by analyzing transactions in the actual Bitcoin blockchain and is first among related approaches to provide plausible deniability.",
"title": ""
},
{
"docid": "05610fd0e6373291bdb4bc28cf1c691b",
"text": "In this work, we acknowledge the need for software engineers to devise specialized tools and techniques for blockchain-oriented software development. Ensuring effective testing activities, enhancing collaboration in large teams, and facilitating the development of smart contracts all appear as key factors in the future of blockchain-oriented software development.",
"title": ""
}
] |
[
{
"docid": "15de232c8daf22cf1a1592a21e1d9df3",
"text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.",
"title": ""
},
{
"docid": "afe24ba1c3f3423719a98e1a69a3dc70",
"text": "This brief presents a nonisolated multilevel linear amplifier with nonlinear component (LINC) power amplifier (PA) implemented in a standard 0.18-μm complementary metal-oxide- semiconductor process. Using a nonisolated power combiner, the overall power efficiency is increased by reducing the wasted power at the combined out-phased signal; however, the efficiency at low power still needs to be improved. To further improve the efficiency of the low-power (LP) mode, we propose a multiple-output power-level LINC PA, with load modulation implemented by switches. In addition, analysis of the proposed design on the system level as well as the circuit level was performed to optimize its performance. The measurement results demonstrate that the proposed technique maintains more than 45% power-added efficiency (PAE) for peak power at 21 dB for the high-power mode and 17 dBm for the LP mode at 600 MHz. The PAE for a 6-dB peak-to-average ratio orthogonal frequency-division multiplexing modulated signal is higher than 24% PAE in both power modes. To the authors' knowledge, the proposed output-phasing PA is the first implemented multilevel LINC PA that uses quarter-wave lines without multiple power supply sources.",
"title": ""
},
{
"docid": "a856b4fc2ec126ee3709d21ff4c3c49c",
"text": "In this work, glass fiber reinforced epoxy composites were fabricated. Epoxy resin was used as polymer matrix material and glass fiber was used as reinforcing material. The main focus of this work was to fabricate this composite material by the cheapest and easiest way. For this, hand layup method was used to fabricate glass fiber reinforced epoxy resin composites and TiO2 material was used as filler material. Six types of compositions were made with and without filler material keeping the glass fiber constant and changing the epoxy resin with respect to filler material addition. Mechanical properties such as tensile, impact, hardness, compression and flexural properties were investigated. Additionally, microscopic analysis was done. The experimental investigations show that without filler material the composites exhibit overall lower value in mechanical properties than with addition of filler material in the composites. The results also show that addition of filler material increases the mechanical properties but highest values were obtained for different filler material addition. From the obtained results, it was observed that composites filled by 15wt% of TiO2 particulate exhibited maximum tensile strength, 20wt% of TiO2 particulate exhibited maximum impact strength, 25wt% of TiO2 particulate exhibited maximum hardness value, 25wt% of TiO2 particulate exhibited maximum compressive strength, 20wt% of TiO2 particulate exhibited maximum flexural strength.",
"title": ""
},
{
"docid": "4e70a516022ee268a3e0f401e54db339",
"text": "Mango (Mangifera indica Linn.) is one of the most important tropical fruits in the world. During processing of mango, by-products such as peel and kernel are generated. The oil of mango seed kernel was extracted using Soxhlet apparatus and fatty acid composition shows that mango seed kernel oil consist of about 44–48% saturated fatty acids and 52–56% unsaturated. Stearic acid (37.73%) was the main saturated fatty acid,while oleic acid(46.22%) was the major unsaturated fatty acid in mango seed kernel oil. The specific gravity(0.9 at 40 , refractive index(1.443 at 40 , peroxide value(1.2 meq/kg), unsaponifitable matter (2.9%),free fatty acid(1.5%),saponification number(195) ,iodine number(55),melting point(30 , and total lavibond colour(25) for mango seed kernel oil was determined. Result shows that mango seed kernel oil is more stable than many other vegetable oils rich in unsaturated fatty acids. Such oils seem to be suitable for blending with vegetable oils, stearin manufacturing, confectionery industry or/and in the soap industry.",
"title": ""
},
{
"docid": "d516f7e8fde00d2e26c1f1f62a32fd03",
"text": "Smart grids have been proposed as a mechanism to modernize energy grids. In a smart grid, sensors, computers and communication networks are integrated into the power network. As the number of sensors and the ability to collect large amounts of data increases, big data analysis techniques are required to support data analysis and decision making. This paper discusses big data challenges and techniques in the context of smart grids and illustrates a practical scenario visualizing diverse data sources.",
"title": ""
},
{
"docid": "84a59daa79201364250563e21f891290",
"text": "Recently, the example-based single image spectral reconstruction from RGB images task aka spectral super-resolution was approached by means of deep learning by Galliani et al. [1]. The proposed very deep convolutional neural network (CNN) achieved superior performance on recent large benchmarks. However, Aeschbacher et al. [2] showed that comparable performance can be achieved by shallow learning method based on A+, a method introduced for image superresolution by Timofte et al. [3]. In this paper, we propose a moderately deep CNN model and substantially improve the reported performance on three spectral reconstruction standard benchmarks: ICVL, CAVE, and NUS.",
"title": ""
},
{
"docid": "2613f5af633cdd2575b4fbd79cd04120",
"text": "A widely accepted view of the human information processing system is that most of the symbol manipulation takes place in a central processor, sometimes referred to as the active memory (Neisser, 1967), working memory (Newell & Simon, 1963), operational memory (Posner, 1967), or the immediate processor (Newell, 1973). This paper is concerned with the rapid mental operations of the central processor and how they are reflected by the pattern and duration of eye fixations during a task.involving visual input. We will examine the basic operators, parameters, and control structure of the central processor as it performs such tasks as the comparison of rotated figures (Shepard & Metzler, 1971), mental arithmetic (Parkman, 1971), sentence verification (Carpenter & Just, 1975), and memory scanning (Stemberg, 1969). These tasks generally take less than 5 or 10 set to complete, and can be decomposed into very rapid mental operations, often estimated to consume between 50 to 800 msec each. The goals of this paper are to demonstrate that the locus, duration, and sequence of the eye fixations can be closely tied to the activity of the central processor, and to exploit this relation in investigating the fine structure of the processor’s activity in a number of cognitive tasks. The primary proposal is that the eye fixates the referent of the symbol currently being processed if the referent is in view. That is, the fixation may reflect what is at the “top of the stack.” If several symbols are",
"title": ""
},
{
"docid": "5357d90787090ec822d0b540d09b6c6b",
"text": "Providing accurate attendance marking system in real-time is challenging. It is tough to mark the attendance of a student in the large classroom when there are many students attending the class. Many attendance management systems have been implemented in the recent research. However, the attendance management system based on facial recognition still has issues. Thus many research have been conducted to improve system. This paper reviewed the previous works on attendance management system based on facial recognition. This article does not only provide the literature review on the earlier work or related work, but it also provides the deep analysis of Principal Component Analysis, discussion, suggestions for future work.",
"title": ""
},
{
"docid": "67bc52adf7c42c7a0ef6178ce4990e57",
"text": "Recognizing oneself as the owner of a body and the agent of actions requires specific mechanisms which have been elucidated only recently. One of these mechanisms is the monitoring of signals arising from bodily movements, i.e. the central signals which contribute to the generation of the movements and the sensory signals which arise from their execution. The congruence between these two sets of signals is a strong index for determining the experiences of ownership and agency, which are the main constituents of the experience of being an independent self. This mechanism, however, does not account from the frequent cases where an intention is generated but the corresponding action is not executed. In this paper, it is postulated that such covert actions are internally simulated by activating specific cortical networks or representations of the intended actions. This process of action simulation is also extended to the observation and the recognition of actions performed or intended by other agents. The problem of disentangling representations that pertain to self-intended actions from those that pertain to actions executed or intended by others, is a critical one for attributing actions to their respective agents. Failure to recognize one's own actions and misattribution of actions may result from pathological conditions which alter the readability of these representations.",
"title": ""
},
{
"docid": "ce41d07b369635c5b0a914d336971f8e",
"text": "In this paper, a fuzzy controller for an inverted pendulum system is presented in two stages. These stages are: investigation of fuzzy control system modeling methods and solution of the “Inverted Pendulum Problem” by using Java programming with Applets for internet based control education. In the first stage, fuzzy modeling and fuzzy control system investigation, Java programming language, classes and multithreading were introduced. In the second stage specifically, simulation of the inverted pendulum problem was developed with Java Applets and the simulation results were given. Also some stability concepts are introduced. c © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dde46ae891a0e9086e30ffd13ea45479",
"text": "This paper investigates some plausible models of evolution of industrial districts (IDs) and clusters in light of the peculiar current features of technology and technological change. An insightful explanation of the variety of possible evolution of industrial clusters is provided focusing on the concept of ‘ technological regime s’. Within this interpretative framework, the authors carried out original field studies and survey questionnaires in Italy and Taiwan to gather microeconomic evidence on the restructuring efforts and sources of competitiveness of selected smalland medium-sized enterprises (SMEs). The shift in the technological paradigm, that applies to all sectors, requires a substantial industrial reorganisation. Firms traditionally operating within industrial districts need to reorganise their knowledge linkages from a cluster-based approach to a global and broader approach. A key explanation of the success of SMEs competing in globalized high-tech industries, supported by our survey evidence, is the co-evolution of domestic and international knowledge linkages. Inter-firm and inter-institution linkages need to be built to provide local SMEs with the necessary externalities to cope with the dual challenge of knowledge creation and internationalisation. In Taiwan, this took the form of global production networks. 2003 Elsevier Science Ltd. All rights reserved. JEL classification:O32; O33; R12",
"title": ""
},
{
"docid": "84a1ccd4b32b2b557c3702178ececfc7",
"text": "Embedded systems are at the core of many security-sensitive and safety-critical applications, including automotive, industrial control systems, and critical infrastructures. Existing protection mechanisms against (software-based) malware are inflexible, too complex, expensive, or do not meet real-time requirements.\n We present TyTAN, which, to the best of our knowledge, is the first security architecture for embedded systems that provides (1) hardware-assisted strong isolation of dynamically configurable tasks and (2) real-time guarantees. We implemented TyTAN on the Intel® Siskiyou Peak embedded platform and demonstrate its efficiency and effectiveness through extensive evaluation.",
"title": ""
},
{
"docid": "f80dedfb0d0f7e5ba068e582517ac6f8",
"text": "We present a physically-based approach to grasping and manipulation of virtual objects that produces visually realistic results, addresses the problem of visual interpenetration of hand and object models, and performs force rendering for force-feedback gloves in a single framework. Our approach couples tracked hand configuration to a simulation-controlled articulated hand model using a system of linear and torsional spring-dampers. We discuss an implementation of our approach that uses a widely-available simulation tool for collision detection and response. We illustrate the resulting behavior of the virtual hand model and of grasped objects, and we show that the simulation rate is sufficient for control of current force-feedback glove designs. We also present a prototype of a system we are developing to support natural whole-hand interactions in a desktop-sized workspace.",
"title": ""
},
{
"docid": "3fd9fd52be3153fe84f2ea6319665711",
"text": "The theories of supermodular optimization and games provide a framework for the analysis of systems marked by complementarity. We summarize the principal results of these theories and indicate their usefulness by applying them to study the shift to 'modern manufacturing'. We also use them to analyze the characteristic features of the Lincoln Electric Company's strategy and structure.",
"title": ""
},
{
"docid": "0f39f88747145f730731bc8dd108b3ac",
"text": "To cope with increasing amount of cyber threats, organizations need to share cybersecurity information beyond the borders of organizations, countries, and even languages. Assorted organizations built repositories that store and provide XML-based cybersecurity information on the Internet. Among them are NVD [1], OSVDB [2], and JVN [3], and more cybersecurity information from various organizations from various countries will be available in the Internet. However, users are unaware of all of them. To advance information sharing, users need to be aware of them and be capable of identifying and locating cybersecurity information across such repositories by the parties who need that, and then obtaining the information over networks. This paper proposes a discovery mechanism, which identifies and locates sources and types of cybersecurity information and exchanges the information over networks. The mechanism uses the ontology of cybersecurity information [4] to incorporate assorted format of such information so that it can maintain future extensibility. It generates RDF-based metadata from XML-based cybersecurity information through the use of XSLT. This paper also introduces an implementation of the proposed mechanism and discusses extensibility and usability of the proposed mechanism.",
"title": ""
},
{
"docid": "3d2666ab3b786fd02bb15e81b0eaeb37",
"text": "BACKGROUND\n The analysis of nursing errors in clinical management highlighted that clinical handover plays a pivotal role in patient safety. Changes to handover including conducting handover at the bedside and the use of written handover summary sheets were subsequently implemented.\n\n\nAIM\n The aim of the study was to explore nurses' perspectives on the introduction of bedside handover and the use of written handover sheets.\n\n\nMETHOD\n Using a qualitative approach, data were obtained from six focus groups containing 30 registered and enrolled (licensed practical) nurses. Thematic analysis revealed several major themes.\n\n\nFINDINGS\n Themes identified included: bedside handover and the strengths and weaknesses; patient involvement in handover, and good communication is about good communicators. Finally, three sources of patient information and other issues were also identified as key aspects.\n\n\nCONCLUSIONS\n How bedside handover is delivered should be considered in relation to specific patient caseloads (patients with cognitive impairments), the shift (day, evening or night shift) and the model of service delivery (team versus patient allocation).\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n Flexible handover methods are implicit within clinical setting issues especially in consideration to nursing teamwork. Good communication processes continue to be fundamental for successful handover processes.",
"title": ""
},
{
"docid": "bd13f54cd08fe2626fe8de4edce49197",
"text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that re ̄ects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. # 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "d507fc48f5d2500251b72cb2ebc94d40",
"text": "We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals' physical locations over time.",
"title": ""
},
{
"docid": "2444b0ae9920e55cf0e3e329b048a2e8",
"text": "Concurrent Clean is an experimental, lazy, higher-order parallel functional programming language based on term graph rewriting. An important diierence with other languages is that in Clean graphs are manipulated and not terms. This can be used by the programmer to control communication and sharing of computation. Cyclic structures can be deened. Concurrent Clean furthermore allows to control the (parallel) order of evaluation to make eecient evaluation possible. With help of sequential annotations the default lazy evaluation can be locally changed into eager evaluation. The language enables the deenition of partially strict data structures which make a whole new class of algorithms feasible in a functional language. A powerful and fast strictness analyser is incorporated in the system. The quality of the code generated by the Clean compiler has been greatly improved such that it is one of the best code generators for a lazy functional language. Two very powerful parallel annotations enable the programmer to deene concurrent functional programs with arbitrary process topologies. Concurrent Clean is set up in such a way that the eeciency achieved for the sequential case can largely be maintained for a parallel implementation on loosely coupled parallel machine architectures.",
"title": ""
},
{
"docid": "9afdeab9abb1bfde45c6e9f922181c6b",
"text": "Aiming at the need for autonomous learning in reinforcement learning (RL), a quantitative emotion-based motivation model is proposed by introducing psychological emotional factors as the intrinsic motivation. The curiosity is used to promote or hold back agents' exploration of unknown states, the happiness index is used to determine the current state-action's happiness level, the control power is used to indicate agents' control ability over its surrounding environment, and together to adjust agents' learning preferences and behavioral patterns. To combine intrinsic emotional motivations with classic RL, two methods are proposed. The first method is to use the intrinsic emotional motivations to explore unknown environment and learn the environment transitioning model ahead of time, while the second method is to combine intrinsic emotional motivations with external rewards as the ultimate joint reward function, directly to drive agents' learning. As the result shows, in the simulation experiments in the rat foraging in maze scenario, both methods have achieved relatively good performance, compared with classic RL purely driven by external rewards.",
"title": ""
}
] |
scidocsrr
|
47648205f8e43ab86dc89736a122e34a
|
What do cMOOC participants talk about in social media?: a topic analysis of discourse in a cMOOC
|
[
{
"docid": "6f2162f883fce56eaa6bd8d0fbcedc0b",
"text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.",
"title": ""
}
] |
[
{
"docid": "c26abad7f3396faa798a74cfb23e6528",
"text": "Recent advances in seismic sensor technology, data acquisition systems, digital communications, and computer hardware and software make it possible to build reliable real-time earthquake information systems. Such systems provide a means for modern urban regions to cope effectively with the aftermath of major earthquakes and, in some cases, they may even provide warning, seconds before the arrival of seismic waves. In the long term these systems also provide basic data for mitigation strategies such as improved building codes.",
"title": ""
},
{
"docid": "a9c3a5cc9ca97f0b6d518396165faed3",
"text": "Digital images are convenient media for describ ing and storing spatial temporal spectral and physical components of information contained in a variety of domains e g aerial satellite images in remote sensing medical images in telemedicine ngerprints in forensics museum collections in art history and registration of trademarks and logos Euler number is a fundamental topological feature of an image The e ciency of computation of topo logical features of an image is critical for many digi tal imaging applications including image matching database retrieval and computer vision that re quire real time response In this paper a novel algorithm for computing the Euler number of a bi nary image based on divide and conquer paradigm is proposed which outperforms signi cantly the conventional techniques used in image processing tools The algorithm can be easily parallelized for computing the Euler number of an N N image in O N time with O N processors Using a simple architecture the proposed method can be imple mented as a special purpose VLSI chip Index Terms Digital imaging Euler number VLSI parallel processing binary image",
"title": ""
},
{
"docid": "20f05b48fa88283d649a3bcadf2ed818",
"text": "A great variety of native and introduced plant species were used as foods, medicines and raw materials by the Rumsen and Mutsun Costanoan peoples of central California. The information presented here has been abstracted from original unpublished field notes recorded during the 1920s and 1930s by John Peabody Harrington, who also directed the collection of some 500 plant specimens. The nature of Harrington’s data and their significance for California ethnobotany are described, followed by a summary of information on the ethnographic uses of each plant.",
"title": ""
},
{
"docid": "617338f7d4d7a7f87ef196af045eb8c3",
"text": "The lungs exchange air with the external environment via the pulmonary airways. Computed tomography (CT) scanning can be used to obtain detailed images of the pulmonary anatomy, including the airways. These images have been used to measure airway geometry, study airway reactivity, and guide surgical interventions. Prior to these applications, airway segmentation can be used to identify the airway lumen in the CT images. Airway tree segmentation can be performed manually by an image analyst, but the complexity of the tree makes manual segmentation tedious and extremely time-consuming. We describe a fully automatic technique for segmenting the airway tree in three-dimensional (3-D) CT images of the thorax. We use grayscale morphological reconstruction to identify candidate airways on CT slices and then reconstruct a connected 3-D airway tree. After segmentation, we estimate airway branchpoints based on connectivity changes in the reconstructed tree. Compared to manual analysis on 3-mm-thick electron-beam CT images, the automatic approach has an overall airway branch detection sensitivity of approximately 73%.",
"title": ""
},
{
"docid": "3da8cb73f3770a803ca43b8e2a694ccc",
"text": "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.",
"title": ""
},
{
"docid": "f0a7d1543bb056d7ea02c4f11a684d28",
"text": "The computer vision community has reached a point when it can start considering high-level reasoning tasks such as the \"communicative intents\" of images, or in what light an image portrays its subject. For example, an image might imply that a politician is competent, trustworthy, or energetic. We explore a variety of features for predicting these communicative intents. We study a number of facial expressions and body poses as cues for the implied nuances of the politician's personality. We also examine how the setting of an image (e.g. kitchen or hospital) influences the audience's perception of the portrayed politician. Finally, we improve the performance of an existing approach on this problem, by learning intermediate cues using convolutional neural networks. We show state of the art results on the Visual Persuasion dataset of Joo et al. [11].",
"title": ""
},
{
"docid": "49975701cebadff84d863fb7ca4f2615",
"text": "Mobile ad hoc are gaining popularity because of availability of low cost mobile devices and its ability to provide instant wireless networking capabilities where implementation of wired network is not possible or costly. MANETs are vulnerable to various types of attack because of its features like continuous changing topology, resource constraints and unavailability of any centralized infrastructure. Many denial of service type of attacks are possible in the MANET and one of these type attack is flooding attack in which malicious node sends the useless packets to consume the valuable network resources. Flooding attack is possible in all most all on demand routing protocol. In this paper we present a novel technique to mitigate the effect of RREQ flooding attack in MANET using trust estimation function in DSR on demand routing protocol.",
"title": ""
},
{
"docid": "9ade6407ce2603e27744df1b03728bfc",
"text": "We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.",
"title": ""
},
{
"docid": "30980f1bddafb5385641d2465f4f9256",
"text": "Recently, Linux container technology has been gaining attention as it promises to transform the way software is developed and deployed. The portability and ease of deployment makes Linux containers an ideal technology to be used in scientific workflow platforms. AWE/Shock is a scalable data analysis platform designed to execute data intensive scientific workflows. Recently we introduced Skyport, an extension to AWE/Shock, that uses Docker container technology to orchestrate and automate the deployment of individual workflow tasks onto the worker machines. The installation of software in independent execution environments for each task reduces complexity and offers an elegant solution to installation problems such as library version conflicts. The systematic use of isolated execution environments for workflow tasks also offers a convenient and simple mechanism to reproduce scientific results.",
"title": ""
},
{
"docid": "756acd9371f7f0c30b10b55742d93730",
"text": "Pseudo-Relevance Feedback (PRF) is an important general technique for improving retrieval effectiveness without requiring any user effort. Several state-of-the-art PRF models are based on the language modeling approach where a query language model is learned based on feedback documents. In all these models, feedback documents are represented with unigram language models smoothed with a collection language model. While collection language model-based smoothing has proven both effective and necessary in using language models for retrieval, we use axiomatic analysis to show that this smoothing scheme inherently causes the feedback model to favor frequent terms and thus violates the IDF constraint needed to ensure selection of discriminative feedback terms. To address this problem, we propose replacing collection language model-based smoothing in the feedback stage with additive smoothing, which is analytically shown to select more discriminative terms. Empirical evaluation further confirms that additive smoothing indeed significantly outperforms collection-based smoothing methods in multiple language model-based PRF models.",
"title": ""
},
{
"docid": "b4ad210acc3ee610379699d1a04e0f20",
"text": "The Nest Thermostat is a smart home automation device that aims to learn a user’s heating and cooling habits to help optimize scheduling and power usage. With its debut in 2011, Nest has proven to be such a success that Google spent $3.2B to acquire the company. However, the complexity of the infrastructure in the Nest Thermostat provides a breeding ground for security vulnerabilities similar to those found in other computer systems. To mitigate this issue, Nest signs firmware updates sent to the device, but the hardware infrastructure lacks proper protection, allowing attackers to install malicious software into the unit. Through a USB connection, we demonstrate how the firmware verification done by the Nest software stack can be bypassed, providing the means to completely alter the behavior of the unit. The compromised Nest Thermostat will then act as a beachhead to attack other nodes within the local network. Also, any information stored within the unit is now available to the attacker, who no longer has to have physical access to the device. Finally, we present a solution to smart device architects and manufacturers aiding the development and deployment of a secure hardware platform.",
"title": ""
},
{
"docid": "a6f8acae5bb5b160b78891c5d9c86fd9",
"text": "Non-computer science majors often struggle to find relevance in traditional computing curricula that tend to emphasize abstract concepts, focus on nonpractical entertainment, or rely on decontextualized settings. BlockPy, a web-based, open access Python programming environment, supports introductory programmers in a data-science context through a dual block/text programming view. The web extra at https://youtu.be/RzaOPqOpMoM illustrates BlockPy features discussed in the article.",
"title": ""
},
{
"docid": "d208033e210816d7a9454749080587d9",
"text": "Graph classification is a problem with practical applications in many different domains. Most of the existing methods take the entire graph into account when calculating graph features. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or subgraphs. In the real-world, however, graphs can be both large and noisy with discriminative patterns confined to certain regions in the graph only. In this work, we study the problem of attentional processing for graph classification. The use of attention allows us to focus on small but informative parts of the graph, avoiding noise in the rest of the graph. We present a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of “interesting” nodes. The model is equipped with an external memory component which allows it to integrate information gathered from different parts of the graph. We demonstrate the effectiveness of the model through various experiments.",
"title": ""
},
{
"docid": "3dc24285dac52753122c0f974da7b069",
"text": "Jeremy Franklin, Richard Galletly, Cat Hines, Charlotte Hogg, Tom Mann, Matthew Manning, Alex Mitchell, Ben Nelson, Raakhi Odedra, Charlotte Pope-Williams, Alice Pugh, Amandeep Rehlon, May Rostom, Emma Sinclair, Anne Wetherilt and Richard Wyatt for their comments and contributions. I would also like to thank Professors Philip Bond and Heidi Johansen-Berg and Alan Milburn and Emma Hardaker-Jones for their insights.",
"title": ""
},
{
"docid": "1053653b3584180dd6f97866c13ce40a",
"text": "• • The order of authorship on this paper is random and contributions were equal. We would like to thank Ron Burt, Jim March and Mike Tushman for many helpful suggestions. Olav Sorenson provided particularly extensive comments on this paper. We would like to acknowledge the financial support of the University of Chicago, Graduate School of Business and a grant from the Kauffman Center for Entrepreneurial Leadership. Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulties of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on an ubiquitous organizational process-aging-and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. 1 Now consider the …",
"title": ""
},
{
"docid": "4e182b30dcbc156e2237e7d1d22d5c93",
"text": "A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.",
"title": ""
},
{
"docid": "a88b2916f73dedabceda574f10a93672",
"text": "A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty. KEY WORDS—localization, mapping, visual landmarks, mobile robot",
"title": ""
},
{
"docid": "68a5b5664afe1d75811e5f0346455689",
"text": "Personality, as defined in psychology, accounts for the individual differences in users’ preferences and behaviour. It has been found that there are significant correlations between personality and users’ characteristics that are traditionally used by recommender systems ( e.g. music preferences, social media behaviour, learning styles etc.). Among the many models of personality, the Five Factor Model (FFM) appears suitable for usage in recommender systems as it can be quantitatively measured (i.e. numerical values for each of the factors, namely, openness, conscientiousness, extraversion, agreeableness and neuroticism). The acquisition of the personality factors for an observed user can be done explicitly through questionnaires or implicitly using machine learning techniques with features extracted from social media streams or mobile phone call logs. There are, although limited, a number of available datasets to use in offline recommender systems experiment. Studies have shown that personality was successful at tackling the cold-start problem, making group recommendations, addressing cross-domain preferences4 and at generating diverse recommendations. However, a number of challenges still remain.",
"title": ""
},
{
"docid": "a69600725f25e0e927f8ddeb1d30f99d",
"text": "Island conservation in the longer term Conservation of biodiversity on islands is important globally because islands are home to more than 20% of the terrestrial plant and vertebrate species in the world, within less than 5% of the global terrestrial area. Endemism on islands is a magnitude higher than on continents [1]; ten of the 35 biodiversity hotspots in the world are entirely, or largely consist of, islands [2]. Yet this diversity is threatened: over half of all recent extinctions have occurred on islands, which currently harbor over one-third of all terrestrial species facing imminent extinction [3] (Figure 1). In response to the biodiversity crisis, island conservation has been an active field of research and action. Hundreds of invasive species eradications and endangered species translocations have been successfully completed [4–6]. However, despite climate change being an increasing research focus generally, its impacts on island biodiversity are only just beginning to be investigated. For example, invasive species eradications on islands have been prioritized largely by threats to native biodiversity, eradication feasibility, economic cost, and reinvasion potential, but have never considered the threat of sea-level rise. Yet, the probability and extent of island submersion would provide a relevant metric for the longevity of long-term benefits of such eradications.",
"title": ""
},
{
"docid": "b6d2a57a6c46962a0534f2599b2de56e",
"text": "Complex conjunctions and determiners are often considered as pretokenized units in parsing. This is not always realistic, since they can be ambiguous. We propose a model for joint dependency parsing and multiword expressions identification, in which complex function words are represented as individual tokens linked with morphological dependencies. Our graphbased parser includes standard secondorder features and verbal subcategorization features derived from a syntactic lexicon.We train it on a modified version of the French Treebank enriched with morphological dependencies. It recognizes 81.79% of ADV+que conjunctions with 91.57% precision, and 82.74% of de+DET determiners with 86.70% precision.",
"title": ""
}
] |
scidocsrr
|
c1fbc7638584ca56835f40d365b99333
|
Conception, Evolution, and Application of Functional Programming Languages
|
[
{
"docid": "f2a677515866e995ff8e0e90561d7cbc",
"text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.",
"title": ""
}
] |
[
{
"docid": "58ab999df6099ae98e72a89ec2e97e9d",
"text": "We present an extensive flow-level traffic analysis of the network worm Blaster.A and of the e-mail worm Sobig.F. Based on packet-level measurements with these worms in a testbed we defined flow-level filters. We then extracted the flows that carried malicious worm traffic from AS559 (SWITCH) border router backbone traffic that we had captured in the DDoSVax project. We discuss characteristics and anomalies detected during the outbreak phases, and present an in-depth analysis of partially and completely successful Blaster infections. Detailed flow-level traffic plots of the outbreaks are given. We found a short network test of a Blaster pre-release, significant changes of various traffic parameters, backscatter effects due to non-existent hosts, ineffectiveness of certain temporary port blocking countermeasures, and a surprisingly low frequency of successful worm code transmissions due to Blaster‘s multi-stage nature. Finally, we detected many TCP packet retransmissions due to Sobig.F‘s far too greedy spreading algorithm.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "8db733045dd0689e21f35035f4545eff",
"text": "An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.",
"title": ""
},
{
"docid": "49dc0f1c63cbccf1fac793b8514cb59e",
"text": "The emergence of MIMO antennas and channel bonding in 802.11n wireless networks has resulted in a huge leap in capacity compared with legacy 802.11 systems. This leap, however, adds complexity to selecting the right transmission rate. Not only does the appropriate data rate need to be selected, but also the MIMO transmission technique (e.g., Spatial Diversity or Spatial Multiplexing), the number of streams, and the channel width. Incorporating these features into a rate adaptation (RA) solution requires a new set of rules to accurately evaluate channel conditions and select the appropriate transmission setting with minimal overhead. To address these challenges, we propose ARAMIS (Agile Rate Adaptation for MIMO Systems), a standard-compliant, closed-loop RA solution that jointly adapts rate and bandwidth. ARAMIS adapts transmission rates on a per-packet basis; we believe it is the first 802.11n RA algorithm that simultaneously adapts rate and channel width. We have implemented ARAMIS on Atheros-based devices and deployed it on our 15-node testbed. Our experiments show that ARAMIS accurately adapts to a wide variety of channel conditions with negligible overhead. Furthermore, ARAMIS outperforms existing RA algorithms in 802.11n environments with up to a 10 fold increase in throughput.",
"title": ""
},
{
"docid": "888217f316317429f80cfd278acfb8e5",
"text": "Structural planning is important for producing long sentences, which is a missing part in current language generation models. In this work, we add a planning phase in neural machine translation to control the coarse structure of output sentences. The model first generates some planner codes, then predicts real output words conditioned on them. The codes are learned to capture the coarse structure of the target sentence. In order to obtain the codes, we design an end-to-end neural network with a discretization bottleneck, which predicts the simplified part-of-speech tags of target sentences. Experiments show that the translation performance are generally improved by planning ahead. We also find that translations with different structures can be obtained by manipulating the planner codes.",
"title": ""
},
{
"docid": "5ab7e9ccf859c06a0a2056c78121ff4b",
"text": "Building Information Modelling (BIM) is an expansive knowledge domain within the Design, Construction and Operation (DCO) industry",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "f5311de600d7e50d5c9ecff5c49f7167",
"text": "Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated due to the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.",
"title": ""
},
{
"docid": "815bd98853e32bdf059277d2b0ce2e6c",
"text": "Choice of right and appropriate database is always crucial for any information system. Since database is an integral and important part, we choose to write the performance analysis of different type of databases in context to health care data. Health care database consists of Electronic Health Records. Also, various Electronic Health Record standards like HL7, openEHR and CEN EN 13606 have been defined for relational, object and object-relational databases. So far, none of the standard has been defined using object-database. In order to do so, we must compare and analyze the performance of object-database over others. In this paper, firstly we have studied the current trends in Indian Health Care with respect to medical data storage and retrieval. Next, we have compared the performance of MySQL (relational) and Db4o (object) database in terms of persistence time and storage space for a sample hospital data of 100 users.",
"title": ""
},
{
"docid": "54637f78527032fef8f3bbc7c7766199",
"text": "In this paper, we study the resource allocation and user scheduling problem for a downlink non-orthogonal multiple access network where the base station allocates spectrum and power resources to a set of users. We aim to jointly optimize the sub-channel assignment and power allocation to maximize the weighted total sum-rate while taking into account user fairness. We formulate the sub-channel allocation problem as equivalent to a many-to-many two-sided user-subchannel matching game in which the set of users and sub-channels are considered as two sets of players pursuing their own interests. We then propose a matching algorithm, which converges to a two-side exchange stable matching after a limited number of iterations. A joint solution is thus provided to solve the sub-channel assignment and power allocation problems iteratively. Simulation results show that the proposed algorithm greatly outperforms the orthogonal multiple access scheme and a previous non-orthogonal multiple access scheme.",
"title": ""
},
{
"docid": "8d197bf27af825b9972a490d3cc9934c",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "542117c3e27d15163b809a528952fb79",
"text": "Predicting the gap between taxi demand and supply in taxi booking apps is completely new and important but challenging. However, manually mining gap rule for different conditions may become impractical because of massive and sparse taxi data. Existing works unilaterally consider demand or supply, used only few simple features and verified by little data, but not predict the gap value. Meanwhile, none of them dealing with missing values. In this paper, we introduce a Double Ensemble Gradient Boosting Decision Tree Model(DEGBDT) to predict taxi gap. (1) Our approach specifically considers demand and supply to predict the gap between them. (2) Also, our method provides a greedy feature ranking and selecting method to exploit most reliable feature. (3) To deal with missing value, our model takes the lead in proposing a double ensemble method, which secondarily integrates different Gradient Boosting Decision Tree(GBDT) model at the different data sparse situation. Experiments on real large-scale dataset demonstrate that our approach can effectively predict the taxi gap than state-of-the-art methods, and shows that double ensemble method is efficacious for sparse data.",
"title": ""
},
{
"docid": "681aba7f37ae6807824c299454af5721",
"text": "Due to their rapid growth and deployment, Internet of things (IoT) devices have become a central aspect of our daily lives. However, they tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can help us secure the IoT devices. However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors. This approach is vulnerable to adversarial attacks since all observations are assumed to be benign while training the anomaly detection model. In this paper, we propose CIoTA, a lightweight framework that utilizes the blockchain concept to perform distributed and collaborative anomaly detection for devices with limited resources. CIoTA uses blockchain to incrementally update a trusted anomaly detection model via self-attestation and consensus among IoT devices. We evaluate CIoTA on our own distributed IoT simulation platform, which consists of 48 Raspberry Pis, to demonstrate CIoTA’s ability to enhance the security of each device and the security of the network as a whole.",
"title": ""
},
{
"docid": "0687cc3d9df74b2ff1dd94d55b773493",
"text": "What should I wear? We present Magic Mirror, a virtual fashion consultant, which can parse, appreciate and recommend the wearing. Magic Mirror is designed with a large display and Kinect to simulate the real mirror and interact with users in augmented reality. Internally, Magic Mirror is a practical appreciation system for automatic aesthetics-oriented clothing analysis. Specifically, we focus on the clothing collocation rather than the single one, the style (aesthetic words) rather than the visual features. We bridge the gap between the visual features and aesthetic words of clothing collocation to enable the computer to learn appreciating the clothing collocation. Finally, both object and subject evaluations verify the effectiveness of the proposed algorithm and Magic Mirror system.",
"title": ""
},
{
"docid": "cea0f4b7409729fd310024d2e9a31b71",
"text": "Relative ranging between Wireless Sensor Network (WSN) nod es is considered to be an important requirement for a number of dis tributed applications. This paper focuses on a two-way, time of flight (ToF) te chnique which achieves good accuracy in estimating the point-to-point di s ance between two wireless nodes. The underlying idea is to utilize a two-way t ime transfer approach in order to avoid the need for clock synchronization b etween the participating wireless nodes. Moreover, by employing multipl e ToF measurements, sub-clock resolution is achieved. A calibration stage is us ed to estimate the various delays that occur during a message exchange and require subtraction from the initial timed value. The calculation of the range betwee n the nodes takes place on-node making the proposed scheme suitable for distribute d systems. Care has been taken to exclude the erroneous readings from the set of m easurements that are used in the estimation of the desired range. The two-way T oF technique has been implemented on commercial off-the-self (COTS) device s without the need for additional hardware. The system has been deployed in var ous experimental locations both indoors and outdoors and the obtained result s reveal that accuracy between 1m RMS and 2.5m RMS in line-of-sight conditions over a 42m range can be achieved.",
"title": ""
},
{
"docid": "a936f3ea3a168c959c775dbb50a5faf2",
"text": "From the Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts. Address correspondence to Dr. Schmahmann, Department of Neurology, VBK 915, Massachusetts General Hospital, Fruit St., Boston, MA 02114; jschmahmann@partners.org (E-mail). Copyright 2004 American Psychiatric Publishing, Inc. Disorders of the Cerebellum: Ataxia, Dysmetria of Thought, and the Cerebellar Cognitive Affective Syndrome",
"title": ""
},
{
"docid": "6cb43a0f16b69cad9a7e5c5a528e23f5",
"text": "New substation technology, such as nonconventional instrument transformers, and a need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high-voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include Generic Object Oriented Substation Event, Simple Network Management Protocol, and Sampled Values (SVs). A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume SV data and time-critical circuit breaker tripping commands do not interact on a full-duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high-voltage switchyards.",
"title": ""
},
{
"docid": "986a2771edc62a5658c0099e5cc0a920",
"text": "Very-low-energy diets (VLEDs) and ketogenic low-carbohydrate diets (KLCDs) are two dietary strategies that have been associated with a suppression of appetite. However, the results of clinical trials investigating the effect of ketogenic diets on appetite are inconsistent. To evaluate quantitatively the effect of ketogenic diets on subjective appetite ratings, we conducted a systematic literature search and meta-analysis of studies that assessed appetite with visual analogue scales before (in energy balance) and during (while in ketosis) adherence to VLED or KLCD. Individuals were less hungry and exhibited greater fullness/satiety while adhering to VLED, and individuals adhering to KLCD were less hungry and had a reduced desire to eat. Although these absolute changes in appetite were small, they occurred within the context of energy restriction, which is known to increase appetite in obese people. Thus, the clinical benefit of a ketogenic diet is in preventing an increase in appetite, despite weight loss, although individuals may indeed feel slightly less hungry (or more full or satisfied). Ketosis appears to provide a plausible explanation for this suppression of appetite. Future studies should investigate the minimum level of ketosis required to achieve appetite suppression during ketogenic weight loss diets, as this could enable inclusion of a greater variety of healthy carbohydrate-containing foods into the diet.",
"title": ""
},
{
"docid": "1a1b1032f25203f5dc0a62bd653606b1",
"text": "The challenges of machining, particularly milling, glass fibre-reinforced polymer (GFRP) composites are their abrasiveness (which lead to excessive tool wear) and susceptible to workpiece damage when improper machining parameters are used. It is imperative that the condition of cutting tool being monitored during the machining process of GFRP composites so as to re-compensating the effect of tool wear on the machined components. Until recently, empirical data on tool wear monitoring of this material during end milling process is still limited in existing literature. Thus, this paper presents the development and evaluation of tool condition monitoring technique using measured machining force data and Adaptive Network-Based Fuzzy Inference Systems during end milling of the GFRP composites. The proposed modelling approaches employ two different data partitioning techniques in improving the predictability of machinability response. Results show that superior predictability of tool wear was observed when using feed force data for both data partitioning techniques. In particular, the ANFIS models were able to match the nonlinear relationship of tool wear and feed force highly effective compared to that of the simple power law of regression trend. This was confirmed through two statistical indices, namely r 2 and root mean square error (RMSE), performed on training as well as checking datasets. The direct contact between cutting tool, workpiece material, and the chips during machining operation imposes extreme thermal and mechanical stresses on the cutting tool. As a result, changes to the geometry, volume loss, and sharpness of the cutting tool, can occur either gradually or abruptly. These changes, which are known as tool wear, normally take place at the rates dependent upon machining conditions, workpiece material, as well as the cutting tool material or geometry. As discussed in earlier research study [1], abrasive wear on the flank face of the cutting tool has been the dominant wear mechanism that influences the tool sharpness during machining of glass fibre-reinforced polymer (GFRP) composites. On the basis of this, reduction of tool sharpness puts constraint on the dimensional accuracies and surface qualities of the composites product. Often, in-service or mechanical performance of poorly machined GFRPs degrades and under the worst circumstances, causes them rejected prior to the end applications. Similar to the case of metallic materials and their metal matrix composite counterparts, it is essential to develop accurate tool wear predictive models as monitoring its condition during machining can extend its useful life. There exists a significant body …",
"title": ""
},
{
"docid": "df20ee9b4d65e104fc090a7c2720a357",
"text": "Contemporary digital game developers offer a variety of games for the diverse tastes of their customers. Although the gaming experience often depends on one's preferences, the same may not apply to the level of their immersion. It has been argued whether the player perspective can influence the level of player's involvement with the game. The aim of this study was to research whether interacting with a game in first person perspective is more immersive than playing in the third person point of view (POV). The set up to test the theory involved participants playing a role-playing game in either mode, naming their preferred perspective, and subjectively evaluating their immersive experience. The results showed that people were more immersed in the game play when viewing the game world through the eyes of the character, regardless of their preferred perspectives.",
"title": ""
}
] |
scidocsrr
|
55ccc41d520f4f6e70af7d53cb3466c0
|
Towards Detecting Wheel-Spinning: Future Failure in Mastery Learning
|
[
{
"docid": "8528335dc5aedb2d6745e237e858a3c9",
"text": "A cognitive model is a set of production rules or skills encoded in intelligent tutors to model how students solve problems. It is usually generated by brainstorming and iterative refinement between subject experts, cognitive scientists and programmers. In this paper we propose a semi-automated method for improving a cognitive model called Learning Factors Analysis that combines a statistical model, human expertise and a combinatorial search. We use this method to evaluate an existing cognitive model and to generate and evaluate alternative models. We present improved cognitive models and make suggestions for improving the intelligent tutor based on those models.",
"title": ""
},
{
"docid": "7209596ad58da21211bfe0ceaaccc72b",
"text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.",
"title": ""
}
] |
[
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "b18ecc94c1f42567b181c49090b03d8a",
"text": "We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject’s potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "eb0b22f209c47b47eacb2c4edc5453f4",
"text": "Current road safety initiatives are approaching the limit of their effectiveness in developed countries. A paradigm shift is needed to address the preventable deaths of thousands on our roads. Previous systems have focused on one or two aspects of driving: environmental sensing, vehicle dynamics or driver monitoring. Our approach is to consider the driver and the vehicle as part of a combined system, operating within the road environment. A driver assistance system is implemented that is not only responsive to the road environment and the driver’s actions but also designed to correlate the driver’s eye gaze with road events to determine the driver’s observations. Driver observation monitoring enables an immediate in-vehicle system able to detect and act on driver inattentiveness, providing the precious seconds for an inattentive human driver to react. We present a prototype system capable of estimating the driver’s observations and detecting driver inattentiveness. Due to the “look but not see” case it is not possible to prove that a road event has been observed by the driver. We show, however, that it is possible to detect missed road events and warn the driver appropriately.",
"title": ""
},
{
"docid": "7641d1576250ed1a7d559cc1ad5ee439",
"text": "Considerados como la base evolutiva vertebrada tras su radiación adaptativa en el Devónico, los peces constituyen en la actualidad el grupo más exitoso y diversificado de vertebrados. Como grupo, este conjunto heterogéneo de organismos representa una aparente encrucijada entre la respuesta inmunitaria innata y la aparición de una respuesta inmunitaria adaptativa. La mayoría de órganos inmunitarios de los mamíferos tienen sus homólogos en los peces. Sin embargo, su eventual menor complejidad estructural podría potencialmente limitar la capacidad para generar una respuesta inmunitaria completamente funcional frente a la invasión de patógenos. Se discute aquí la capacidad de los peces para generar respuestas inmunitarias exitosas, teniendo en cuenta la robustez aparente de la respuesta innata de los peces, en comparación con la observada en vertebrados superiores.",
"title": ""
},
{
"docid": "40315e3e58cd1a4ffe61cc0b66618a5a",
"text": "This paper presents abstract layout techniques for a variety of FPGA switch block architectures. We evaluate the relative density of subset, universal, and Wilton switch block architectures. For subset switch blocks of small size, we find the optimal implementations using a simple metric. We also develop a tractable heuristic that returns the optimal results for small switch blocks, and good results for large switch blocks. For switch blocks with general connectivity, we develop a representation and a layout evaluation technique. We use these techniques to compare a variety of small switch blocks. We find that the traditional Xilinx-style, subset switch block is superior to the other proposed architectures. Finally, we have hand-designed some small switch blocks to confirm our results.",
"title": ""
},
{
"docid": "309dee96492cf45ed2887701b27ad3ee",
"text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of the software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of a improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.",
"title": ""
},
{
"docid": "e749b355c41ca254a0ee249d7c4e9ab1",
"text": "This paper explores a framework to permit the creation of modules as part of a robot creation and combat game. We explore preliminary work that offers a design solution to generate and test robots comprised of modular components. This current implementation, which is reliant on a constraint-driven process is then assessed to indicate the expressive range of content it can create and the total number of unique combinations it can establish.",
"title": ""
},
{
"docid": "a3021314be56a795c6aa287e9701b780",
"text": "OBJECTIVE\nTo evaluate the usefulness of the Pediatric Symptom Checklist (PSC) in identifying behavioral problems in low-income, Mexican American children.\n\n\nDESIGN\nA cross-sectional study design was used to examine the PSC as a screening test, with the Child Behavior Checklist (CBCL) as the criterion standard.\n\n\nSETTING\nThe study was conducted at a health center in a diverse low-income community. Patients Eligible patients were children and adolescents, 4 to 16 years of age, who were seen for nonemergent, well-child care. Of 253 eligible children during a 9-month study period, 210 agreed to participate in the study. There was a 100% completion rate of the questionnaires. The average age of the children was 7.5 years, and 45% were female. Ninety-five percent of patients were of Hispanic descent (Mexican American); 86% of families spoke only Spanish. Socioeconomic status was low (more than three fourths of families earned <$20 000 annually).\n\n\nRESULTS\nThe CBCL Total scale determined that 27 (13%) of the children had clinical levels of behavioral problems. With a cutoff score of 24, the PSC screened 2 (1%) of the 210 children as positive for behavioral problems. Using the CBCL as the criterion standard, the PSC sensitivity was 7.4%, and the specificity was 100%. Receiver operator characteristic analysis determined that a PSC cutoff score of 12 most correctly classified children with and without behavioral problems (sensitivity, 0.74; specificity, 0.94).\n\n\nCONCLUSIONS\nWhen using the PSC, a new cutoff score of 12 for clinical significance should be considered if screening low-income, Mexican American children for behavioral problems. Additional study is indicated to determine the causes of the PSC's apparently lower sensitivity in Mexican American populations.",
"title": ""
},
{
"docid": "6b6805fa87d31f374a1db8da8acc2163",
"text": "BACKGROUND\nWhile Web-based interventions can be efficacious, engaging a target population's attention remains challenging. We argue that strategies to draw such a population's attention should be tailored to meet its needs. Increasing user engagement in online suicide intervention development requires feedback from this group to prevent people who have suicide ideation from seeking treatment.\n\n\nOBJECTIVE\nThe goal of this study was to solicit feedback on the acceptability of the content of messaging from social media users with suicide ideation. To overcome the common concern of lack of engagement in online interventions and to ensure effective learning from the message, this research employs a customized design of both content and length of the message.\n\n\nMETHODS\nIn study 1, 17 participants suffering from suicide ideation were recruited. The first (n=8) group conversed with a professional suicide intervention doctor about its attitudes and suggestions for a direct message intervention. To ensure the reliability and consistency of the result, an identical interview was conducted for the second group (n=9). Based on the collected data, questionnaires about this intervention were formed. Study 2 recruited 4222 microblog users with suicide ideation via the Internet.\n\n\nRESULTS\nThe results of the group interviews in study 1 yielded little difference regarding the interview results; this difference may relate to the 2 groups' varied perceptions of direct message design. However, most participants reported that they would be most drawn to an intervention where they knew that the account was reliable. Out of 4222 microblog users, we received responses from 725 with completed questionnaires; 78.62% (570/725) participants were not opposed to online suicide intervention and they valued the link for extra suicide intervention information as long as the account appeared to be trustworthy. Their attitudes toward the intervention and the account were similar to those from study 1, and 3 important elements were found pertaining to the direct message: reliability of account name, brevity of the message, and details of the phone numbers of psychological intervention centers and psychological assessment.\n\n\nCONCLUSIONS\nThis paper proposed strategies for engaging target populations in online suicide interventions.",
"title": ""
},
{
"docid": "6d699c8c41db2bd702002765b0342a31",
"text": "This paper aims to describe different approaches for studying the overall diet with advantages and limitations. Studies of the overall diet have emerged because the relationship between dietary intake and health is very complex with all kinds of interactions. These cannot be captured well by studying single dietary components. Three main approaches to study the overall diet can be distinguished. The first method is researcher-defined scores or indices of diet quality. These are usually based on guidelines for a healthy diet or on diets known to be healthy. The second approach, using principal component or cluster analysis, is driven by the underlying dietary data. In principal component analysis, scales are derived based on the underlying relationships between food groups, whereas in cluster analysis, subgroups of the population are created with people that cluster together based on their dietary intake. A third approach includes methods that are driven by a combination of biological pathways and the underlying dietary data. Reduced rank regression defines linear combinations of food intakes that maximally explain nutrient intakes or intermediate markers of disease. Decision tree analysis identifies subgroups of a population whose members share dietary characteristics that influence (intermediate markers of) disease. It is concluded that all approaches have advantages and limitations and essentially answer different questions. The third approach is still more in an exploration phase, but seems to have great potential with complementary value. More insight into the utility of conducting studies on the overall diet can be gained if more attention is given to methodological issues.",
"title": ""
},
{
"docid": "8d56921f91355737bec7f2c281f15f10",
"text": "We present an upper-body exoskeleton for rehabilitation, called Harmony, that provides natural coordinated motions on the shoulder with a wide range of motion, and force and impedance controllability. The exoskeleton consists of an anatomical shoulder mechanism with five active degrees of freedom, and one degree of freedom elbow and wrist mechanisms powered by series elastic actuators. The dynamic model of the exoskeleton is formulated using a recursive Newton–Euler algorithm with spatial dynamics representation. A baseline control algorithm is developed to achieve dynamic transparency and scapulohumeral rhythm assistance, and the coupled stability of the robot–human system at the baseline control is investigated. Experiments were conducted to evaluate the kinematic and dynamic characteristics of the exoskeleton. The results show that the exoskeleton exhibits good kinematic compatibility to the human body with a wide range of motion and performs task-space force and impedance control behaviors reliably.",
"title": ""
},
{
"docid": "1f27caaaeae8c82db6a677f66f2dee74",
"text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"title": ""
},
{
"docid": "6987e20daf52bcf25afe6a7f0a95a730",
"text": "Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance thereby favoring better sparsities and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary, and uses it to remove aliasing and noise in one step, and subsequently restores and fills-in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.",
"title": ""
},
{
"docid": "c55ddf94419271b6eed9358684750ca4",
"text": "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard technique for pruning weights naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the lottery ticket hypothesis: dense, randomly-initialized feed-forward networks contain subnetworks (winning tickets) that—when trained in isolation—arrive at comparable test accuracy in a comparable number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Furthermore, the winning tickets we find above that size learn faster than the original network and exhibit higher test accuracy.",
"title": ""
},
{
"docid": "7ca7bca5a704681e8b8c7d213c6ad990",
"text": "Three experiments in naming Chinese characters are presented here to address the relationships between character frequency, consistency, and regularity effects in Chinese character naming. Significant interactions between character consistency and frequency were found across the three experiments, regardless of whether the phonetic radical of the phonogram is a legitimate character in its own right or not. These findings suggest that the phonological information embedded in Chinese characters has an influence upon the naming process of Chinese characters. Furthermore, phonetic radicals exist as computation units mainly because they are structures occurring systematically within Chinese characters, not because they can function as recognized, freestanding characters. On the other hand, the significant interaction between regularity and consistency found in the first experiment suggests that these two factors affect Chinese character naming in different ways. These findings are accounted for within interactive activation frameworks and a connectionist model.",
"title": ""
},
{
"docid": "41b7e610e0aa638052f71af1902e92d5",
"text": "This work investigates how social bots can phish employees of organizations, and thus endanger corporate network security. Current literature mostly focuses on traditional phishing methods (through e-mail, phone calls, and USB sticks). We address the serious organizational threats and security risks caused by phishing through online social media, specifically through Twitter. This paper first provides a review of current work. It then describes our experimental development, in which we created and deployed eight social bots on Twitter, each associated with one specific subject. For a period of four weeks, each bot published tweets about its subject and followed people with similar interests. In the final two weeks, our experiment showed that 437 unique users could have been phished, 33 of which visited our website through the network of an organization. Without revealing any sensitive or real data, the paper analyses some findings of this experiment and addresses further plans for research in this area.",
"title": ""
},
{
"docid": "b1b18ffff0f9efdef25dd15099139b7e",
"text": "This paper presents a fast and accurate alignment method for polyphonic symbolic music signals. It is known that to accurately align piano performances, methods using the voice structure are needed. However, such methods typically have high computational cost and they are applicable only when prior voice information is given. It is pointed out that alignment errors are typically accompanied by performance errors in the aligned signal. This suggests the possibility of correcting (or realigning) preliminary results by a fast (but not-so-accurate) alignment method with a refined method applied to limited segments of aligned signals, to save the computational cost. To realise this, we develop a method for detecting performance errors and a realignment method that works fast and accurately in local regions around performance errors. To remove the dependence on prior voice information, voice separation is performed to the reference signal in the local regions. By applying our method to results obtained by previously proposed hidden Markov models, the highest accuracies are achieved with short computation time. Our source code is published in the accompanying web page, together with a user interface to examine and correct alignment results.",
"title": ""
},
{
"docid": "1bfe0c412abc11eeb664ad741a4239fa",
"text": "The Medium Access Control (MAC) protocol through which mobile stations can share a common broadcast channel is essential in an ad-hoc network. Due to the existence of hidden terminal problem, partially-connected network topology and lack of central administration, existing popular MAC protocols like IEEE 802.11 Distributed Foundation Wireless Medium Access Control (DFWMAC) [1] may lead to \"capture\" effects which means that some stations grab the shared channel and other stations suffer from starvation. This is also known as the \"fairness problem\". This paper reviews some related work in the literature and proposes a general approach to address the problem. This paper borrows the idea of fair queueing from wireline networks and defines the \"fairness index\" for ad-hoc network to quantify the fairness, so that the goal of achieving fairness becomes equivalent to minimizing the fairness index. Then this paper proposes a different backoff scheme for IEEE 802.11 DFWMAC, instead of the original binary exponential backoff scheme. Simulation results show that the new backoff scheme can achieve far better fairness without loss of simplicity.",
"title": ""
},
{
"docid": "7dd3183ee59b800f3391f893d3578d64",
"text": "This paper reports on a bio-inspired angular accelerometer based on a two-mask microfluidic process using a PDMS mold. The sensor is inspired by the semicircular canals in mammalian vestibular systems and pairs a fluid-filled microtorus with a thermal detection principle based on thermal convection. With inherent linear acceleration insensitivity, the sensor features a sensitivity of 29.8μV/deg/s2=1.7mV/rad/s2, a dynamic range of 14,000deg/s2 and a detection limit of ~20deg/s2.",
"title": ""
},
{
"docid": "81e49c8763f390e4b86968ff91214b5a",
"text": "Choreographies allow business and service architects to specify with a global perspective the requirements of applications built over distributed and interacting software entities. While being a standard for the abstract specification of business workflows and collaboration between services, the Business Process Modeling Notation (BPMN) has only been recently extended into BPMN 2.0 to support an interaction model of choreography, which, as opposed to interconnected interface models, is better suited to top-down development processes. An important issue with choreographies is real-izability, i.e., whether peers obtained via projection from a choreography interact as prescribed in the choreography requirements. In this work, we propose a realizability checking approach for BPMN 2.0 choreographies. Our approach is formally grounded on a model transformation into the LOTOS NT process algebra and the use of equivalence checking. It is also completely tool-supported through interaction with the Eclipse BPMN 2.0 editor and the CADP process algebraic toolbox.",
"title": ""
}
] |
scidocsrr
|
14ff962a331c1ac83e33763d4933a3c2
|
Things rank and gross in nature: a review and synthesis of moral disgust.
|
[
{
"docid": "0f3a795be7101977171a9232e4f98bf4",
"text": "Emotions are universally recognized from facial expressions--or so it has been claimed. To support that claim, research has been carried out in various modern cultures and in cultures relatively isolated from Western influence. A review of the methods used in that research raises questions of its ecological, convergent, and internal validity. Forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic. When they are altered, less supportive or nonsupportive results occur. When they are combined, these method factors may help to shape the results. Facial expressions and emotion labels are probably associated, but the association may vary with culture and is loose enough to be consistent with various alternative accounts, 8 of which are discussed.",
"title": ""
},
{
"docid": "f6ba57b277beb545ad9b396404cd56b9",
"text": "The orbitofrontal cortex contains the secondary taste cortex, in which the reward value of taste is represented. It also contains the secondary and tertiary olfactory cortical areas, in which information about the identity and also about the reward value of odours is represented. The orbitofrontal cortex also receives information about the sight of objects from the temporal lobe cortical visual areas, and neurons in it learn and reverse the visual stimulus to which they respond when the association of the visual stimulus with a primary reinforcing stimulus (such as taste) is reversed. This is an example of stimulus-reinforcement association learning, and is a type of stimulus-stimulus association learning. More generally, the stimulus might be a visual or olfactory stimulus, and the primary (unlearned) positive or negative reinforcer a taste or touch. A somatosensory input is revealed by neurons that respond to the texture of food in the mouth, including a population that responds to the mouth feel of fat. In complementary neuroimaging studies in humans, it is being found that areas of the orbitofrontal cortex are activated by pleasant touch, by painful touch, by taste, by smell, and by more abstract reinforcers such as winning or losing money. Damage to the orbitofrontal cortex can impair the learning and reversal of stimulus-reinforcement associations, and thus the correction of behavioural responses when there are no longer appropriate because previous reinforcement contingencies change. The information which reaches the orbitofrontal cortex for these functions includes information about faces, and damage to the orbitofrontal cortex can impair face (and voice) expression identification. This evidence thus shows that the orbitofrontal cortex is involved in decoding and representing some primary reinforcers such as taste and touch; in learning and reversing associations of visual and other stimuli to these primary reinforcers; and in controlling and correcting reward-related and punishment-related behavior, and thus in emotion. The approach described here is aimed at providing a fundamental understanding of how the orbitofrontal cortex actually functions, and thus in how it is involved in motivational behavior such as feeding and drinking, in emotional behavior, and in social behavior.",
"title": ""
}
] |
[
{
"docid": "d7b5b38e73ca4c58c3b104c926ada90a",
"text": "OBJECTIVE\nTo evaluate cyclosporine 0.1% ophthalmic emulsion over a 1- to 3-year period in moderate to severe dry eye disease patients.\n\n\nDESIGN\nNonrandomized, multicenter, open-label clinical trial extending 2 ophthalmic cyclosporine phase III clinical trials.\n\n\nPARTICIPANTS\nFour hundred twelve patients previously dosed for 6 to 12 months with cyclosporine 0.05% or 0.1% in prior phase III trials.\n\n\nINTERVENTION\nPatients instilled ophthalmic cyclosporine 0.1% twice daily into both eyes for up to 3 consecutive 12-month extension periods.\n\n\nMAIN OUTCOME MEASURES\nCorneal staining, Schirmer tests, and symptom severity assessments were conducted during the first 12-month extension, with a patient survey during the second 12-month extension. Biomicroscopy and visual acuity (VA) examinations, intraocular pressure (IOP) measurements, and adverse effects queries occurred at 6-month intervals.\n\n\nRESULTS\nMean duration of treatment was 19.8 months. Improvements in objective and subjective measures of dry eye disease were modest, probably because of prior treatment with cyclosporine. Most survey respondents said their symptoms began to resolve in the first 3 months of cyclosporine treatment during the previous phase III clinical trials. At study exit, VA decreased in 12.6% (93/738) and increased in 5.4% (40/738) of eyes by > or =2 lines; severity of biomicroscopy findings increased in 3.4% (chemosis; 26/760), 7.2% (conjunctival hyperemia; 55/760), or 8.5% (tear film debris; 64/756) of eyes; and mean IOP increased 0.18 mmHg relative to baseline. The most common treatment-related adverse events were burning (10.9% of patients [45/412]), stinging (3.9% [16/412]), and conjunctival hyperemia (3.4% [14/412]). No serious treatment-related adverse events occurred. Most patients (95.2% [140/147]) said they would continue cyclosporine therapy; 97.9% (143/146) would recommend it to other dry eye patients.\n\n\nCONCLUSIONS\nTherapy of chronic dry eye disease with cyclosporine 0.1% ophthalmic emulsion for 1 to 3 years was safe, well tolerated, and not associated with systemic side effects. The results supplement the safety record of the commercially available cyclosporine 0.05% ophthalmic emulsion.",
"title": ""
},
{
"docid": "8969ec0fd5a1bf7f49d35d3b0c9bef50",
"text": "On the morning of September 11, 2001, the United States and the Western world entered into a new era – one in which large scale terrorist acts are to be expected. The impacts of the new era will challenge supply chain managers to adjust relations with suppliers and customers, contend with transportation difficulties and amend inventory management strategies. This paper looks at the twin corporate challenges of (i) preparing to deal with the aftermath of terrorist attacks and (ii) operating under heightened security. The first challenge involves setting certain operational redundancies. The second means less reliable lead times and less certain demand scenarios. In addition, the paper looks at how companies should organize to meet those challenges efficiently and suggests a new public-private partnership. While the paper is focused on the US, it has worldwide implications.",
"title": ""
},
{
"docid": "e9408e07cae42790c23322467778e409",
"text": "We present an atomic-scale teleoperation system that uses a head-mounted display and force-feedback manipulator arm for a user interface and a Scanning Tunneling Microscope (STM) as a sensor and effector. The system approximates presence at the atomic scale, placing the scientist on the surface, in control, w h i l e the experiment is happening. A scientist using the Nanomanipulator can view incoming STM data, feel the surface, and modify the surface (using voltage pulses) in real time. The Nanomanipulator has been used to study the effects of bias pulse duration on the creation of gold mounds. We intend to use the system to make controlled modifications to silicon surfaces. CR Categories: C.3 (Special-purpose and application-based systems), 1.3.7 (Virtual reality), J.2 (Computer Applications Physical Sciences)",
"title": ""
},
{
"docid": "b5788c52127d2ef06df428d758f1a225",
"text": "Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> (typically, <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is small and is equal to <inline-formula> <tex-math notation=\"LaTeX\">$ W$ </tex-math></inline-formula>, e.g., <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is 5 or 7). Generally, the size of the filter is equal to the size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> of the input patch. We argue that the representational ability of equal-size strategy is not strong enough. To overcome the drawback, we propose to use subpatch filter whose spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> is smaller than <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula>. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> and is aimed at extracting features from spatial domain. The second one is of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ 1\\times 1 $ </tex-math></inline-formula> and is used for strengthening the connection between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in spatial domain. These subpatch networks form a new network called the cascaded subpatch network (CSNet). The feature layer generated by CSNet is called the <italic>csconv</italic> layer. For the whole input image, we construct a deep neural network by stacking a sequence of <italic>csconv</italic> layers. Experimental results on five benchmark data sets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 data set without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 data set.",
"title": ""
},
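The subpatch filter described in the record above amounts to an h×w spatial convolution followed by a 1×1 convolution, cascaded until the spatial output shrinks to a single neuron. The PyTorch sketch below is a reconstruction from the abstract alone, not the authors' code; the channel widths, kernel sizes, activations and number of stages are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SubpatchFilter(nn.Module):
    """An h x w spatial convolution followed by a 1 x 1 convolution,
    the two-step filter described for the cascaded subpatch network."""
    def __init__(self, in_ch, mid_ch, out_ch, h=3, w=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, mid_ch, kernel_size=(h, w), padding=0)
        self.pointwise = nn.Conv2d(mid_ch, out_ch, kernel_size=1)  # 1x1 channel mixing
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.act(self.spatial(x))))

# Cascade subpatch filters until the spatial size shrinks to 1x1
# (channel widths here are placeholders, not the paper's configuration).
csconv = nn.Sequential(
    SubpatchFilter(3, 32, 64, h=3, w=3),     # e.g. a 7x7 patch -> 5x5
    SubpatchFilter(64, 64, 128, h=3, w=3),   # 5x5 -> 3x3
    SubpatchFilter(128, 128, 192, h=3, w=3)  # 3x3 -> 1x1
)
out = csconv(torch.randn(1, 3, 7, 7))
print(out.shape)  # torch.Size([1, 192, 1, 1])
```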
{
"docid": "98f1e9888b9b6f17dd91153b906c0569",
"text": "Irumban puli (Averrhoa bilimbi) is commonly used as a traditional remedy in the state of Kerala. Freshly made concentrated juice has a very high oxalic acid content and consumption carries a high risk of developing acute renal failure (ARF) by deposition of calcium oxalate crystals in renal tubules. Acute oxalate nephropathy (AON) due to secondary oxalosis after consumption of Irumban puli juice is uncommon. AON due to A. bilimbi has not been reported before. We present a series of ten patients from five hospitals in the State of Kerala who developed ARF after intake of I. puli fruit juice. Seven patients needed hemodialysis whereas the other three improved with conservative management.",
"title": ""
},
{
"docid": "707a31c60288fc2873bb37544bb83edf",
"text": "The game of Go has a long history in East Asian countries, but the field of Computer Go has yet to catch up to humans until the past couple of years. While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board, supervised learning, training on a data set of 53,000 professional games, and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed the skill level of intermediate amateurs simply using supervised learning. Further training and implementation of non-rectangular convolutions and reinforcement learning will likely increase this skill level much further.",
"title": ""
},
{
"docid": "787377fc8e1f9da5ec2b6ea77bcc0725",
"text": "We show that the counting class LWPP [8] remains unchanged even if one allows a polynomial number of gap values rather than one. On the other hand, we show that it is impossible to improve this from polynomially many gap values to a superpolynomial number of gap values by relativizable proof techniques. The first of these results implies that the Legitimate Deck Problem (from the study of graph reconstruction) is in LWPP (and thus low for PP, i.e., PPLegitimate Deck = PP) if the weakened version of the Reconstruction Conjecture holds in which the number of nonisomorphic preimages is assumed merely to be polynomially bounded. This strengthens the 1992 result of Köbler, Schöning, and Torán [15] that the Legitimate Deck Problem is in LWPP if the Reconstruction Conjecture holds, and provides strengthened evidence that the Legitimate Deck Problem is not NP-hard. We additionally show on the one hand that our main LWPP robustness result also holds for WPP, and also holds even when one allows both the rejectionand acceptancegap-value targets to simultaneously be polynomial-sized lists; yet on the other hand, we show that for the #P-based analog of LWPP the behavior much differs in that, in some relativized worlds, even two target values already yield a richer class than one value does. 2012 ACM Subject Classification Theory of computation → Complexity classes",
"title": ""
},
{
"docid": "7111c220a28d7a6fab32d9ecc914c5aa",
"text": "Songbirds are one of the best-studied examples of vocal learners. Learning of both human speech and birdsong depends on hearing. Once learned, adult song in many species remains unchanging, suggesting a reduced influence of sensory experience. Recent studies have revealed, however, that adult song is not always stable, extending our understanding of the mechanisms involved in song maintenance, and their similarity to those active during song learning. Here we review some of the processes that contribute to song learning and production, with an emphasis on the role of auditory feedback. We then consider some of the possible neural substrates involved in these processes, particularly basal ganglia circuitry. Although a thorough treatment of human speech is beyond the scope of this article, we point out similarities between speech and song learning, and ways in which studies of these disparate behaviours complement each other in developing an understanding of general principles that contribute to learning and maintenance of vocal behaviour.",
"title": ""
},
{
"docid": "7c9e89cb3384a34195fd6035cd2e75a0",
"text": "Manual analysis of pedestrians and crowds is often impractical for massive datasets of surveillance videos. Automatic tracking of humans is one of the essential abilities for computerized analysis of such videos. In this keynote paper, we present two state of the art methods for automatic pedestrian tracking in videos with low and high crowd density. For videos with low density, first we detect each person using a part-based human detector. Then, we employ a global data association method based on Generalized Graphs for tracking each individual in the whole video. In videos with high crowd-density, we track individuals using a scene structured force model and crowd flow modeling. Additionally, we present an alternative approach which utilizes contextual information without the need to learn the structure of the scene. Performed evaluations show the presented methods outperform the currently available algorithms on several benchmarks.",
"title": ""
},
{
"docid": "4054713a00a9a2af6eb65f56433a943e",
"text": "The question why deep learning algorithms perform so well in practice has attracted increasing research interest. However, most of well-established approaches, such as hypothesis capacity, robustness or sparseness, have not provided complete explanations, due to the high complexity of the deep learning algorithms and their inherent randomness. In this work, we introduce a new approach – ensemble robustness – towards characterizing the generalization performance of generic deep learning algorithms. Ensemble robustness concerns robustness of the population of the hypotheses that may be output by a learning algorithm. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbation is bounded in average, or equivalently, the performance variance of the algorithm is small. Quantifying ensemble robustness of various deep learning algorithms may be difficult analytically. However, extensive simulations for seven common deep learning algorithms for different network architectures provide supporting evidence for our claims. Furthermore, our work explains the good performance of several published deep learning algorithms.",
"title": ""
},
{
"docid": "29927f1734181696965ea246df8a757a",
"text": "In this paper, we investigate lossy compression of deep neural networks (DNNs) 1 by weight quantization and lossless source coding for memory-efficient deploy2 ment. Whereas the previous work addressed non-universal scalar quantization and 3 entropy coding of DNN weights, we for the first time introduce universal DNN 4 compression by universal vector quantization and universal source coding. In 5 particular, we examine universal randomized lattice quantization of DNNs, which 6 randomizes DNN weights by uniform random dithering before lattice quantization 7 and can perform near-optimally on any source without relying on knowledge of 8 its probability distribution. Moreover, we present a method of fine-tuning vector 9 quantized DNNs to recover the performance loss after quantization. Our experi10 mental results show that the proposed universal DNN compression scheme com11 presses the 32-layer ResNet (trained on CIFAR-10) and the AlexNet (trained on 12 ImageNet) with compression ratios of 47.1 and 42.5, respectively. 13",
"title": ""
},
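The randomized lattice quantization mentioned in the record above dithers the weights with uniform noise before quantizing. A minimal one-dimensional analogue (subtractive-dither uniform scalar quantization) is sketched below as an illustration only; the actual scheme uses lattice quantization plus universal source coding and fine-tuning, and the step size here is an arbitrary assumption.

```python
import numpy as np

def dithered_quantize(weights, step, rng):
    """Subtractive-dither uniform quantization: add uniform dither in
    [-step/2, step/2), round to the lattice, then subtract the dither at
    reconstruction (the decoder regenerates it from a shared seed).
    This is a 1-D stand-in for the paper's lattice quantization."""
    dither = rng.uniform(-step / 2, step / 2, size=weights.shape)
    indices = np.round((weights + dither) / step)   # integers that would be entropy-coded
    reconstructed = indices * step - dither         # de-quantized weights
    return indices.astype(np.int64), reconstructed

w = np.random.default_rng(0).normal(scale=0.05, size=10_000)  # stand-in for DNN weights
idx, w_hat = dithered_quantize(w, step=0.01, rng=np.random.default_rng(1))
print("mean squared error:", np.mean((w - w_hat) ** 2))        # ~ step**2 / 12
```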
{
"docid": "d13065e86b110367add085d6da2e0345",
"text": "Although compile-time optimizations generally improve program performance, degradations caused by individual techniques are to be expected. One promising research direction to overcome this problem is the development of dynamic, feedback-directed optimization orchestration algorithms, which automatically search for the combination of optimization techniques that achieves the best program performance. The challenge is to develop an orchestration algorithm that finds, in an exponential search space, a solution that is close to the best, in acceptable time. In this paper, we build such a fast and effective algorithm, called Combined Elimination (CE). The key advance of CE over existing techniques is that it takes the least tuning time (57% of the closest alternative), while achieving the same program performance. We conduct the experiments on both a Pentium IV machine and a SPARC II machine, by measuring performance of SPEC CPU2000 benchmarks under a large set of 38 GCC compiler options. Furthermore, through orchestrating a small set of optimizations causing the most degradation, we show that the performance achieved by CE is close to the upper bound obtained by an exhaustive search algorithm. The gap is less than 0.2% on average.",
"title": ""
},
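Combined Elimination, as summarized in the record above, searches the space of compiler options by switching off the optimizations that currently hurt performance. The sketch below is a simplified greedy loop reconstructed from that description (the published algorithm batches removals to save measurements, which is not shown here); `measure_runtime` is a hypothetical placeholder for compiling and timing the program under a given set of flags.

```python
def combined_elimination(flags, measure_runtime):
    """Greedy orchestration sketch: start with every optimization enabled
    and repeatedly disable the single flag whose removal gives the largest
    speedup, until no removal helps. `measure_runtime(enabled_flags)` must
    build and benchmark the program with exactly those flags enabled."""
    enabled = set(flags)
    best_time = measure_runtime(enabled)
    while True:
        candidates = []
        for f in enabled:
            t = measure_runtime(enabled - {f})
            if t < best_time:
                candidates.append((t, f))
        if not candidates:
            return enabled, best_time            # no single removal helps any more
        best_time, harmful_flag = min(candidates)  # drop the most harmful flag
        enabled.remove(harmful_flag)
```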
{
"docid": "63b04046e1136290a97f885783dda3bd",
"text": "This paper considers the design of secondary wireless mesh networks which use leased frequency channels. In a given geographic region, the available channels are individually priced and leased exclusively through a primary spectrum owner. The usage of each channel is also subject to published interference constraints so that the primary user is not adversely affected. When the network is designed and deployed, the secondary user would like to minimize the costs of using the required resources while satisfying its own traffic and interference requirements. This problem is formulated as a mixed integer optimization which gives the optimum deployment cost as a function of the secondary node positioning, routing, and frequency allocations. Because of the problem's complexity, the optimum result can only be found for small problem sizes. To accommodate more practical deployments, two algorithms are proposed and their performance is compared to solutions obtained from the optimization. The first algorithm is a greedy flow-based scheme (GFB) which iterates over the individual node flows based on solving a much simpler optimization at each step. The second algorithm (ILS) uses an iterated local search whose initial solution is based on constrained shortest path routing. Our results show that the proposed algorithms perform well for a variety of network scenarios.",
"title": ""
},
{
"docid": "aaf1aac789547c1bf2f918368b43c955",
"text": "Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g. strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture. Similar sections of music can be detected by clustering segments with similar average textures. The repetition of a sequence of music often marks a logical segment. Repeated phrases and hierarchical structures can be discovered by finding similar sequences of feature vectors within a piece of music. Structure analysis can be used to construct music summaries and to assist music browsing. Introduction Probably everyone would agree that music has structure, but most of the interesting musical information that we perceive lies hidden below the complex surface of the audio signal. From this signal, human listeners perceive vocal and instrumental lines, orchestration, rhythm, harmony, bass lines, and other features. Unfortunately, music audio signals have resisted our attempts to extract this kind of information. Researchers are making progress, but so far, computers have not come near to human levels of performance in detecting notes, processing rhythms, or identifying instruments in a typical (polyphonic) music audio texture. On a longer time scale, listeners can hear structure including the chorus and verse in songs, sections in other types of music, repetition, and other patterns. One might think that without the reliable detection and identification of short-term features such as notes and their sources, that it would be impossible to deduce any information whatsoever about even higher levels of abstraction. Surprisingly, it is possible to automatically detect a great deal of information concerning music structure. For example, it is possible to label the structure of a song as AABA, meaning that opening material (the “A” part) is repeated once, then contrasting material (the “B” part) is played, and then the opening material is played again at the end. This structural description may be deduced from low-level audio signals. Consequently, a computer might locate the “chorus” of a song without having any representation of the melody or rhythm that characterizes the chorus. Underlying almost all work in this area is the concept that structure is induced by the repetition of similar material. This is in contrast to, say, speech recognition, where there is a common understanding of words, their structure, and their meaning. A string of unique words can be understood using prior knowledge of the language. Music, however, has no language or dictionary (although there are certainly known forms and conventions). In general, structure can only arise in music through repetition or systematic transformations of some kind. Repetition implies there is some notion of similarity. Similarity can exist between two points in time (or at least two very short time intervals), similarity can exist between two sequences over longer time intervals, and similarity can exist between the longer-term statistical behaviors of acoustical features. 
Different approaches to similarity will be described. Similarity can be used to segment music: contiguous regions of similar music can be grouped together into segments. Segments can then be grouped into clusters. The segmentation of a musical work and the grouping of these segments into clusters is a form of analysis or “explanation” of the music.
Features and Similarity Measures
A variety of approaches are used to measure similarity, but it should be clear that a direct comparison of the waveform data or individual samples will not be useful. Large differences in waveforms can be imperceptible, so we need to derive features of waveform data that are more perceptually meaningful and compare these features with an appropriate measure of similarity.
Feature Vectors for Spectrum, Texture, and Pitch
Different features emphasize different aspects of the music. For example, mel-frequency cepstral coefficients (MFCCs) seem to work well when the general shape of the spectrum but not necessarily pitch information is important. MFCCs generally capture overall “texture” or timbral information (what instruments are playing in what general pitch range), but some pitch information is captured, and results depend upon the number of coefficients used as well as the underlying musical signal. When pitch is important, e.g. when searching for similar harmonic sequences, the chromagram is effective. The chromagram is based on the idea that tones separated by octaves have the same perceived value of chroma (Shepard 1964). Just as we can describe the chroma aspect of pitch, the short term frequency spectrum can be restructured into the chroma spectrum by combining energy at different octaves into just one octave. The chroma vector is a discretized version of the chroma spectrum where energy is summed into 12 log-spaced divisions of the octave corresponding to pitch classes (C, C#, D, ... B). By analogy to the spectrogram, the discrete chromagram is a sequence of chroma vectors. It should be noted that there are several variations of the chromagram. The computation typically begins with a short-term Fourier transform (STFT) which is used to compute the magnitude spectrum. There are different ways to “project” this onto the 12-element chroma vector. Each STFT bin can be mapped directly to the most appropriate chroma vector element (Bartsch and Wakefield 2001), or the STFT bin data can be interpolated or windowed to divide the bin value among two neighboring vector elements (Goto 2003a). Log magnitude values can be used to emphasize the presence of low-energy harmonics. Values can also be averaged, summed, or the vector can be computed to conserve the total energy. The chromagram can also be computed by using the Wavelet transform. Regardless of the exact details, the primary attraction of the chroma vector is that, by ignoring octaves, the vector is relatively insensitive to overall spectral energy distribution and thus to timbral variations. However, since fundamental frequencies and lower harmonics of tones feature prominently in the calculation of the chroma vector, it is quite sensitive to pitch class content, making it ideal for the detection of similar harmonic sequences in music. While MFCCs and chroma vectors can be calculated from a single short term Fourier transform, features can also be obtained from longer sequences of spectral frames. Tzanetakis and Cook (1999) use means and variances of a variety of features in a one second window.
The features include the spectral centroid, spectral rolloff, spectral flux, and RMS energy. Peeters, La Burthe, and Rodet (2002) describe “dynamic” features, which model the variation of the short term spectrum over windows of about one second. In this approach, the audio signal is passed through a bank of Mel filters. The time-varying magnitudes of these filter outputs are each analyzed by a short term Fourier transform. The resulting set of features, the Fourier coefficients from each Mel filter output, is large, so a supervised learning scheme is used to find features that maximize the mutual information between feature values and hand-labeled music structures.
Measures of Similarity
Given a feature vector such as the MFCC or chroma vector, some measure of similarity is needed. One possibility is to compute the (dis)similarity using the Euclidean distance between feature vectors. Euclidean distance will be dependent upon feature magnitude, which is often a measure of the overall music signal energy. To avoid giving more weight to the louder moments of music, feature vectors can be normalized, for example, to a mean of zero and a standard deviation of one or to a maximum element of one. Alternatively, similarity can be measured using the scalar (dot) product of the feature vectors. This measure will be larger when feature vectors have a similar direction. As with Euclidean distance, the scalar product will also vary as a function of the overall magnitude of the feature vectors. If the dot product is normalized by the feature vector magnitudes, the result is equal to the cosine of the angle between the vectors. If the feature vectors are first normalized to have a mean of zero, the cosine angle is equivalent to the correlation, another measure that has been used with success. Lu, Wang, and Zhang (Lu, Wang, and Zhang 2004) use a constant-Q transform (CQT), and found that CQT outperforms chroma and MFCC features using a cosine distance measure. They also introduce a “structure-based” distance measure that takes into account the harmonic structure of spectra to emphasize pitch similarity over timbral similarity, resulting in additional improvement in a music structure analysis task. Similarity can be calculated between individual feature vectors, as suggested above, but similarity can also be computed over a window of feature vectors. The measure suggested by Foote (1999) is vector correlation:",
"title": ""
},
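The chroma vector discussed in the record above folds STFT magnitude bins into twelve pitch classes. The sketch below implements one of the variants the text mentions (direct bin-to-pitch-class mapping with no interpolation); the window length, reference frequency and normalization are assumptions, not the authors' exact choices.

```python
import numpy as np

def chroma_vector(magnitude, sample_rate, n_fft, f_ref=261.63):
    """Fold one STFT magnitude frame into a 12-element chroma vector.
    Each bin's energy is added to the pitch class nearest to its centre
    frequency (f_ref = C4); octave information is discarded."""
    chroma = np.zeros(12)
    freqs = np.arange(len(magnitude)) * sample_rate / n_fft
    for f, m in zip(freqs, magnitude):
        if f < 20.0:                 # skip DC / sub-audio bins
            continue
        pitch_class = int(round(12 * np.log2(f / f_ref))) % 12
        chroma[pitch_class] += m
    norm = np.linalg.norm(chroma)
    return chroma / norm if norm > 0 else chroma

# A 440 Hz sine should concentrate its energy in pitch class 9 (A).
t = np.arange(4096) / 44100.0
frame = np.abs(np.fft.rfft(np.sin(2 * np.pi * 440.0 * t) * np.hanning(4096)))
print(np.argmax(chroma_vector(frame, sample_rate=44100, n_fft=4096)))  # 9
```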
{
"docid": "e864bccfa711a5e773390524cd826808",
"text": "Semantic similarity measures estimate the similarity between concepts, and play an important role in many text processing tasks. Approaches to semantic similarity in the biomedical domain can be roughly divided into knowledge based and distributional based methods. Knowledge based approaches utilize knowledge sources such as dictionaries, taxonomies, and semantic networks, and include path finding measures and intrinsic information content (IC) measures. Distributional measures utilize, in addition to a knowledge source, the distribution of concepts within a corpus to compute similarity; these include corpus IC and context vector methods. Prior evaluations of these measures in the biomedical domain showed that distributional measures outperform knowledge based path finding methods; but more recent studies suggested that intrinsic IC based measures exceed the accuracy of distributional approaches. Limitations of previous evaluations of similarity measures in the biomedical domain include their focus on the SNOMED CT ontology, and their reliance on small benchmarks not powered to detect significant differences between measure accuracy. There have been few evaluations of the relative performance of these measures on other biomedical knowledge sources such as the UMLS, and on larger, recently developed semantic similarity benchmarks. We evaluated knowledge based and corpus IC based semantic similarity measures derived from SNOMED CT, MeSH, and the UMLS on recently developed semantic similarity benchmarks. Semantic similarity measures based on the UMLS, which contains SNOMED CT and MeSH, significantly outperformed those based solely on SNOMED CT or MeSH across evaluations. Intrinsic IC based measures significantly outperformed path-based and distributional measures. We released all code required to reproduce our results and all tools developed as part of this study as open source, available under http://code.google.com/p/ytex . We provide a publicly-accessible web service to compute semantic similarity, available under http://informatics.med.yale.edu/ytex.web/ . Knowledge based semantic similarity measures are more practical to compute than distributional measures, as they do not require an external corpus. Furthermore, knowledge based measures significantly and meaningfully outperformed distributional measures on large semantic similarity benchmarks, suggesting that they are a practical alternative to distributional measures. Future evaluations of semantic similarity measures should utilize benchmarks powered to detect significant differences in measure accuracy.",
"title": ""
},
{
"docid": "a330c7ec22ab644404bbb558158e69e7",
"text": "With the advance in both hardware and software technologies, automated data generation and storage has become faster than ever. Such data is referred to as data streams. Streaming data is ubiquitous today and it is often a challenging task to store, analyze and visualize such rapid large volumes of data. Most conventional data mining techniques have to be adapted to run in a streaming environment, because of the underlying resource constraints in terms of memory and running time. Furthermore, the data stream may often show concept drift, because of which adaptation of conventional algorithms becomes more challenging. One such important conventional data mining problem is that of classification. In the classification problem, we attempt to model the class variable on the basis of one or more feature variables. While this problem has been extensively studied from a conventional mining perspective, it is a much more challenging problem in the data stream domain. In this chapter, we will re-visit the problem of classification from the data stream perspective. The techniques for this problem need to be thoroughly re-designed to address the issue of resource constraints and concept drift. This chapter reviews the state-of-the-art techniques in the literature along with their corresponding advantages and disadvantages.",
"title": ""
},
{
"docid": "887c8924466bae888efa5c7c4cbef594",
"text": "UNLABELLED\nThe importance of movement is often overlooked because it is such a natural part of human life. It is, however, crucial for a child's physical, cognitive and social development. In addition, experiences support learning and development of fundamental movement skills. The foundations of those skills are laid in early childhood and essential to encourage a physically active lifestyle. Fundamental movement skill performance can be examined with several assessment tools. The choice of a test will depend on the context in which the assessment is planned. This article compares seven assessment tools which are often referred to in European or international context. It discusses the tools' usefulness for the assessment of movement skill development in general population samples. After a brief description of each assessment tool the article focuses on contents, reliability, validity and normative data. A conclusion outline of strengths and weaknesses of all reviewed assessment tools focusing on their use in educational research settings is provided and stresses the importance of regular data collection of fundamental movement skill development among preschool children. Key pointsThis review discusses seven movement skill assessment tool's test content, reliability, validity and normative samples.The seven assessment tools all showed to be of great value. Strengths and weaknesses indicate that test choice will depend on specific purpose of test use.Further data collection should also include larger data samples of able bodied preschool children.Admitting PE specialists in assessment of fundamental movement skill performance among preschool children is recommended.The assessment tool's normative data samples would benefit from frequent movement skill performance follow-up of today's children.\n\n\nABBREVIATIONS\nMOT 4-6: Motoriktest fur vier- bis sechsjährige Kinder, M-ABC: Movement Assessment Battery for Children, PDMS: Peabody Development Scales, KTK: Körper-Koordinationtest für Kinder, TGDM: Test of Gross Motor Development, MMT: Maastrichtse Motoriektest, BOTMP: Bruininks-Oseretsky Test of Motor Proficiency. ICC: intraclass correlation coefficient, NR: not reported, GM: gross motor, LV: long version, SV: short version, LF: long form, SF: short form, STV: subtest version, SEMs: standard errors of measurement, TMQ: Total Motor Quotient, TMC: Total Motor Composite, CSSA: Comprehensive Scales of Student Abilities MSEL: Mullen Scales of Early learning: AGS Edition AUC: Areas under curve BC: Battery composite ROC: Receiver operating characteristic.",
"title": ""
},
{
"docid": "d67ee0219625f02ff7023e4d0d39e8d8",
"text": "In information retrieval, pseudo-relevance feedback (PRF) refers to a strategy for updating the query model using the top retrieved documents. PRF has been proven to be highly effective in improving the retrieval performance. In this paper, we look at the PRF task as a recommendation problem: the goal is to recommend a number of terms for a given query along with weights, such that the final weights of terms in the updated query model better reflect the terms' contributions in the query. To do so, we propose RFMF, a PRF framework based on matrix factorization which is a state-of-the-art technique in collaborative recommender systems. Our purpose is to predict the weight of terms that have not appeared in the query and matrix factorization techniques are used to predict these weights. In RFMF, we first create a matrix whose elements are computed using a weight function that shows how much a term discriminates the query or the top retrieved documents from the collection. Then, we re-estimate the created matrix using a matrix factorization technique. Finally, the query model is updated using the re-estimated matrix. RFMF is a general framework that can be employed with any retrieval model. In this paper, we implement this framework for two widely used document retrieval frameworks: language modeling and the vector space model. Extensive experiments over several TREC collections demonstrate that the RFMF framework significantly outperforms competitive baselines. These results indicate the potential of using other recommendation techniques in this task.",
"title": ""
},
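RFMF, as described in the record above, treats pseudo-relevance feedback as predicting weights for unseen terms via matrix factorization of a feedback-document-by-term weight matrix. The sketch below uses a truncated SVD as the factorization step and a toy weight matrix; the paper's actual weighting function and factorization technique are not reproduced here, so both are assumptions made for illustration.

```python
import numpy as np

def rfmf_expand(weight_matrix, terms, query_terms, rank=5, n_expansion=10):
    """weight_matrix: (n_docs x n_terms) discriminative weights computed
    from the top-retrieved (feedback) documents. Re-estimate the matrix
    with a truncated SVD, then score terms by their reconstructed mean
    weight across the feedback documents."""
    u, s, vt = np.linalg.svd(weight_matrix, full_matrices=False)
    k = min(rank, len(s))
    reconstructed = (u[:, :k] * s[:k]) @ vt[:k, :]   # low-rank re-estimate
    term_scores = reconstructed.mean(axis=0)         # aggregate over feedback docs
    ranked = sorted(zip(terms, term_scores), key=lambda p: -p[1])
    return [(t, w) for t, w in ranked if t not in query_terms][:n_expansion]

# Toy example: 4 feedback documents, 6 vocabulary terms.
terms = ["neural", "network", "deep", "music", "audio", "learning"]
m = np.array([[2.1, 1.8, 1.5, 0.0, 0.1, 1.2],
              [1.9, 2.0, 1.1, 0.0, 0.0, 1.4],
              [2.2, 1.7, 1.6, 0.1, 0.0, 1.1],
              [1.8, 1.9, 1.3, 0.0, 0.2, 1.3]])
print(rfmf_expand(m, terms, query_terms={"neural", "network"}, rank=2, n_expansion=3))
```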
{
"docid": "7a9387636f01bb462aef2d3b32627c67",
"text": "The Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), a fleet of quadrotor helicopters, has been developed as a testbed for novel algorithms that enable autonomous operation of aerial vehicles. This paper develops an autonomous vehicle trajectory tracking algorithm through cluttered environments for the STARMAC platform. A system relying on a single optimization must trade off the complexity of the planned path with the rate of update of the control input. In this paper, a trajectory tracking controller for quadrotor helicopters is developed to decouple the two problems. By accepting as inputs a path of waypoints and desired velocities, the control input can be updated frequently to accurately track the desired path, while the path planning occurs as a separate process on a slower timescale. To enable the use of planning algorithms that do not consider dynamic feasibility or provide feedforward inputs, a computationally efficient algorithm using space-indexed waypoints is presented to modify the speed profile of input paths to guarantee feasibility of the planned trajectory and minimum time traversal of the planned. The algorithm is an efficient alternative to formulating a nonlinear optimization or mixed integer program. Both indoor and outdoor flight test results are presented for path tracking on the STARMAC vehicles.",
"title": ""
},
{
"docid": "7805c8f8d951a38c82ab33728f2083f1",
"text": "There has been increasing interest in adopting BlockChain (BC), that underpins the crypto-currency Bitcoin, in Internet of Things (IoT) for security and privacy. However, BCs are computationally expensive and involve high bandwidth overhead and delays, which are not suitable for most IoT devices. This paper proposes a lightweight BC-based architecture for IoT that virtually eliminates the overheads of classic BC, while maintaining most of its security and privacy benefits. IoT devices benefit from a private immutable ledger, that acts similar to BC but is managed centrally, to optimize energy consumption. High resource devices create an overlay network to implement a publicly accessible distributed BC that ensures end-to-end security and privacy. The proposed architecture uses distributed trust to reduce the block validation processing time. We explore our approach in a smart home setting as a representative case study for broader IoT applications. Qualitative evaluation of the architecture under common threat models highlights its effectiveness in providing security and privacy for IoT applications. Simulations demonstrate that our method decreases packet and processing overhead significantly compared to the BC implementation used in Bitcoin.",
"title": ""
}
] |
scidocsrr
|
85dfc5b25224e2811a1afebea976035b
|
STOCK MARKET FORECASTING TECHNIQUES : LITERATURE SURVEY
|
[
{
"docid": "dcdc8c237961aa063f8fb307f2e1697b",
"text": "We collected data from Twitter posts about firms in the S&P 500 and analyzed their cumulative emotional valence (i.e., whether the posts contained an overall positive or negative emotional sentiment). We compared this to the average daily stock market returns of firms in the S&P 500. Our results show that the cumulative emotional valence (positive or negative) of Twitter tweets about a specific firm was significantly related to that firm's stock returns. The emotional valence of tweets from users with many followers (more than the median) had a stronger impact on same day returns, as emotion was quickly disseminated and incorporated into stock prices. In contrast, the emotional valence of tweets from users with few followers had a stronger impact on future stock returns (10-day returns).",
"title": ""
},
{
"docid": "0e1547d9724e305fe58f0365a3a1f176",
"text": "There is a growing interest in mining opinions using sentiment analysis methods from sources such as news, blogs and product reviews. Most of these methods have been developed for English and are difficult to generalize to other languages. We explore an approach utilizing state-of-the-art machine translation technology and perform sentiment analysis on the English translation of a foreign language text. Our experiments indicate that (a) entity sentiment scores obtained by our method are statistically significantly correlated across nine languages of news sources and five languages of a parallel corpus; (b) the quality of our sentiment analysis method is largely translator independent; (c) after applying certain normalization techniques, our entity sentiment scores can be used to perform meaningful cross-cultural comparisons. Introduction There is considerable and rapidly-growing interest in using sentiment analysis methods to mine opinion from news and blogs (Yi et al. 2003; Pang, Lee, & Vaithyanathan 2002; Pang & Lee 2004; Wiebe 2000; Yi & Niblack 2005). Applications include product reviews, market research, public relations, and financial modeling. Almost all existing sentiment analysis systems are designed to work in a single language, usually English. But effectively mining international sentiment requires text analysis in a variety of local languages. Although in principle sentiment analysis systems specific to each language can be built, such approaches are inherently labor intensive and complicated by the lack of linguistic resources comparable to WordNet for many languages. An attractive alternative to this approach uses existing translation programs and simply translates source documents to English before passing them to a sentiment analysis system. The primary difficulty here concerns the loss of nuance incurred during the translation process. Even state-ofthe-art language translation programs fail to translate substantial amounts of text, make serious errors on what they do translate, and reduce well-formed texts to sentence fragments. Still, we believe that translated texts are sufficient to accurately capture sentiment, particularly in sentiment analyCopyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. sis systems (such as ours) which aggregate sentiment from multiple documents. In particular, we have generalized the Lydia sentiment analysis system to monitor international opinion on a country-by-country basis by aggregating daily news data from roughly 200 international English-language papers and over 400 sources partitioned among eight other languages. Maps illustrating the results of our analysis are shown in Figure 1. From these maps we see that George Bush is mentioned the most positively in newspapers from Australia, France and Germany, and negatively in most other sources. Vladimir Putin, on the other hand, has positive sentiment in most countries, except Canada and Bolivia. Additional examples of such analysis appear on our website, www.textmap.com. Such maps are interesting to study and quite provocative, but beg the question of how meaningful the results are. Here we provide a rigorous and careful analysis of the extent to which sentiment survives the brutal process of automatic translation. Our assessment is complicated by the lack of a “gold standard” for international news sentiment. 
Instead, we rely on measuring the consistency of sentiment scores for given entities across different language sources. Previous work (Godbole, Srinivasaiah, & Skiena 2007) has demonstrated that the Lydia sentiment analysis system accurately captures notions of sentiment in English. The degree to which these judgments correlate with opinions originating from related foreign-language sources will either validate or reject our translation approach to sentiment analysis. In this paper we provide:
• Cross-language analysis across news streams – We demonstrate that statistically significant entity sentiment analysis can be performed using as little as ten days of newspapers for each of the eight foreign languages we studied (Arabic, Chinese, French, German, Italian, Japanese, Korean, and Spanish).
• Cross-language analysis across parallel corpora – Some of the differences in observed entity sentiment across news sources reflect the effects of differing content and opinion instead of interpretation error. To isolate the effects of news source variance, we performed translation analysis of a parallel corpus of European Union law. As expected, these show greater entity frequency conservation",
"title": ""
}
] |
[
{
"docid": "266114ecdd54ce1c5d5d0ec42c04ed4d",
"text": "A multiscale image registration technique is presented for the registration of medical images that contain significant levels of noise. An overview of the medical image registration problem is presented, and various registration techniques are discussed. Experiments using mean squares, normalized correlation, and mutual information optimal linear registration are presented that determine the noise levels at which registration using these techniques fails. Further experiments in which classical denoising algorithms are applied prior to registration are presented, and it is shown that registration fails in this case for significantly high levels of noise, as well. The hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [20] is presented, and accurate registration of noisy images is achieved by obtaining a hierarchical multiscale decomposition of the images and registering the resulting components. This approach enables successful registration of images that contain noise levels well beyond the level at which ordinary optimal linear registration fails. Image registration experiments demonstrate the accuracy and efficiency of the multiscale registration technique, and for all noise levels, the multiscale technique is as accurate as or more accurate than ordinary registration techniques.",
"title": ""
},
{
"docid": "ab6aec311ec139d72e6cecc6f2ce674e",
"text": "Agriculture is the most significant application area particularly in the developing countries like India. Use of information technology in agriculture can change the situation of decision making and farmers can yield in a better way. Data mining plays a crucial role for decision making on several issues related to agriculture field. This paper discussed about the role of data mining in perspective of agriculture field and also confers about several data mining techniques and their related work by several authors in context to agriculture domain. It also discusses on different data mining applications in solving the different agricultural problems. It integrates the work of various authors in one place so it is useful for researchers to get information of current scenario of data mining techniques and applications in context to agriculture field. This paper provides a survey of various data mining techniques used in agriculture which includes Artificial Neural Networks, K nearest neighbor, Decision tree, Bayesion network, Fuzzy set, Support Vector Machine and K – means[1].",
"title": ""
},
{
"docid": "c2b6708a14988e3af68ae9a6d55d8095",
"text": "Background: The Big Five are seen as stable personality traits. This study hypothesized that their measurement via self-ratings is differentially biased by participants’ emotions. The relationship between habitual emotions and personality should be mirrored in a patterned influence of emotional states upon personality scores. Methods: We experimentally induced emotional states and compared baseline Big Five scores of ninety-eight German participants (67 female; mean age 22.2) to their scores after the induction of happiness or sadness. Manipulation checks included the induced emotion’s intensity and durability. Results: The expected differential effect could be detected for neuroticism and extraversion and as a trend for agreeableness. Post-hoc analyses showed that only sadness led to increased neuroticism and decreased extraversion scores. Oppositely, happiness did not decrease neuroticism, but there was a trend for an elevation on extraversion scores. Conclusion: Results suggest a specific effect of sadness on self-reported personality traits, particularly on neuroticism. Sadness may trigger different self-concepts in susceptible people, biasing perceived personality. This bias could be minimised by tracking participants’ emotional states prior to personality measurement.",
"title": ""
},
{
"docid": "c78ef06693d0b8ae37989b5574938c90",
"text": "Relational databases have been around for many decades and are the database technology of choice for most traditional data-intensive storage and retrieval applications. Retrievals are usually accomplished using SQL, a declarative query language. Relational database systems are generally efficient unless the data contains many relationships requiring joins of large tables. Recently there has been much interest in data stores that do not use SQL exclusively, the so-called NoSQL movement. Examples are Google's BigTable and Facebook's Cassandra. This paper reports on a comparison of one such NoSQL graph database called Neo4j with a common relational database system, MySQL, for use as the underlying technology in the development of a software system to record and query data provenance information.",
"title": ""
},
{
"docid": "a18101c91c36ab1ac58bf0747747796e",
"text": "Action video game play benefits performance in an array of sensory, perceptual, and attentional tasks that go well beyond the specifics of game play [1-9]. That a training regimen may induce improvements in so many different skills is notable because the majority of studies on training-induced learning report improvements on the trained task but limited transfer to other, even closely related, tasks ([10], but see also [11-13]). Here we ask whether improved probabilistic inference may explain such broad transfer. By using a visual perceptual decision making task [14, 15], the present study shows for the first time that action video game experience does indeed improve probabilistic inference. A neural model of this task [16] establishes how changing a single parameter, namely the strength of the connections between the neural layer providing the momentary evidence and the layer integrating the evidence over time, captures improvements in action-gamers behavior. These results were established in a visual, but also in a novel auditory, task, indicating generalization across modalities. Thus, improved probabilistic inference provides a general mechanism for why action video game playing enhances performance in a wide variety of tasks. In addition, this mechanism may serve as a signature of training regimens that are likely to produce transfer of learning.",
"title": ""
},
{
"docid": "94d4dd3c1b47b10a65d0c98434d495d4",
"text": "Comparisons of chromosome X and the autosomes can illuminate differences in the histories of males and females as well as shed light on the forces of natural selection. We compared the patterns of variation in these parts of the genome using two datasets that we assembled for this study that are both genomic in scale. Three independent analyses show that around the time of the dispersal of modern humans out of Africa, chromosome X experienced much more genetic drift than is expected from the pattern on the autosomes. This is not predicted by known episodes of demographic history, and we found no similar patterns associated with the dispersals into East Asia and Europe. We conclude that a sex-biased process that reduced the female effective population size, or an episode of natural selection unusually affecting chromosome X, was associated with the founding of non-African populations.",
"title": ""
},
{
"docid": "ca1eb1dc93f420ba4ca88caca10b7c62",
"text": "BACKGROUND\nThe purpose of this study was to describe and evaluate the outcomes of breast reduction in cases of gigantomastia using a posterosuperior pedicle.\n\n\nMETHODS\nFour hundred thirty-one breast reductions were performed between 2004 and 2007. Fifty patients of 431 (11.6 percent) responded to the inclusion criteria (>1000 g of tissue removed per breast (100 breasts). The mean age was 33.2 years (range, 17 to 58 years). The average notch-to-nipple distance was 37.9 cm (range, 35 to 46 cm). The mean body mass index was 27 (range, 22 to 35 cm). The technique of the posterosuperior pedicle was used, in which the perforators from fourth anterior intercostal arteries are preserved (posterior pedicle). Results were evaluated by means of self-evaluation at 1 year postoperatively.\n\n\nRESULTS\nThe average weight resected was 1231 g (range, 1000 to 2500 g). The length of hospital stay was 2.3 days (range 2 to 4 days). Thirty seven patients evaluated their results as \"very good\" (74 percent), nine as \"good\" (18 percent), and four as \"acceptable\" (8 percent). There were no \"poor\" results. The chief complaint was insufficient breast reduction (four patients), despite the considerable improvement in their daily life (8 percent). Back pain totally resolved in 46 percent and partially (with significant improvement) in 54 percent of cases. One major and seven minor complications were recorded.\n\n\nCONCLUSIONS\nThe posterosuperior pedicle for breast reduction is a reproducible and versatile technique. The preservation of the anterior intercostal artery perforators enhances the reliability of the vascular supply to the superior pedicle.",
"title": ""
},
{
"docid": "323c9caac8b04b1531071acf74eb189b",
"text": "Many electronic feedback systems have been proposed for writing support. However, most of these systems only aim at supporting writing to communicate instead of writing to learn, as in the case of literature review writing. Trigger questions are potentially forms of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course and compared questions generated by G-Asks with human generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors (‘useful’ is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types, derived from the human supervisors’ questions and discussed how the human supervisors generate such questions from the source text. General Terms: Automatic Question Generation, Natural Language Processing, Academic Writing Support",
"title": ""
},
{
"docid": "ccbb7e753b974951bb658b63e91431bb",
"text": "In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",
"title": ""
},
{
"docid": "cd8efcf02f3a84b6cf02f72ba85de323",
"text": "There are a variety of grand challenges for text extraction in scene videos by robots and users, e.g., heterogeneous background, varied text, nonuniform illumination, arbitrary motion and poor contrast. Most previous video text detection methods are investigated with local information, i.e., within individual frames, with limited performance. In this paper, we propose a unified tracking based text detection system by learning locally and globally, which uniformly integrates detection, tracking, recognition and their interactions. In this system, scene text is first detected locally in individual frames. Second, an optimal tracking trajectory is learned and linked globally with all detection, recognition and prediction information by dynamic programming. With the tracking trajectory, final detection and tracking results are simultaneously and immediately obtained. Moreover, our proposed techniques are extensively evaluated on several public scene video text databases, and are much better than the state-of-the-art methods.",
"title": ""
},
{
"docid": "e8b91dfdf622b690a3ab6c981999c370",
"text": "In this paper, the feasibility of Substrate Integrated Waveguide (SIW) couplers, fabricated using single-layer TACONIC RF-35 dielectric substrate is investigated. The couplers have been produced employing a standard PCB process. The choice of the TACONIC RF-35 substrate as alternative to other conventional materials is motivated by its lower cost and high dielectric constant, allowing the reduction of the device size. The coupler requirements are 90-degree phase shift between the output and the coupled ports and frequency bandwidth from about 10.5 GHz to 12.5 GHz. The design and optimization of the couplers have been performed by using the software CST Microwave Studio c ©. Eight different coupler configurations have been designed and compared. The better three couplers have been fabricated and characterized. The proposed SIW directional couplers could be integrated within more complex planar circuits or utilized as stand-alone devices, because of their compact size. They exhibit good performance and could be employed in communication applications as broadcast signal distribution and as key elements for the construction of other microwave devices and systems.",
"title": ""
},
{
"docid": "71808a5c5bb2383ce510a850362651ce",
"text": "Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely.1 We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. 1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996). In this paper we explore a variant called continual learning (Ring, 1994). In continual learning we assume that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution (see Appendix A for details). We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Unfortunately, neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network We consider task agnostic future gradients, referring to gradients of the model parameters with respect to unseen data points. These can be drawn from tasks that have already been partially learned or unseen tasks.",
"title": ""
},
{
"docid": "78ba417cf2cb6a809414feefe163b710",
"text": "The product bundling problem is a challenging task in the e-Commerce domain. We propose a generative engine in order to find the bundle of products that best satisfies user requirements and, at the same time, seller needs such as the minimization of the dead stocks and the maximization of net income. The proposed system named Intelligent Bundle Suggestion and Generation (IBSAG) is designed in order to satisfy these requirements. Market Basket Analysis supports the system in user requirement elicitation task. Experimental results prove the ability of system in finding the optimal tradeoff between different and conflicting constraints.",
"title": ""
},
{
"docid": "0de38657b70acdaead3226d6ebd2f7ff",
"text": "We present the results of a parametric study devised to allow us to optimally design a patch fed planar dielectric slab waveguide extended hemi-elliptical lens antenna. The lens antenna, 11lambda times 13lambda in the lens plane and 0.6lambda thick, constructed from polystyrene and weighing only 90 g is fabricated and characterized at 28.5 GHz for both single and multiple operating configurations. The lens when optimized for single beam operation achieves 18.5 dB measured gain (85% aperture efficiency), 40deg and 4.1deg half power beam width for E plane and H plane respectively and 10% impedance bandwidth for -10 dB return loss. While for optimized for multiple beam operation it is shown that the lens can accommodate up to 9 feeds and that beam symmetry can be maintained over a scan angle of 27deg with a gain of 14.9 to 17.7 dB, and first side lobe levels of -11 to -7 dB respectively. Over the frequency range 26 to 30 GHz the lens maintains a worst case return loss of -10 dB and port to port feed isolation of better than -25 dB. Further it is shown that residual leaked energy from the structure is less than -48 dBm at 1 cm, thus making a low profile enclosure possible. We also show that by simultaneous excitation of two adjacent ports we can obtain difference patterns with null depths of up to -36 dB.",
"title": ""
},
{
"docid": "e141a1c5c221aa97db98534b339694cb",
"text": "Despite the tremendous popularity and great potential, the field of Enterprise Resource Planning (ERP) adoption and implementation is littered with remarkable failures. Though many contributing factors have been cited in the literature, we argue that the integrated nature of ERP systems, which generally requires an organization to adopt standardized business processes reflected in the design of the software, is a key factor contributing to these failures. We submit that the integration and standardization imposed by most ERP systems may not be suitable for all types of organizations and thus the ‘‘fit’’ between the characteristics of the adopting organization and the standardized business process designs embedded in the adopted ERP system affects the likelihood of implementation success or failure. In this paper, we use the structural contingency theory to identify a set of dimensions of organizational structure and ERP system characteristics that can be used to gauge the degree of fit, thus providing some insights into successful ERP implementations. Propositions are developed based on analyses regarding the success of ERP implementations in different types of organizations. These propositions also provide directions for future research that might lead to prescriptive guidelines for managers of organizations contemplating implementing ERP systems. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8fdd94eea9b888f3c936c69598d2ad2",
"text": "To reduce the high failure rate of software projects, managers need better tools to assess and manage software project risk. In order to create such tools, however, information systems researchers must first develop a better understanding of the dimensions of software project risk and how they can affect project performance. Progress in this area has been hindered by: (1) a lack of validated instruments for measuring software project risk that tap into the dimensions of risk that are seen as important by software project managers, and (2) a lack of theory to explain the linkages between various dimensions of software project risk and project performance. In this study, six dimensions of software project risk were identified and reliable and valid measures were developed for each. Guided by sociotechnical systems theory, an exploratory model was developed and tested. The results show that social subsystem risk influences technical subsystem risk, which, in turn, influences the level of project management risk, and ultimately, project performance. The implications of these findings for research and practice are discussed. Subject Areas: Sociotechnical Systems Theory, Software Project Risk, and Structural Equation Modeling. ∗The authors would like to thank the Project Management Institute’s Information Systems Special Interest Group (PMI-ISSIG) for supporting this research. We would also like to thank Georgia State University for their financial support through the PhD research grant program. The authors gratefully acknowledge Al Segars and Ed Rigdon for their insightful comments and assistance at various stages of this project. †Corresponding author.",
"title": ""
},
{
"docid": "7ced14fb638a63042d405f4ad6f65a4d",
"text": "We present <italic>smart drill-down</italic>, an operator for interactively exploring a relational table to discover and summarize “interesting” groups of tuples. Each group of tuples is described by a <italic>rule</italic> . For instance, the rule <inline-formula><tex-math notation=\"LaTeX\">$(a, b, \\star, 1000)$</tex-math><alternatives> <inline-graphic xlink:href=\"joglekar-ieq1-2685998.gif\"/></alternatives></inline-formula> tells us that there are 1,000 tuples with value <inline-formula><tex-math notation=\"LaTeX\">$a$</tex-math><alternatives> <inline-graphic xlink:href=\"joglekar-ieq2-2685998.gif\"/></alternatives></inline-formula> in the first column and <inline-formula><tex-math notation=\"LaTeX\">$b$</tex-math><alternatives> <inline-graphic xlink:href=\"joglekar-ieq3-2685998.gif\"/></alternatives></inline-formula> in the second column (and any value in the third column). Smart drill-down presents an analyst with a list of rules that together describe interesting aspects of the table. The analyst can tailor the definition of interesting, and can interactively apply smart drill-down on an existing rule to explore that part of the table. We demonstrate that the underlying optimization problems are <sc>NP-Hard</sc>, and describe an algorithm for finding the approximately optimal list of rules to display when the user uses a smart drill-down, and a dynamic sampling scheme for efficiently interacting with large tables. Finally, we perform experiments on real datasets on our experimental prototype to demonstrate the usefulness of smart drill-down and study the performance of our algorithms.",
"title": ""
},
{
"docid": "b397d82e24f527148cb46fbabda2b323",
"text": "This paper describes Illinois corn yield estimation using deep learning and another machine learning, SVR. Deep learning is a technique that has been attracting attention in recent years of machine learning, it is possible to implement using the Caffe. High accuracy estimation of crop yield is very important from the viewpoint of food security. However, since every country prepare data inhomogeneously, the implementation of the crop model in all regions is difficult. Deep learning is possible to extract important features for estimating the object from the input data, so it can be expected to reduce dependency of input data. The network model of two InnerProductLayer was the best algorithm in this study, achieving RMSE of 6.298 (standard value). This study highlights the advantages of deep learning for agricultural yield estimating.",
"title": ""
},
{
"docid": "fbb541bf964e1290854e4b5fda469225",
"text": "The combination of tendon driven robotic fingers and variable impedance actuation in the DLR hand arm system brings benefits in robustness and dynamics by enabling energy storage. Since the force measurement and motors are in the forearm the tendon path should have low friction for accurate movements and precise finger control. In this paper an enhanced generation of the Awiwi hand finger design is presented. It reduces the friction in the actuation system about 20 percent and increases the maximum fingertip force about 33 percent. A test finger was designed to evaluate different tendon couplings and to test a magnetic sensor to measure the joint position. In a next step a new finger design for DLR hand arm system has been developed. Finally, the low friction and the robustness are proven using several experiments.",
"title": ""
},
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
}
] |
scidocsrr
|
73cab38f1b2440ac4c8f02e75b9f27c3
|
A similarity-based approach for test case prioritization using historical failure data
|
[
{
"docid": "ffcc5b512d780dc13562f450e21e67de",
"text": "Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source pro- grams. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version con- trol system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to con- duct and reproduce empirical studies. Defects4J is publicly available at http://defects4j.org.",
"title": ""
},
{
"docid": "18775f382c9daa44a59875ec1257c439",
"text": "Research on software testing produces many innovative automated techniques, but because software testing is by necessity incomplete and approximate, any new technique faces the challenge of an empirical assessment. In the past, we have demonstrated scientific advance in automated unit test generation with the EVOSUITE tool by evaluating it on manually selected open-source projects or examples that represent a particular problem addressed by the underlying technique. However, demonstrating scientific advance is not necessarily the same as demonstrating practical value; even if VOSUITE worked well on the software projects we selected for evaluation, it might not scale up to the complexity of real systems. Ideally, one would use large “real-world” software systems to minimize the threats to external validity when evaluating research tools. However, neither choosing such software systems nor applying research prototypes to them are trivial tasks.\n In this article we present the results of a large experiment in unit test generation using the VOSUITE tool on 100 randomly chosen open-source projects, the 10 most popular open-source projects according to the SourceForge Web site, seven industrial projects, and 11 automatically generated software projects. The study confirms that VOSUITE can achieve good levels of branch coverage (on average, 71% per class) in practice. However, the study also exemplifies how the choice of software systems for an empirical study can influence the results of the experiments, which can serve to inform researchers to make more conscious choices in the selection of software system subjects. Furthermore, our experiments demonstrate how practical limitations interfere with scientific advances, branch coverage on an unbiased sample is affected by predominant environmental dependencies. The surprisingly large effect of such practical engineering problems in unit testing will hopefully lead to a larger appreciation of work in this area, thus supporting transfer of knowledge from software testing research to practice.",
"title": ""
}
] |
[
{
"docid": "6b9d5cbdf91d792d60621da0bb45a303",
"text": "AR systems pose potential security concerns that should be addressed before the systems become widespread.",
"title": ""
},
{
"docid": "ebf832ceef29b9e0b27e916bb67e69f3",
"text": "Virtual reality-based therapy is one of the most innovative and promising recent developments in rehabilitation technology. Virtual reality is the use of interactive simulations created with computer hardware and software to present users with opportunities to engage in environments that appear to be and feel similar to real world objects and events. Wii-Fit is considered as one of the virtual reality-based therapy. Children with Down syndrome have lower scores on balance and agility tasks than do children with other mental impairments. The purpose of this study was to examine the effect of Wii-Fit on balance in children with Down syndrome. Balance was measured by Bruininks-Oseretsky Test of Motor Proficiency for 30 children with Down syndrome. The subjects were randomly divided into two groups of equal size (control and study). They ranged in age from 10 to 13 years old and they were selected from both genders. The control group received the traditional physical therapy program. A program of three Wii-Fit games was conducted for the study group in addition to the traditional physical therapy program. The program for both groups continued for six weeks. The results revealed high significant improvement of balance in the study group (p= 0.000) when compared with that of the control group indicating that Wii-Fit games as a virtual reality-based therapy could improve balance for children with Down syndrome.",
"title": ""
},
{
"docid": "3d56f88bf8053258a12e609129237b19",
"text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.",
"title": ""
},
{
"docid": "52115901d15b2c0d75748ac6f4cf2851",
"text": "This paper presents the development of the CYBERLEGs Alpha-Prototype prosthesis, a new transfemoral prosthesis incorporating a new variable stiffness ankle actuator based on the MACCEPA architecture, a passive knee with two locking mechanisms, and an energy transfer mechanism that harvests negative work from the knee and delivers it to the ankle to assist pushoff. The CYBERLEGs Alpha-Prosthesis is part of the CYBERLEGs FP7-ICT project, which combines a prosthesis system to replace a lost limb in parallel with an exoskeleton to assist the sound leg, and sensory array to control both systems. The prosthesis attempts to produce a natural level ground walking gait that approximates the joint torques and kinematics of a non-amputee while maintaining compliant joints, which has the potential to decrease impulsive losses, and ultimately reduce the end user energy consumption. This first prototype consists of a passive knee and an active ankle which are energetically coupled to reduce the total power consumption of the device. Here we present simulations of the actuation system of the ankle and the passive behavior of the knee module with and without the energy transfer effects, the mechanical design of the prosthesis, and empirical results from testing of the A preliminary version of this paper was presented at the Wearable Robotics Workshop, Neurotechnix 2013. ∗Corresponding author Email addresses: lflynn@vub.ac.be (Louis Flynn), jgeeroms@vub.ac.be (Joost Geeroms), rjimenez@vub.ac.be (Rene Jimenez-Fabian), bram.vanderborght@vub.ac.be (Bram Vanderborght), n.vitiello@sssup.it (Nicola Vitiello), dlefeber@vub.ac.be (Dirk Lefeber) Preprint submitted to Journal of Robotics and Autonomous Systems November 30, 2014 physical device with amputee subjects.",
"title": ""
},
{
"docid": "19359356fe18c5ca4028696c145001dd",
"text": "Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply/linear add, Kulisch accumulation and tapered encodings from Gustafson’s posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9% top-1 and 0.2% top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8/38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96× the power and 1.12× the area of 8/32-bit integer multiply-add. In 16 bits, our log float multiply-add is 0.59× the power and 0.68× the area of IEEE 754 float16 fused multiply-add, maintaining the same signficand precision and dynamic range, proving useful for training ASICs as well.",
"title": ""
},
{
"docid": "5b51fb07c0c8c9317ee2c81c54ba4c60",
"text": "Aim The aim of this paper is to explore the role of values-based service for sustainable business. The two basic questions addressed are: What is ‘values-based service’? How can values create value for customers and other stakeholders? Design/ methodology/ approach This paper is based on extensive empirical studies focusing on the role of values at the corporate, country and store levels in the retail company IKEA and a comparison of the results with data from Starbucks, H&M and Body Shop. The theoretical point of departure is a business model based on the service-dominant logic (SDL) on the one hand and control through values focusing on social and environmental values forming the basis for a sustainable business. Findings Based on a comparative, inductive empirical analysis, five principles for a sustainable values-based service business were identified: (1) Strong company values drive customer value, (2) CSR as a strategy for sustainable service business, (3) Values-based service experience for co-creating value with customers, (4) Values-based service brand and communication for values resonance and (5) Values-based service leadership for living the values. A company built on an entrepreneurial business model often has the original entrepreneur’s values and leadership style as a model for future generations of leaders. However, the challenge for subsequent leaders is to develop these values and communicate what they mean today. Orginality/ value We suggest a new framework for managing values-based service to create a sustainable business based on values resonance.",
"title": ""
},
{
"docid": "121a8470fcbf121e5f4c42594c6d24fe",
"text": "Research has consistently found that school students who do not identify as self-declared completely heterosexual are at increased risk of victimization by bullying from peers. This study examined heterosexual and nonheterosexual university students' involvement in both traditional and cyber forms of bullying, as either bullies or victims. Five hundred twenty-eight first-year university students (M=19.52 years old) were surveyed about their sexual orientation and their bullying experiences over the previous 12 months. The results showed that nonheterosexual young people reported higher levels of involvement in traditional bullying, both as victims and perpetrators, in comparison to heterosexual students. In contrast, cyberbullying trends were generally found to be similar for heterosexual and nonheterosexual young people. Gender differences were also found. The implications of these results are discussed in terms of intervention and prevention of the victimization of nonheterosexual university students.",
"title": ""
},
{
"docid": "ff7c2ec1a09923262123035a72922215",
"text": "The repetitive structure of genomic DNA holds many secrets to be discovered. A systematic study of repetitive DNA on a genomic or inter-genomic scale requires extensive algorithmic support. The REPuter program described herein was designed to serve as a fundamental tool in such studies. Efficient and complete detection of various types of repeats is provided together with an evaluation of significance and interactive visualization. This article circumscribes the wide scope of repeat analysis using applications in five different areas of sequence analysis: checking fragment assemblies, searching for low copy repeats, finding unique sequences, comparing gene structures and mapping of cDNA/EST sequences.",
"title": ""
},
{
"docid": "c988dc0e9be171a5fcb555aedcdf67e3",
"text": "Online social networks, such as Facebook, are increasingly utilized by many people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is meant to be private. Yet it is possible to use learning algorithms on released data to predict private information. In this paper, we explore how to launch inference attacks using released social networking data to predict private information. We then devise three possible sanitization techniques that could be used in various situations. Then, we explore the effectiveness of these techniques and attempt to use methods of collective inference to discover sensitive attributes of the data set. We show that we can decrease the effectiveness of both local and relational classification algorithms by using the sanitization methods we described.",
"title": ""
},
{
"docid": "7579b5cb9f18e3dc296bcddc7831abc5",
"text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.",
"title": ""
},
{
"docid": "80fe141d88740955f189e8e2bf4c2d89",
"text": "Predictions concerning development, interrelations, and possible independence of working memory, inhibition, and cognitive flexibility were tested in 325 participants (roughly 30 per age from 4 to 13 years and young adults; 50% female). All were tested on the same computerized battery, designed to manipulate memory and inhibition independently and together, in steady state (single-task blocks) and during task-switching, and to be appropriate over the lifespan and for neuroimaging (fMRI). This is one of the first studies, in children or adults, to explore: (a) how memory requirements interact with spatial compatibility and (b) spatial incompatibility effects both with stimulus-specific rules (Simon task) and with higher-level, conceptual rules. Even the youngest children could hold information in mind, inhibit a dominant response, and combine those as long as the inhibition required was steady-state and the rules remained constant. Cognitive flexibility (switching between rules), even with memory demands minimized, showed a longer developmental progression, with 13-year-olds still not at adult levels. Effects elicited only in Mixed blocks with adults were found in young children even in single-task blocks; while young children could exercise inhibition in steady state it exacted a cost not seen in adults, who (unlike young children) seemed to re-set their default response when inhibition of the same tendency was required throughout a block. The costs associated with manipulations of inhibition were greater in young children while the costs associated with increasing memory demands were greater in adults. Effects seen only in RT in adults were seen primarily in accuracy in young children. Adults slowed down on difficult trials to preserve accuracy; but the youngest children were impulsive; their RT remained more constant but at an accuracy cost on difficult trials. Contrary to our predictions of independence between memory and inhibition, when matched for difficulty RT correlations between these were as high as 0.8, although accuracy correlations were less than half that. Spatial incompatibility effects and global and local switch costs were evident in children and adults, differing only in size. Other effects (e.g., asymmetric switch costs and the interaction of switching rules and switching response-sites) differed fundamentally over age.",
"title": ""
},
{
"docid": "8411019e166f3b193905099721c29945",
"text": "In this article we recast the Dahl, LuGre, and Maxwell-slip models as extended, generalized, or semilinear Duhem models. We classified each model as either rate independent or rate dependent. Smoothness properties of the three friction models were also considered. We then studied the hysteresis induced by friction in a single-degree-of-freedom system. The resulting system was modeled as a linear system with Duhem feedback. For each friction model, we computed the corresponding hysteresis map. Next, we developed a DC servo motor testbed and performed motion experiments. We then modeled the testbed dynamics and simulated the system using all three friction models. By comparing the simulated and experimental results, it was found that the LuGre model provides the best model of the gearbox friction characteristics. A manual tuning approach was used to determine parameters that model the friction in the DC motor.",
"title": ""
},
{
"docid": "6e9064fa15335f3f9013533b8770d297",
"text": "The last decade has witnessed a renaissance of empirical and psychological approaches to art study, especially regarding cognitive models of art processing experience. This new emphasis on modeling has often become the basis for our theoretical understanding of human interaction with art. Models also often define areas of focus and hypotheses for new empirical research, and are increasingly important for connecting psychological theory to discussions of the brain. However, models are often made by different researchers, with quite different emphases or visual styles. Inputs and psychological outcomes may be differently considered, or can be under-reported with regards to key functional components. Thus, we may lose the major theoretical improvements and ability for comparison that can be had with models. To begin addressing this, this paper presents a theoretical assessment, comparison, and new articulation of a selection of key contemporary cognitive or information-processing-based approaches detailing the mechanisms underlying the viewing of art. We review six major models in contemporary psychological aesthetics. We in turn present redesigns of these models using a unified visual form, in some cases making additions or creating new models where none had previously existed. We also frame these approaches in respect to their targeted outputs (e.g., emotion, appraisal, physiological reaction) and their strengths within a more general framework of early, intermediate, and later processing stages. This is used as a basis for general comparison and discussion of implications and future directions for modeling, and for theoretically understanding our engagement with visual art.",
"title": ""
},
{
"docid": "6025fb8936761dcf3c6751545b430ec0",
"text": "Although many sentiment lexicons in different languages exist, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first formulates the problem as a PU learning problem. It then proposes a new PU learning method suitable for the problem based on a neural network. The results are further enhanced with a new dictionary lookup technique and a novel polarity classification algorithm. Experimental results show that the proposed approach greatly outperforms baseline methods.",
"title": ""
},
{
"docid": "3456735d14694b3769621646b2422d19",
"text": "a 0.4-pm digit l CMOS technology entails many difficulties This paper describes the design of a CMOS frequency synthesizer targeting wireless local area network applications in the 5-GHz range. Based on an integer-N architecture, the synthesizer produces a 5.2-GHz output as well as the quadrature phases of a 2.6-GHz carrier. Fabricated in a 0.4-pm digital CMOS technology, the circuit provides a channel spacing of 23 MHz at 5.2 GHz while exhibiting a phase noise of -115 dBdHz at 2.6 GHz and -100 dBdHz at 5.2 GHz at 10-MHz offset. The reference sidebands are at -50 dBc at 2.6 GHz and the power dissipation from a 2.6-V supply is 47 mW. at both the architecture and the circuit level. The high center frequency of the voltage-controlled oscillator (VCO), the poor quality of inductors due to skin effect and substrate loss, the limited tuning range, the nonlinearity of the VCO inputloutput characteristic, the high speed required of the dual-modulus divider, the mismatches in the charge pump, and the implementation of the loop filter are among the issues encountered in this design. In order to relax some of the synthesizer requirements, the transceiver and the synthesizer have been designed concurrently. Fig. 1 shows the transceiver architecture [2] and",
"title": ""
},
{
"docid": "8482429f70e50b514960fca81db25ff7",
"text": "Stem cells capable of differentiating to multiple lineages may be valuable for therapy. We report the isolation of human and rodent amniotic fluid–derived stem (AFS) cells that express embryonic and adult stem cell markers. Undifferentiated AFS cells expand extensively without feeders, double in 36 h and are not tumorigenic. Lines maintained for over 250 population doublings retained long telomeres and a normal karyotype. AFS cells are broadly multipotent. Clonal human lines verified by retroviral marking were induced to differentiate into cell types representing each embryonic germ layer, including cells of adipogenic, osteogenic, myogenic, endothelial, neuronal and hepatic lineages. Examples of differentiated cells derived from human AFS cells and displaying specialized functions include neuronal lineage cells secreting the neurotransmitter L-glutamate or expressing G-protein-gated inwardly rectifying potassium channels, hepatic lineage cells producing urea, and osteogenic lineage cells forming tissue-engineered bone.",
"title": ""
},
{
"docid": "64c9153ff7e75f7a4b0ded6c82cbe3af",
"text": "As with many Indigenous groups around the world, Aboriginal communities in Canada face significant challenges with trauma and substance use. The complexity of symptoms that accompany intergenerational trauma and substance use disorders represents major challenges in the treatment of both disorders. There appears to be an underutilization of substance use and mental health services, substantial client dropout rates, and an increase in HIV infections in Aboriginal communities in Canada. The aim of this paper is to explore and evaluate current literature on how traditional Aboriginal healing methods and the Western treatment model \"Seeking Safety\" could be blended to help Aboriginal peoples heal from intergenerational trauma and substance use disorders. A literature search was conducted using the keywords: intergenerational trauma, historical trauma, Seeking Safety, substance use, Two-Eyed Seeing, Aboriginal spirituality, and Aboriginal traditional healing. Through a literature review of Indigenous knowledge, most Indigenous scholars proposed that the wellness of an Aboriginal community can only be adequately measured from within an Indigenous knowledge framework that is holistic, inclusive, and respectful of the balance between the spiritual, emotional, physical, and social realms of life. Their findings indicate that treatment interventions must honour the historical context and history of Indigenous peoples. Furthermore, there appears to be strong evidence that strengthening cultural identity, community integration, and political empowerment can enhance and improve mental health and substance use disorders in Aboriginal populations. In addition, Seeking Safety was highlighted as a well-studied model with most populations, resulting in healing. The provided recommendations seek to improve the treatment and healing of Aboriginal peoples presenting with intergenerational trauma and addiction. Other recommendations include the input of qualitative and quantitative research as well as studies encouraging Aboriginal peoples to explore treatments that could specifically enhance health in their respective communities.",
"title": ""
},
{
"docid": "37fce1406c54de9a31efe0c9e836cab5",
"text": "The field of the neurobiology of language is experiencing a paradigm shift in which the predominant Broca-Wernicke-Geschwind language model is being revised in favor of models that acknowledge that language is processed within a distributed cortical and subcortical system. While it is important to identify the brain regions that are part of this system, it is equally important to establish the anatomical connectivity supporting their functional interactions. The most promising framework moving forward is one in which language is processed via two interacting \"streams\"--a dorsal and ventral stream--anchored by long association fiber pathways, namely the superior longitudinal fasciculus/arcuate fasciculus, uncinate fasciculus, inferior longitudinal fasciculus, inferior fronto-occipital fasciculus, and two less well-established pathways, the middle longitudinal fasciculus and extreme capsule. In this article, we review the most up-to-date literature on the anatomical connectivity and function of these pathways. We also review and emphasize the importance of the often overlooked cortico-subcortical connectivity for speech via the \"motor stream\" and associated fiber systems, including a recently identified cortical association tract, the frontal aslant tract. These pathways anchor the distributed cortical and subcortical systems that implement speech and language in the human brain.",
"title": ""
},
{
"docid": "70a335baaabc266a3c6f33ab24d63e2f",
"text": "Mental illnesses are serious problems that places a burden on individuals, their families and on society in general. Although their symptoms have been known for several years, accurate and quick diagnoses remain a challenge. Inaccurate or delayed diagnoses results in increased frequency and severity of mood episodes, and reduces the benefits of treatment. In this survey paper, we review papers that leverage data from social media and design predictive models. These models utilize patterns of speech and life features of various subjects to determine the onset period of bipolar disorder. This is done by studying the patients, their behaviour, moods and sleeping patterns, and then effectively mapping these features to detect whether they are currently in a prodromal phase before a mood episode or not.",
"title": ""
},
{
"docid": "a87ba6d076c3c05578a6f6d9da22ac79",
"text": "Here we review and extend a new unitary model for the pathophysiology of involutional osteoporosis that identifies estrogen (E) as the key hormone for maintaining bone mass and E deficiency as the major cause of age-related bone loss in both sexes. Also, both E and testosterone (T) are key regulators of skeletal growth and maturation, and E, together with GH and IGF-I, initiate a 3- to 4-yr pubertal growth spurt that doubles skeletal mass. Although E is required for the attainment of maximal peak bone mass in both sexes, the additional action of T on stimulating periosteal apposition accounts for the larger size and thicker cortices of the adult male skeleton. Aging women undergo two phases of bone loss, whereas aging men undergo only one. In women, the menopause initiates an accelerated phase of predominantly cancellous bone loss that declines rapidly over 4-8 yr to become asymptotic with a subsequent slow phase that continues indefinitely. The accelerated phase results from the loss of the direct restraining effects of E on bone turnover, an action mediated by E receptors in both osteoblasts and osteoclasts. In the ensuing slow phase, the rate of cancellous bone loss is reduced, but the rate of cortical bone loss is unchanged or increased. This phase is mediated largely by secondary hyperparathyroidism that results from the loss of E actions on extraskeletal calcium metabolism. The resultant external calcium losses increase the level of dietary calcium intake that is required to maintain bone balance. Impaired osteoblast function due to E deficiency, aging, or both also contributes to the slow phase of bone loss. Although both serum bioavailable (Bio) E and Bio T decline in aging men, Bio E is the major predictor of their bone loss. Thus, both sex steroids are important for developing peak bone mass, but E deficiency is the major determinant of age-related bone loss in both sexes.",
"title": ""
}
] |
scidocsrr
|
0e36a0e729123ac5b6f243e58091a252
|
Promoting Positive Technological Development in a Kindergarten Makerspace : A Qualitative Case Study
|
[
{
"docid": "332bd650f555931a9cdfa3846a427335",
"text": "Computer technology has ushered in a new era of mass media, bringing with it great promise and great concerns about the effect on children's development and well-being. Although we tend to see these issues as being new, similar promises and concerns have accompanied each new wave of media technology throughout the past century: films in the early 1900s, radio in the 1920s, and television in the 1940s. With the introduction of each of these technologies, proponents touted the educational benefits for children, while opponents voiced fears about exposure to inappropriate commercial, sexual, and violent content. This article places current studies on children and computers in a historical context, noting the recurrent themes and patterns in media research during the twentieth century. Initial research concerning each innovation has tended to focus on issues of access and the amount of time children were spending with the new medium. As use of the technology became more prevalent, research shifted to issues related to content and its effects on children. Current research on children's use of computers is again following this pattern. But the increased level of interactivity now possible with computer games and with the communication features of the Internet has heightened both the promise of greatly enriched learning and the concerns related to increased risk of harm. As a result, research on the effects of exposure to various types of content has taken on a new sense of urgency. The authors conclude that to help inform and sustain the creation of more quality content for children, further research is needed on the effects of media on children, and new partnerships must be forged between industry, academia, and advocacy groups.",
"title": ""
},
{
"docid": "6f6636dcba42bea5f639e9006bfcd7e6",
"text": "In recent years, Singapore has increased its national emphasis on technology and engineering in early childhood education. Their newest initiative, the Playmaker Programme, has focused on teaching robotics and coding in preschool settings. Robotics offers a playful and collaborativeway for children to engagewith foundational technology and engineering concepts during their formative early childhood years. This study looks at a sample of preschool children (N = 98) from five early childhood centers in Singapore who completed a 7-week STEAM (Science, Technology, Engineering, Arts, and Mathematics) KIBO robotics curriculum in their classrooms called, ‘‘Dances from Around the World.’’ KIBO is a newly developed robotics kit that teaches both engineering and programming.KIBO’s actions are programmedusing tangible programming blocks—no screen-time required. Children’s knowledge of programming concepts were assessed upon completion of the curriculum using the Solve-Its assessment. Results indicate that children were highly successful at mastering foundational programming concepts. Additionally, teachers were successful at promoting a collaborative and creative environment, but less successful at finding ways to engage with the greater school community through robotics. This research study was part of a large country-wide initiative to increase the use of developmentally appropriate engineering tools in early childhood settings. Implications for the design of technology, curriculum, and other resources are addressed.",
"title": ""
}
] |
[
{
"docid": "d90954eaae0c9d84e261c6d0794bbf76",
"text": "The index case of the Ebola virus disease epidemic in West Africa is believed to have originated in Guinea. By June 2014, Guinea, Liberia, and Sierra Leone were in the midst of a full-blown and complex global health emergency. The devastating effects of this Ebola epidemic in West Africa put the global health response in acute focus for urgent international interventions. Accordingly, in October 2014, a World Health Organization high-level meeting endorsed the concept of a phase 2/3 clinical trial in Liberia to study Ebola vaccines. As a follow-up to the global response, in November 2014, the Government of Liberia and the US Government signed an agreement to form a research partnership to investigate Ebola and to assess intervention strategies for treating, controlling, and preventing the disease in Liberia. This agreement led to the establishment of the Joint Liberia-US Partnership for Research on Ebola Virus in Liberia as the beginning of a long-term collaborative partnership in clinical research between the two countries. In this article, we discuss the methodology and related challenges associated with the implementation of the Ebola vaccines clinical trial, based on a double-blinded randomized controlled trial, in Liberia.",
"title": ""
},
{
"docid": "2f7dd12e2bc56cddfa4b2dbd7e7a8c1a",
"text": "and the Alfred P. Sloan Foundation. Appleyard received support from the National Science Foundation under Grant No. 0438736. Jon Perr and Patrick Sullivan ably assisted with the interviews of Open Source Software leaders. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the above funding sources or any other individuals or organizations. Open Innovation and Strategy",
"title": ""
},
{
"docid": "b56c8ff2de33f0530793a534536b982e",
"text": "Recently, neural sequence-to-sequence (Seq2Seq) models have been applied to the problem of grapheme-to-phoneme (G2P) conversion. These models offer a straightforward way of modeling the conversion by jointly learning the alignment and translation of input to output tokens in an end-to-end fashion. However, until now this approach did not show improved error rates on its own compared to traditional joint-sequence based n-gram models for G2P. In this paper, we investigate how multitask learning can improve the performance of Seq2Seq G2P models. A single Seq2Seq model is trained on multiple phoneme lexicon datasets containing multiple languages and phonetic alphabets. Although multi-language learning does not show improved error rates, combining standard datasets and crawled data with different phonetic alphabets of the same language shows promising error reductions on English and German Seq2Seq G2P conversion. Finally, combining Seq2seq G2P models with standard n-grams based models yields significant improvements over using either model alone.",
"title": ""
},
{
"docid": "68aad74ce40e9f44997a078df5e54a23",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) based on the concept of traveling-wave excitation is presented. A lumped resistively loaded monofilar-spiral-slot is used to excite the rectangular DRA. The proposed DRA is theoretically and experimentally analyzed, including design concept, design guideline, parameter study, and experimental verification. It is found that by using such an excitation, a wide 3-dB axial-ratio (AR) bandwidth of 18.7% can be achieved.",
"title": ""
},
{
"docid": "e94c82f55fc40aba979435347c45c515",
"text": "INTRODUCTION\nWith the increased use of filler and fat injections for aesthetic purposes, there has been a corresponding increase in the incidence of complications. Vision loss as an uncommon but devastating vascular side effect of filler injections was the focus of this paper.\n\n\nMETHODS\nA review committee, consisting of plastic surgeons, aesthetic medical practitioners, ophthalmologists and dermatologists from Singapore, was convened by the Society of Aesthetic Medicine (Singapore) to review and recommend methods for the prevention and management of vision loss secondary to filler injections.\n\n\nRESULTS\nThe committee agreed that prevention through proper understanding of facial anatomy and good injection techniques was of foremost importance. The committee acknowledged that there is currently no standard management for these cases. Based on existing knowledge, injectors may follow a proposed course of action, which can be divided into immediate, definitive and supportive. The goals were to reduce intraocular pressure, dislodge the embolus to a more peripheral location, remove or reverse central ischaemia, preserve residual retinal function, and prevent the deterioration of vision. Dissolving a hyaluronic acid embolus remains a controversial option. It is proposed that injectors must be trained to recognise symptoms, institute immediate actions and refer patients without delay to dedicated specialists for definitive and supportive management.\n\n\nCONCLUSIONS\nSteps to prevent and manage vision loss based on current evidence and best clinical practices are outlined in this paper. Empirical referral to any emergency department or untrained doctors may lead to inordinate delays and poor outcomes for the affected eye.",
"title": ""
},
{
"docid": "ea33654bb04b06bae122fbded4b8df49",
"text": "The volume, veracity, variability, and velocity of data produced from the ever increasing network of sensors connected to Internet pose challenges for power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides the review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why the neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuromemristive circuits for edge computing.",
"title": ""
},
{
"docid": "97ed18e26a80a2ae078f78c70becfe8c",
"text": "A fully-integrated 18.5 kHz RC time-constant-based oscillator is designed in 65 nm CMOS for sleep-mode timers in wireless sensors. A comparator offset cancellation scheme achieves 4× to 25× temperature stability improvement, leading to an accuracy of ±0.18% to ±0.55% over -40 to 90 °C. Sub-threshold operation and low-swing oscillations result in ultra-low power consumption of 130 nW. The architecture also provides timing noise suppression, leading to 10× reduction in long-term Allan deviation. It is measured to have a stability of 20 ppm or better for measurement intervals over 0.5 s. The oscillator also has a fast startup-time, with the period settling in 4 cycles.",
"title": ""
},
{
"docid": "5208762a8142de095c21824b0a395b52",
"text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.",
"title": ""
},
{
"docid": "6888b5311d7246c5eb18142d2746ec68",
"text": "Forms of well-being vary in their activation as well as valence, differing in respect of energy-related arousal in addition to whether they are negative or positive. Those differences suggest the need to refine traditional assumptions that poor person-job fit causes lower well-being. More activated forms of well-being were proposed to be associated with poorer, rather than better, want-actual fit, since greater motivation raises wanted levels of job features and may thus reduce fit with actual levels. As predicted, activated well-being (illustrated by job engagement) and more quiescent well-being (here, job satisfaction) were found to be associated with poor fit in opposite directions--positively and negatively, respectively. Theories and organizational practices need to accommodate the partly contrasting implications of different forms of well-being.",
"title": ""
},
{
"docid": "1e884329b92a4a0c2d4535e1f31e4f7b",
"text": "This paper presents a new photo collection page layout that attempts to maximize page coverage without having photos overlap. Layout is based on a hierarchical page partition, which provides explicit control over the aspect ratios and relative areas of the photos. We present an efficient method for finding a partition that produces a photo arrangement suitable for the shape of the page. Rather than relying on a stochastic search we employ a deterministic procedure that mimics the natural process of adding photos to the layout one by one.",
"title": ""
},
{
"docid": "db47da56df6cb45b97dd494714b994ca",
"text": "There has been a recent surge of interest in open source software development, which involves developers at many different locations and organizations sharing code to develop and refine programs. To an economist, the behavior of individual programmers and commercial companies engaged in open source projects is initially startling. This paper makes a preliminary exploration of the economics of open source software. We highlight the extent to which labor economics, especially the literature on “career concerns,” can explain many of these projects’ features. Aspects of the future of open source development process, however, remain somewhat difficult to predict with “offthe-shelf” economic models. Josh Lerner Jean Triole Harvard Business School Institut D'Economie Indutrielle (IDEI) Morgan Hall, Room 395 Manufacture des Tabacs MF529 Boston, MA 02163, 21 Allées de Brienne and NBER 31000 Toulouse Cedex FRANCE jlerner@hbs.edu tirole@cict.fr",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "f4aa2f0a40291ce9a1f0e5892a690be1",
"text": "We studied the similarities and differences between Brazilian Spiritistic mediums and North American dissociative identity disorder (DID) patients. Twenty-four mediums selected among different Spiritistic organizations in São Paulo, Brazil, were interviewed using the Dissociative Disorder Interview Schedule, and their responses were compared with those of DID patients described in the literature. The results from Spiritistic mediums were similar to published data on DID patients only with respect to female prevalence and high frequency of Schneiderian first-rank symptoms. As compared with individuals with DID, the mediums differed in having better social adjustment, lower prevalence of mental disorders, lower use of mental health services, no use of antipsychotics, and lower prevalence of histories of physical or sexual childhood abuse, sleepwalking, secondary features of DID, and symptoms of borderline personality. Thus, mediumship differed from DID in having better mental health and social adjustment, and a different clinical profile.",
"title": ""
},
{
"docid": "e5a2f6b8d6513c167b765672351ce2c8",
"text": "We present MAGEAD, a morphological analyzer and generator for the Arabic language family. Our work is novel in that it explicitly addresses the need for processing the morphology of the dialects. MAGEAD provides an analysis to a root+pattern representation, it has separate phonological and orthographic representations, and it allows for combining morphemes from different dialects.",
"title": ""
},
{
"docid": "b55d5967005d3b59063ffc4fd7eeb59a",
"text": "In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.",
"title": ""
},
{
"docid": "0e5eee72224a306f7f68fe1e9ea730e6",
"text": "The implementation of a hybrid fuel cell/battery system is proposed to improve the slow transient response of a fuel cell stack. This system can be used for an autonomous device with quick load variations. A suitable three-port, galvanic isolated, bidirectional power converter is proposed to control the power flow. An energy management method for the proposed three-port circuit is analyzed and implemented. Measurements from a 500-W laboratory prototype are presented to demonstrate the validity of the approach",
"title": ""
},
{
"docid": "08c0561471f8334e9b2a3aa70d12a9a4",
"text": "Increasing interest in JSON data has created a need for its efficient processing. Although JSON is a simple data exchange format, its querying is not always effective, especially in the case of large repositories of data. This work aims to integrate the JSONiq extension to the XQuery language specification into an existing query processor (Apache VXQuery) to enable it to query JSON data in parallel. VXQuery is built on top of Hyracks (a framework that generates parallel jobs) and Algebricks (a language-agnostic query algebra toolbox) and can process data on the fly, in contrast to other well-known systems which need to load data first. Thus, the extra cost of data loading is eliminated. In this paper, we implement three categories of rewrite rules which exploit the features of the above platforms to efficiently handle path expressions along with introducing intra-query parallelism. We evaluate our implementation using a large (803GB) dataset of sensor readings. Our results show that the proposed rewrite rules lead to efficient and scalable parallel processing of JSON data.",
"title": ""
},
{
"docid": "a4afaa67327ee6ddb8566e8e0da96e9f",
"text": "In this paper, a new face recognition technique is introduced based on the gray-level co-occurrence matrix (GLCM). GLCM represents the distributions of the intensities and the information about relative positions of neighboring pixels of an image. We proposed two methods to extract feature vectors using GLCM for face classification. The first method extracts the well-known Haralick features from the GLCM, and the second method directly uses GLCM by converting the matrix into a vector that can be used in the classification process. The results demonstrate that the second method, which uses GLCM directly, is superior to the first method that uses the feature vector containing the statistical Haralick features in both nearest neighbor and neural networks classifiers. The proposed GLCM based face recognition system not only outperforms well-known techniques such as principal component analysis and linear discriminant analysis, but also has comparable performance with local binary patterns and Gabor wavelets.",
"title": ""
},
{
"docid": "0476f978fe95ec6b1b96288d334f155c",
"text": "Outlier detection is an important task in data mining, with applications ranging from intrusion detection to human gait analysis. With the growing need to analyze high speed data streams, the task of outlier detection becomes even more challenging as traditional outlier detection techniques can no longer assume that all the data can be stored for processing. While the well-known Local Outlier Factor (LOF) algorithm has an incremental version, it assumes unbounded memory to keep all previous data points. In this paper, we propose a memory efficient incremental local outlier (MiLOF) detection algorithm for data streams, and a more flexible version (MiLOF_F), both have an accuracy close to Incremental LOF but within a fixed memory bound. Our experimental results show that both proposed approaches have better memory and time complexity than Incremental LOF while having comparable accuracy. In addition, we show that MiLOF_F is robust to changes in the number of data points, the number of underlying clusters and the number of dimensions in the data stream. These results show that MiLOF/MiLOF_F are well suited to application environments with limited memory (e.g., wireless sensor networks), and can be applied to high volume data streams.",
"title": ""
},
{
"docid": "bb5c4d59f598427ea1e2946ae74a7cc8",
"text": "In a nutshell: This course comprehensively covers important user experience (UX) evaluation methods as well as opportunities and challenges of UX evaluation in the area of entertainment and games. The course is an ideal forum for attendees to gain insight into state-of-the art user experience evaluation methods going way-beyond standard usability and user experience evaluation approaches in the area of human-computer interaction. It surveys and assesses the efforts of user experience evaluation of the gaming and human computer interaction communities during the last 15 years.",
"title": ""
}
] |
scidocsrr
|
443a1cc4b0621c7fda63dc8820264f9b
|
What's in a Like? Attitudes and behaviors around receiving Likes on Facebook
|
[
{
"docid": "821cefef9933d6a02ec4b9098f157062",
"text": "Scientists debate whether people grow closer to their friends through social networking sites like Facebook, whether those sites displace more meaningful interaction, or whether they simply reflect existing ties. Combining server log analysis and longitudinal surveys of 3,649 Facebook users reporting on relationships with 26,134 friends, we find that communication on the site is associated with changes in reported relationship closeness, over and above effects attributable to their face-to-face, phone, and email contact. Tie strength increases with both one-on-one communication, such as posts, comments, and messages, and through reading friends' broadcasted content, such as status updates and photos. The effect is greater for composed pieces, such as comments, posts, and messages than for 'one-click' actions such as 'likes.' Facebook has a greater impact on non-family relationships and ties who do not frequently communicate via other channels.",
"title": ""
},
{
"docid": "bb81541f9c87b51858ee76897e2a964e",
"text": "Five studies tested hypotheses derived from the sociometer model of self-esteem according to which the self-esteem system monitors others' reactions and alerts the individual to the possibility of social exclusion. Study 1 showed that the effects of events on participants' state self-esteem paralleled their assumptions about whether such events would lead others to accept or reject them. In Study 2, participants' ratings of how included they felt in a real social situation correlated highly with their self-esteem feelings. In Studies 3 and 4, social exclusion caused decreases in self-esteem when respondents were excluded from a group for personal reasons, but not when exclusion was random, but this effect was not mediated by self-presentation. Study 5 showed that trait self-esteem correlated highly with the degree to which respondents generally felt included versus excluded by other people. Overall, results provided converging evidence for the sociometer model.",
"title": ""
},
{
"docid": "d34d8dd7ba59741bb5e28bba3e870ac4",
"text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.",
"title": ""
}
] |
[
{
"docid": "66d45a44eaa7596a35f9afc4424362ec",
"text": "Agile methodologies are gaining popularity quickly, receiving increasing support from the software development community. Current requirements engineering practices have addressed traceability approaches for well defined phase-driven development models. Echo is a tool-based approach that provides for the implicit recording and management of relationships between conversations about requirements, specifications, and subsequent design decisions. By providing a means to capture requirements in an informal manner and later restructure the information to suit formal requirements specifications, Echo aims to solve the problems of applying traditional requirements engineering practices to agile development methods making available the demonstrated benefits of requirements traceability – a key enabler for large-scale change management.",
"title": ""
},
{
"docid": "5fd10b2277918255133f2e37a55e1103",
"text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.",
"title": ""
},
{
"docid": "a0e14f5c359de4aa8e7640cf4ff5effa",
"text": "In speech translation, we are faced with the problem of how to couple the speech recognition process and the translation process. Starting from the Bayes decision rule for speech translation, we analyze how the interaction between the recognition process and the translation process can be modelled. In the light of this decision rule, we discuss the already existing approaches to speech translation. None of the existing approaches seems to have addressed this direct interaction. We suggest two new methods, the local averaging approximation and the monotone alignments.",
"title": ""
},
{
"docid": "4933f3f3007dab687fc852e9c2b1ab0a",
"text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.",
"title": ""
},
{
"docid": "9572809d8416cc7b78683e3686e83740",
"text": "Lower-limb amputees have identified comfort and mobility as the two most important characteristics of a prosthesis. While these in turn depend on a multitude of factors, they are strongly influenced by the biomechanical performance of the prosthesis and the loading it imparts to the residual limb. Recent years have seen improvements in several prosthetic components that are designed to improve patient comfort and mobility. In this paper, we discuss two of these: VSAP and prosthetic foot-ankle systems; specifically, their mechanical properties and impact on amputee gait are presented.",
"title": ""
},
{
"docid": "e4a63070a6cc367454182dbc8c564188",
"text": "In this paper, we summarize hash functions and cellular automata based architectures, and discuss some pros and cons. We introduce the background knowledge of hash functions. The properties and theory of cellular automata are also presented with typical works. We show that cellular automata based schemes are very useful to design hash functions with a low hardware complexity because of its logical operation attributes and parallel properties.",
"title": ""
},
{
"docid": "4f3fe8ea0487690b4a8f61b488e96d53",
"text": "Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.",
"title": ""
},
{
"docid": "6ef52ad99498d944e9479252d22be9c8",
"text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.",
"title": ""
},
{
"docid": "3cb0e324a5eb310c386c6801b0bcf2d9",
"text": "BACKGROUND\nThe use of positive psychological interventions may be considered as a complementary strategy in mental health promotion and treatment. The present article constitutes a meta-analytical study of the effectiveness of positive psychology interventions for the general public and for individuals with specific psychosocial problems.\n\n\nMETHODS\nWe conducted a systematic literature search using PubMed, PsychInfo, the Cochrane register, and manual searches. Forty articles, describing 39 studies, totaling 6,139 participants, met the criteria for inclusion. The outcome measures used were subjective well-being, psychological well-being and depression. Positive psychology interventions included self-help interventions, group training and individual therapy.\n\n\nRESULTS\nThe standardized mean difference was 0.34 for subjective well-being, 0.20 for psychological well-being and 0.23 for depression indicating small effects for positive psychology interventions. At follow-up from three to six months, effect sizes are small, but still significant for subjective well-being and psychological well-being, indicating that effects are fairly sustainable. Heterogeneity was rather high, due to the wide diversity of the studies included. Several variables moderated the impact on depression: Interventions were more effective if they were of longer duration, if recruitment was conducted via referral or hospital, if interventions were delivered to people with certain psychosocial problems and on an individual basis, and if the study design was of low quality. Moreover, indications for publication bias were found, and the quality of the studies varied considerably.\n\n\nCONCLUSIONS\nThe results of this meta-analysis show that positive psychology interventions can be effective in the enhancement of subjective well-being and psychological well-being, as well as in helping to reduce depressive symptoms. Additional high-quality peer-reviewed studies in diverse (clinical) populations are needed to strengthen the evidence-base for positive psychology interventions.",
"title": ""
},
{
"docid": "c66933af0fef1bcd1c3df364e4e8bb77",
"text": "This study has its roots in a clinical application project, focusing on the development of a teaching-learning model enabling participants to understand compassion. During that project four clinical nursing teachers met for a total of 12 hours of experiential and reflective work. This study aimed at exploring participants' understanding of self-compassion as a source to compassionate care. It was carried out as a phenomenological and hermeneutic interpretation of participants' written and oral reflections on the topic. Data were interpreted in the light of Watson's Theory of Human Caring. Five themes were identified: Being there, with self and others; respect for human vulnerability; being nonjudgmental; giving voice to things needed to be said and heard; and being able to accept the gift of compassion from others. A main metaphorical theme, 'the Butterfly effect of Caring', was identified, addressing interdependency and the ethics of the face and hand when caring for Other - the ethical stance where the Other's vulnerable face elicits a call for compassionate actions. The findings reveal that the development of a compassionate self and the ability to be sensitive, nonjudgmental and respectful towards oneself contributes to a compassionate approach towards others. It is concluded that compassionate care is not only something the caregiver does, nor is compassion reduced to a way of being with another person or a feeling. Rather, it is a way of becoming and belonging together with another person where both are mutually engaged and where the caregiver compassionately is able to acknowledge both self and Other's vulnerability and dignity.",
"title": ""
},
{
"docid": "5acf0ddd47893967e21386d99316a2a9",
"text": "The Lucy-Richardson algorithm is a very well-known method for non-blind image deconvolution. It can also deal with space-variant problems, but it is seldom used in these cases because of its iterative nature and complexity of realization. In this paper we show that exploiting the sparse structure of the deconvolution matrix, and utilizing a specifically devised architecture, the restoration can be performed almost in real-time on VGA-size images.",
"title": ""
},
{
"docid": "77c2843058856b8d7a582d3b0349b856",
"text": "In this paper, an S-band dual circular polarized (CP) spherical conformal phased array antenna (SPAA) is designed. It has the ability to scan a beam within the hemisphere coverage. There are 23 elements uniformly arranged on the hemispherical dome. The design process of the SPAA is presented in detail. Three different kinds of antenna elements are compared. The gain of the SPAA is more than 13 dBi and the gain flatness is less than 1 dB within the scanning range. The measured result is consistent well with the simulated one.",
"title": ""
},
{
"docid": "ba65c99adc34e05cf0cd1b5618a21826",
"text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.",
"title": ""
},
{
"docid": "4cc71db87682a96ddee09e49a861142f",
"text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. This will significantly aid organizations to facilitate the implementation of telehealth.",
"title": ""
},
{
"docid": "402f790d1b2bf76d6129cd08d995fade",
"text": "After briefly summarizing the mechanical design of the two joint prototypes for the new DLR variable compliance arm, the paper exemplifies the dynamic modelling of one of the prototypes and proposes a generic variable stiffness joint model for nonlinear control design. Based on this model, the design of a simple, gain scheduled state feedback controller for active vibration damping of the mechanically very weakly damped joint is presented. Moreover, the computation of the motor reference values out of the desired stiffness and position is addressed. Finally, simulation and experimental results validate the proposed methods.",
"title": ""
},
{
"docid": "480c8d16f3e58742f0164f8c10a206dd",
"text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.",
"title": ""
},
{
"docid": "e648b97ead434fa9daadaec7fa850fac",
"text": "Internet of Things (IoT) is now in its initial stage but very soon, it is going to influence almost every day-to-day items we use. The more it will be included in our lifestyle, more will be the threat of it being misused. There is an urgent need to make IoT devices secure from getting cracked. Very soon IoT is going to expand the area for the cyber-attacks on homes and businesses by transforming objects that were used to be offline into online systems. Existing security technologies are just not enough to deal with this problem. Blockchain has emerged as the possible solution for creating more secure IoT systems in the time to come. In this paper, first an overview of the blockchain technology and its implementation has been explained; then we have discussed the infrastructure of IoT which is based on Blockchain network and at last a model has been provided for the security of internet of things using blockchain.",
"title": ""
},
{
"docid": "621840a3c2637841b9da1e74c99e98f1",
"text": "Topic modeling is a type of statistical model for discovering the latent “topics” that occur in a collection of documents through machine learning. Currently, latent Dirichlet allocation (LDA) is a popular and common modeling approach. In this paper, we investigate methods, including LDA and its extensions, for separating a set of scientific publications into several clusters. To evaluate the results, we generate a collection of documents that contain academic papers from several different fields and see whether papers in the same field will be clustered together. We explore potential scientometric applications of such text analysis capabilities.",
"title": ""
},
{
"docid": "dd32d5b0b53c855081c23595052f10d8",
"text": "Yaumatei Dermatology Clinic, 12/F Yau Ma Tei Specialist Clinic Extension, 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 31-year-old Chinese male suffered from recalcitrant hidradenitis suppurativa for seven years causing disfiguring scars over the face and intertriginous areas, particularly the axillae and groins. Multiple medical treatments and surgical operation were tried but in vain. Infliximab infusion led to significant improvement. To our best knowledge, this is the first Chinese patient with hidradenitis suppurativa treated with infliximab in Hong Kong.",
"title": ""
},
{
"docid": "d597b9229a3f9a9c680d25180a4b6308",
"text": "Mental health problems are highly prevalent and increasing in frequency and severity among the college student population. The upsurge in mobile and wearable wireless technologies capable of intense, longitudinal tracking of individuals, provide enormously valuable opportunities in mental health research to examine temporal patterns and dynamic interactions of key variables. In this paper, we present an integrative framework for social anxiety and depression (SAD) monitoring, two of the most common disorders in the college student population. We have developed a smartphone application and the supporting infrastructure to collect both passive sensor data and active event-driven data. This supports intense, longitudinal, dynamic tracking of anxious and depressed college students to evaluate how their emotions and social behaviors change in the college campus environment. The data will provide critical information about how student mental health problems are maintained and, ultimately, how student patterns on campus shift following treatment.",
"title": ""
}
] |
scidocsrr
|
c3be4fd1bc47817400d941fdf0361fd8
|
Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning
|
[
{
"docid": "217e76cc7d8a7d680b40d5c658460513",
"text": "The reinforcement learning paradigm is a popular way to addr ess problems that have only limited environmental feedback, rather than correctly labeled exa mples, as is common in other machine learning contexts. While significant progress has been made t o improve learning in a single task, the idea oftransfer learninghas only recently been applied to reinforcement learning ta sks. The core idea of transfer is that experience gained in learning t o perform one task can help improve learning performance in a related, but different, task. In t his article we present a framework that classifies transfer learning methods in terms of their capab ilities and goals, and then use it to survey the existing literature, as well as to suggest future direct ions for transfer learning work.",
"title": ""
},
{
"docid": "da40bb860df97c61b11271db021a55b4",
"text": "We present Variable Influence Structure Analysis, or VISA, an algorithm that performs hierarchical decomposition of factored Markov decision processes. VISA uses a dynamic Bayesian network model of actions, and constructs a causal graph that captures relationships between state variables. In tasks with sparse causal graphs VISA exploits structure by introducing activities that cause the values of state variables to change. The result is a hierarchy of activities that together represent a solution to the original task. VISA performs state abstraction for each activity by ignoring irrelevant state variables and lower-level activities. In addition, we describe an algorithm for constructing compact models of the activities introduced. State abstraction and compact activity models enable VISA to apply efficient algorithms to solve the stand-alone subtask associated with each activity. Experimental results show that the decomposition introduced by VISA can significantly accelerate construction of an optimal, or near-optimal, policy.",
"title": ""
}
] |
[
{
"docid": "7d840ba451a7783aaa1abb040264e411",
"text": "The latest developments in mobile computing technology have changed user preferences for computing. However, in spite of all the advancements in the recent years, Smart Mobile Devices (SMDs) are still low potential computing devices which are limited in memory capacity, CPU speed and battery power lifetime. Therefore, Mobile Cloud Computing (MCC) employs computational offloading for enabling computationally intensive mobile applications on SMDs. However, state-of-the-art computational offloading frameworks lack of considering the additional overhead of components migration at runtime. Therefore resources intensive and energy consuming distributed application execution platform is established. This paper proposes a novel distributed Energy Efficient Computational Offloading Framework (EECOF) for the processing of intensive mobile applications in MCC. The framework focuses on leveraging application processing services of cloud datacenters with minimal instances of computationally intensive component migration at runtime. As a result, the size of data transmission and energy consumption cost is reduced in computational offloading for MCC. We evaluate the proposed framework by benchmarking prototype application in the real MCC environment. Analysis of the results show that by employing EECOF the size of data transmission over the wireless network medium is reduced by 84 % and energy consumption cost is reduced by 69.9 % in offloading different components of the prototype application. Hence, EECOF provides an energy efficient application layer solution for computational offloading in MCC.",
"title": ""
},
{
"docid": "0ecb00d99dc497a0e902cda198219dff",
"text": "Security vulnerabilities typically arise from bugs in input validation and in the application logic. Fuzz-testing is a popular security evaluation technique in which hostile inputs are crafted and passed to the target software in order to reveal bugs. However, in the case of SCADA systems, the use of proprietary protocols makes it difficult to apply existing fuzz-testing techniques as they work best when the protocol semantics are known, targets can be instrumented and large network traces are available. This paper describes a fuzz-testing solution involving LZFuzz, an inline tool that provides a domain expert with the ability to effectively fuzz SCADA devices.",
"title": ""
},
{
"docid": "b5f9d2f5c401be98b5e9546c0abaef22",
"text": "This paper describes a new approach for training generative adversarial networks (GAN) to understand the detailed 3D shape of objects. While GANs have been used in this domain previously, they are notoriously hard to train, especially for the complex joint data distribution over 3D objects of many categories and orientations. Our method extends previous work by employing the Wasserstein distance normalized with gradient penalization as a training objective. This enables improved generation from the joint object shape distribution. Our system can also reconstruct 3D shape from 2D images and perform shape completion from occluded 2.5D range scans. We achieve notable quantitative improvements in comparison to existing baselines.",
"title": ""
},
{
"docid": "017f0d1c89531bc3664a9504b0b70d30",
"text": "In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.",
"title": ""
},
{
"docid": "f16f22302df99de531a2406ef9e024db",
"text": "We propose a new hydrogenated amorphous silicon thin-film transistor (a-Si:H TFT) pixel circuit for an active matrix organic light-emitting diode (AMOLED) employing a voltage programming. The proposed a-Si:H TFT pixel circuit, which consists of five switching TFTs, one driving TFT, and one capacitor, successfully minimizes a decrease of OLED current caused by threshold voltage degradation of a-Si:H TFT and OLED. Our experimental results, based on the bias-temperature stress, exhibit that the output current for OLED is decreased by 7% in the proposed pixel, while it is decreased by 28% in the conventional 2-TFT pixel.",
"title": ""
},
{
"docid": "4e2c466fac826f5e32a51f09355d7585",
"text": "Congested networks involve complex traffic dynamics that can be accurately captured with detailed simulation models. However, when performing optimization of such networks the use of simulators is limited due to their stochastic nature and their relatively high evaluation cost. This has lead to the use of general-purpose analytical metamodels, that are cheaper to evaluate and easier to integrate within a classical optimization framework, but do not capture the specificities of the underlying congested conditions. In this paper, we argue that to perform efficient optimization for congested networks it is important to develop analytical surrogates specifically tailored to the context at hand so that they capture the key components of congestion (e.g. its sources, its propagation, its impact) while achieving a good tradeoff between realism and tractability. To demonstrate this, we present a surrogate that provides a detailed description of congestion by capturing the main interactions between the different network components while preserving analytical tractable. In particular, we consider the optimization of vehicle traffic in an urban road network. The proposed surrogate model is an approximate queueing network model that resorts to finite capacity queueing theory to account for congested conditions. Existing analytic queueing models for urban networks are formulated for a single intersection, and thus do not take into account the interactions between queues. The proposed model considers a set of intersections and analytically captures these interactions. We show that this level of detail is sufficient for optimization in the context of signal control for peak hour traffic. Although there is a great variety of signal control methodologies in the literature, there is still a need for solutions that are appropriate and efficient under saturated conditions, where the performance of signal control strategies and the formation and propagation of queues are strongly related. We formulate a fixed-time signal control problem where the network model is included as a set of constraints. We apply this methodology to a subnetwork of the Lausanne city center and use a microscopic traffic simulator to validate its performance. We also compare it with several other methods. As congestion increases, the new method leads to improved average performance measures. The results highlight the importance of taking the interaction between consecutive roads into account when deriving signal plans for congested urban road networks.",
"title": ""
},
{
"docid": "c87507ba36c0281351f27bdcd76c39a5",
"text": "Ear projection is an important goal to be achieved after stage two (ear elevation) in cases of microtia. This is a retrospective study conducted on patients with microtia who underwent staged reconstruction for the same. This study has been carried out over a period of 10 years with 211 patients. Dental impression compound was used as a splint after ear elevation and split skin grafting to maintain the projection of the ear. Projection of the ear was measured both pre- and post-procedure and at every follow-up using goniometer and photographic documentation was simultaneously done. Statistical analysis was performed using t-test. Patients were reviewed every month and splint was continued until 6 months post-surgery. The splint was very effective in maintaining the ear projection of more than 20(°) even after prolonged follow-up of upto 2 years. There were no complications associated with the splint application or prolonged use.",
"title": ""
},
{
"docid": "8dce23b10663fa65e47084b57103ef34",
"text": "This article presents a sobering view of the discipline of cognitive neuropsychology as practiced over the last three or four decades. Our judgment is that, although the study of abnormal cognition resulting from brain injury or disease in previously normal adults has produced a catalogue of fascinating and highly selective deficits, it has yielded relatively little advance in understanding how the brain accomplishes its cognitive business. We question the wisdom of the following three \"choices\" in mainstream cognitive neuropsychology: (a) single-case methodology, (b) dissociation between functions as the most important source of evidence, and (c) a central goal of diagramming the functional architecture of cognition rather than specifying how its components work. These choices may all stem from an excessive commitment to strict and fine-grained modularity. Although different brain regions are undoubtedly specialized for different functions, we argue that parallel and interactive processing is a better assumption about cognitive processing. The essential goal of specifying representations and processes can, we claim, be significantly assisted by computational modeling which, by its very nature, requires such specification.",
"title": ""
},
{
"docid": "0084faef0e08c4025ccb3f8fd50892f1",
"text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.",
"title": ""
},
{
"docid": "1a38695797b921e35e0987eeed11c95d",
"text": "We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state representations. Building on prior work by Jaeger and by Rivest and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the first specific formulation of the predictive idea that includes both stochasticity and actions (controls). We show that any system has a linear predictive state representation with number of predictions no greater than the number of states in its minimal POMDP model. In predicting or controlling a sequence of observations, the concepts of state and state estimation inevitably arise. There have been two dominant approaches. The generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations and estimates its state and state dynamics. The history-based approach, typified by k-order Markov methods, uses simple functions of past observations as state, that is, as the immediate basis for prediction and control. (The data flow in these two approaches are diagrammed in Figure 1.) Of the two, the generative-model approach is more general. The model's internal state gives it temporally unlimited memorythe ability to remember an event that happened arbitrarily long ago--whereas a history-based approach can only remember as far back as its history extends. The bane of generative-model approaches is that they are often strongly dependent on a good model of the system's dynamics. Most uses of POMDPs, for example, assume a perfect dynamics model and attempt only to estimate state. There are algorithms for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997). observations (and actions) (a) state 1-----1-----1..rep'n observations¢E (and actions) / state t/' rep'n 1-step --+ . delays",
"title": ""
},
{
"docid": "f5c69697719fe04f29bbdcb2efa9d160",
"text": "We propose that late modern policing practices, that rely on neighbourhood intelligence, the monitoring of tensions, surveillance and policing by accommo-dation, need to be augmented in light of emerging ‘cyber-neighbourhoods’, namely social media networks. The 2011 riots in England were the first to evidence the widespread use of social media platforms to organise and respond to disorder. The police were ill-equipped to make use of the intelligence emerging from these non-terrestrial networks and were found to be at a disadvantage to the more tech-savvy rioters and the general public. In this paper, we outline the development of the ‘tension engine’ component of the Cardiff Online Social Media ObServatroy (COSMOS). This engine affords users with the ability to monitor social media data streams for signs of high tension which can be analysed in order to identify deviations from the ‘norm’ (levels of cohesion/low tension). This analysis can be overlaid onto a palimpsest of curated data, such as official statistics about neighbourhood crime, deprivation and demography, to provide a multidimensional picture of the ‘terrestrial’ and ‘cyber’ streets. As a consequence, this ‘neighbourhood informatics’ enables a means of questioning official constructions of civil unrest through reference to the user-generated accounts of social media and their relationship to other, curated, social and economic data.",
"title": ""
},
{
"docid": "e9768df1b2a679e7d9e81588d4c2af02",
"text": "Over the last few decades, the electric utilities have seen a very significant increase in the application of metal oxide surge arresters on transmission lines in an effort to reduce lightning initiated flashovers, maintain high power quality and to avoid damages and disturbances especially in areas with high soil resistivity and lightning ground flash density. For economical insulation coordination in transmission and substation equipment, it is necessary to predict accurately the lightning surge overvoltages that occur on an electric power system.",
"title": ""
},
{
"docid": "b776bf3acb830552eb1ecf353b08edee",
"text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.",
"title": ""
},
{
"docid": "16564b3e5c3c9ececc2e3485d9f029ed",
"text": "Crowdsensing applications utilize the pervasive smartphone users to collect large-scale sensing data efficiently. The quality of sensing data depends on the participation of highly skilled users. To motivate these skilled users to participate, they should receive enough rewards for compensating their resource consumption. Available incentive mechanisms mainly consider the truthfulness of the mechanism, but mostly ignore the issues of security and privacy caused by a “trustful” center. In this paper, we propose a privacy-preserving blockchain incentive mechanism in crowdsensing applications, in which a cryptocurrency built on blockchains is used as a secure incentive way. High quality contributors will get their payments that are recorded in transaction blocks. The miners will verify the transaction according to the sensing data assessment criteria published by the server. As the transaction information can disclose users’ privacy, a node cooperation verification approach is proposed to achieve $k$ -anonymity privacy protection. Through theoretical analysis and simulation experiments, we show the feasibility and security of our incentive mechanism.",
"title": ""
},
{
"docid": "2b65d98894349b7be1aa5a57dad01517",
"text": "Despite the growing importance of exploratory search, information retrieval (IR) systems tend to focus on lookup search. Lookup searches are well served by optimising the precision and recall of search results, however, for exploratory search this may be counterproductive if users are unable to formulate an appropriate search query. We present a system called PULP that supports exploratory search for scientific literature, though the system can be easily adapted to other types of literature. PULP uses reinforcement learning (RL) to avert the user from context traps resulting from poorly chosen search queries, trading off between exploration (presenting the user with diverse topics) and exploitation (moving towards more specific topics). Where other RL-based systems suffer from the \"cold start\" problem, requiring sufficient time to adjust to a user's information needs, PULP initially presents the user with an overview of the dataset using temporal topic models. Topic models are displayed in an interactive alluvial diagram, where topics are shown as ribbons that change thickness with a given topics relative prevalence over time. Interactive, exploratory search sessions can be initiated by selecting topics as a starting point.",
"title": ""
},
{
"docid": "f7146deda98191a6f3cc824983ce9be4",
"text": "SMS, being an almost instantaneous communication medium that connects people, is now a phenomenon that has grown and spread around the globe at an amazing speed. Given the current trend of SMS usage and its potential growth, this paper will provide an insight of the extent to which how service quality and the value perceived by the SMS users have an impact on their extent of the SMS usage in the post SMS adoption phase. Specifically, this article will examine how service quality of the service providers and perceived value affect customer satisfaction and how customer satisfaction will affect their behavioural intention to continue to use SMS which in turn affects the extent of SMS usage in the local context. Using partial-least-squares, an analysis was conducted based on the 150 surveys collected to test for the proposed relationships. The results showed that the tangibles, empathy and assurance dimensions of service quality are antecedents of customer satisfaction and a positive relationship exists between customer satisfaction and customers’ behavioural intentions to continue to use SMS. Additionally, the positive relationship between customers’ behavioural intentions to continue to use SMS and the extent of SMS usage is also significant. These results were similar to the results shown by Cronin and Taylor (1992) studies. The perceived value/customer satisfaction relationship investigated in this research was in line with Fornell et al.(1996) and Cronin et al.(2000) where perceived value was one of the determinants of customer satisfaction. Specially, the results revealed that perceived value, together with tangibles, empathy and assurance aspects of the service quality, played an important role in determining customer satisfaction for SMS. Implications of the above results for research and practice are discussed.",
"title": ""
},
{
"docid": "f78a01a4337e2f2e7c3a6341d273f3e8",
"text": "We consider the problem of assigning stockkeeping units to distribution centers (DCs) belonging to different DC types of a retail network, e.g., central, regional, and local DCs. The problem is motivated by the real situation of a retail company and solved by an MIP solution approach. The MIP model reflects the interdependencies between inbound transportation, outbound transportation and instore logistics as well as capital tied up in inventories and differences in picking costs between the warehouses. A novel solution approach is developed and applied to a real-life case of a leading European grocery retail chain. The application of the new approach results in cost savings of 6% of total operational costs compared to the present assignment. These savings amount to several million euros per year. In-depth analyses of the results and sensitivity analyses provide insights into the solution structure and the major related issues.",
"title": ""
},
{
"docid": "2b9e29da5ee9abd3f0f7e18cea54ae4e",
"text": "This paper addresses video summarization, or the problem of distilling a raw video into a shorter form while still capturing the original story. We show that visual representations supervised by freeform language make a good fit for this application by extending a recent submodular summarization approach [9] with representativeness and interestingness objectives computed on features from a joint vision-language embedding space. We perform an evaluation on two diverse datasets, UT Egocentric [18] and TV Episodes [45], and show that our new objectives give improved summarization ability compared to standard visual features alone. Our experiments also show that the vision-language embedding need not be trained on domainspecific data, but can be learned from standard still image vision-language datasets and transferred to video. A further benefit of our model is the ability to guide a summary using freeform text input at test time, allowing user customization.",
"title": ""
}
] |
scidocsrr
|
b57e7ef726e3108e103204c40e636870
|
Design-oriented compact models for CNTFETs
|
[
{
"docid": "9c9cc117f7f8e09e6e6dbb9d62924a88",
"text": "We briefly review the electronic properties of carbon nanotubes (CNTs) and present results on the fabrication and characteristics of carbon nanotube field-effect transistors (CNTFETs) and simple integrated circuits. A novel approach allowing the catalyst-free synthesis of oriented CNTs is also presented.",
"title": ""
}
] |
[
{
"docid": "bf17acf28f242a0fd76117c9ef245f4d",
"text": "We present an algorithm to compute the silhouette set of a point cloud. Previous methods extract point set silhouettes by thresholding point normals, which can lead to simultaneous overand under-detection of silhouettes. We argue that additional information such as surface curvature is necessary to resolve these issues. To this end, we develop a local reconstruction scheme using Gabriel and intrinsic Delaunay criteria and define point set silhouettes based on the notion of a silhouette generating set. The mesh umbrellas, or local reconstructions of one-ring triangles surrounding each point sample, generated by our method enable accurate silhouette identification near sharp features and close-by surface sheets, and provide the information necessary to detect other characteristic curves such as creases and boundaries. We show that these curves collectively provide a sparse and intuitive visualization of point cloud data.",
"title": ""
},
{
"docid": "53c7f595760861008b09da459571de04",
"text": "The frequency range 25-45 GHz contains point-to-point bands at 28, 32, 38 and 42 GHz, potential implementation of 5G at 28 and 39 GHz and various military applications. Traditionally, short gate length GaAs pHEMT technology has been used to develop products for these frequencies. Alternatively, SiGe HBT technology may offer lower cost solutions for systems that can tolerate lower performance. This paper presents the design and measurements of a broadband SiGe receiver with a noise figure of 6 ± 1 dB over the entire 25 to 45 GHz bandwidth. The measured gain exceeds 20 dB from 26 to 43 GHz and, with gain control, is sufficiently linear to support 1024 QAM in the 35 to 45 GHz range. To the authors' knowledge, this the broadest bandwidth achieved to date with a SiGe receiver at these Ka or low millimetre-wave frequencies.",
"title": ""
},
{
"docid": "7366feb073496a728e1ef14e49a77001",
"text": "In this paper, we present a tool enhancement that allows an effective transition from the system level development phase to the software level development phase of a tool-supported safety engineering workflow aligned with the automotive functional safety standard ISO 26262. The tool enhancement has capabilities for model generation and code generation. Whereas the generation of Simulink models supports the development of application software, the configuration and generation of safety drivers supports the development of the basic software required for initialization, runtime fault detection and error handling. We describe the safety engineering workflow and its supporting tool chain including the tool enhancement. Moreover we demonstrate that the enhancement supports the transition from the system level development phase to the software level development phase using the case study of a hybrid electric vehicle development.",
"title": ""
},
{
"docid": "786a31d5c189c8376a08be6050ddbd9c",
"text": "In this article, we present a meta-analysis of research examining visibility of disability. In interrogating the issue of visibility and invisibility in the design of assistive technologies, we open a discussion about how perceptions surrounding disability can be probed through an examination of visibility and how these tensions do, and perhaps should, influence assistive technology design and research.",
"title": ""
},
{
"docid": "dd4edd271de8483fc3ce25f16763ffd1",
"text": "Computer vision is a rapidly evolving discipline. It includes methods for acquiring, processing, and understanding still images and video to model, replicate, and sometimes, exceed human vision and perform useful tasks.\n Computer vision will be commonly used for a broad range of services in upcoming devices, and implemented in everything from movies, smartphones, cameras, drones and more. Demand for CV is driving the evolution of image sensors, mobile processors, operating systems, application software, and device form factors in order to meet the needs of upcoming applications and services that benefit from computer vision. The resulting impetus means rapid advancements in:\n • visual computing performance\n • object recognition effectiveness\n • speed and responsiveness\n • power efficiency\n • video image quality improvement\n • real-time 3D reconstruction\n • pre-scanning for movie animation\n • image stabilization\n • immersive experiences\n • and more...\n Comprised of innovation leaders of computer vision, this panel will cover recent developments, as well as how CV will be enabled and used in 2016 and beyond.",
"title": ""
},
{
"docid": "0f4ac688367d3ea43643472b7d75ffc9",
"text": "Many non-photorealistic rendering techniques exist to produce artistic ef fe ts from given images. Inspired by various artists, interesting effects can be produced b y using a minimal rendering, where the minimum refers to the number of tones as well as the nu mber and complexity of the primitives used for rendering. Our method is based on va rious computer vision techniques, and uses a combination of refined lines and blocks (po tentially simplified), as well as a small number of tones, to produce abstracted artistic re ndering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawing s, and use semantic information to improve renderings for faces. By changing some intuitive par ameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.",
"title": ""
},
{
"docid": "bf9ef1e84275ac77be0fd71334dde1ab",
"text": "The development of summarization research has been significantly hampered by the costly acquisition of reference summaries. This paper proposes an effective way to automatically collect large scales of news-related multi-document summaries with reference to social media’s reactions. We utilize two types of social labels in tweets, i.e., hashtags and hyper-links. Hashtags are used to cluster documents into different topic sets. Also, a tweet with a hyper-link often highlights certain key points of the corresponding document. We synthesize a linked document cluster to form a reference summary which can cover most key points. To this aim, we adopt the ROUGE metrics to measure the coverage ratio, and develop an Integer Linear Programming solution to discover the sentence set reaching the upper bound of ROUGE. Since we allow summary sentences to be selected from both documents and highquality tweets, the generated reference summaries could be abstractive. Both informativeness and readability of the collected summaries are verified by manual judgment. In addition, we train a Support Vector Regression summarizer on DUC generic multi-document summarization benchmarks. With the collected data as extra training resource, the performance of the summarizer improves a lot on all the test sets. We release this dataset for further research.",
"title": ""
},
{
"docid": "ab430a12088341758de5cde60ef26070",
"text": "BACKGROUND\nThe nonselective 5-HT(4) receptor agonists, cisapride and tegaserod have been associated with cardiovascular adverse events (AEs).\n\n\nAIM\nTo perform a systematic review of the safety profile, particularly cardiovascular, of 5-HT(4) agonists developed for gastrointestinal disorders, and a nonsystematic summary of their pharmacology and clinical efficacy.\n\n\nMETHODS\nArticles reporting data on cisapride, clebopride, prucalopride, mosapride, renzapride, tegaserod, TD-5108 (velusetrag) and ATI-7505 (naronapride) were identified through a systematic search of the Cochrane Library, Medline, Embase and Toxfile. Abstracts from UEGW 2006-2008 and DDW 2008-2010 were searched for these drug names, and pharmaceutical companies approached to provide unpublished data.\n\n\nRESULTS\nRetrieved articles on pharmacokinetics, human pharmacodynamics and clinical data with these 5-HT(4) agonists, are reviewed and summarised nonsystematically. Articles relating to cardiac safety and tolerability of these agents, including any relevant case reports, are reported systematically. Two nonselective 5-HT(4) agonists had reports of cardiovascular AEs: cisapride (QT prolongation) and tegaserod (ischaemia). Interactions with, respectively, the hERG cardiac potassium channel and 5-HT(1) receptor subtypes have been suggested to account for these effects. No cardiovascular safety concerns were reported for the newer, selective 5-HT(4) agonists prucalopride, velusetrag, naronapride, or for nonselective 5-HT(4) agonists with no hERG or 5-HT(1) affinity (renzapride, clebopride, mosapride).\n\n\nCONCLUSIONS\n5-HT(4) agonists for GI disorders differ in chemical structure and selectivity for 5-HT(4) receptors. Selectivity for 5-HT(4) over non-5-HT(4) receptors may influence the agent's safety and overall risk-benefit profile. Based on available evidence, highly selective 5-HT(4) agonists may offer improved safety to treat patients with impaired GI motility.",
"title": ""
},
{
"docid": "3bba595fa3a3cd42ce9b3ca052930d55",
"text": "After about a decade of intense research, spurred by both economic and operational considerations, and by environmental concerns, energy efficiency has now become a key pillar in the design of communication networks. With the advent of the fifth generation of wireless networks, with millions more base stations and billions of connected devices, the need for energy-efficient system design and operation will be even more compelling. This survey provides an overview of energy-efficient wireless communications, reviews seminal and recent contribution to the state-of-the-art, including the papers published in this special issue, and discusses the most relevant research challenges to be addressed in the future.",
"title": ""
},
{
"docid": "f3f3aec72786299f3ef885e4b862ca2b",
"text": "This paper presents the method that underlies our submission to the untrimmed video classification task of ActivityNet Challenge 2016. We follow the basic pipeline of temporal segment networks [ 16] and further raise the performance via a number of other techniques. Specifically, we use the latest deep model architecture, e.g., ResNet and Inception V3, and introduce new aggregation schemes (top-k and attention-weighted pooling). Additionally, we incorp rate the audio as a complementary channel, extracting relevant information via a CNN applied to the spectrograms. With these techniques, we derive an ensemble of deep models, which, together, attains a high classification accurac y (mAP93.23%) on the testing set and secured the first place in the challenge.",
"title": ""
},
{
"docid": "de7eb0735d6cd2fb13a00251d89b0fbc",
"text": "Classical conditioning, the simplest form of associative learning, is one of the most studied paradigms in behavioural psychology. Since the formal description of classical conditioning by Pavlov, lesion studies in animals have identified a number of anatomical structures involved in, and necessary for, classical conditioning. In the 1980s, with the advent of functional brain imaging techniques, particularly positron emission tomography (PET), it has been possible to study the functional anatomy of classical conditioning in humans. The development of functional magnetic resonance imaging (fMRI)--in particular single-trial or event-related fMRI--has now considerably advanced the potential of neuroimaging for the study of this form of learning. Recent event-related fMRI and PET studies are adding crucial data to the current discussion about the putative role of the amygdala in classical fear conditioning in humans.",
"title": ""
},
{
"docid": "24d2ad857f66f9bd32405bf1de7cadcf",
"text": "Evidence linked exposure to internet appearance-related sites to weight dissatisfaction, drive for thinness, increased internalisation of thin ideals, and body surveillance with Facebook users having significantly higher scores on body image concern measures (Tiggemann & Miller, 2010, Tiggemann & Slater, 2013). This study explored the impacts of social media on the body image of young adults aged 18-25 years. A total of 300 students from a Victorian university completed a survey including questions about the use of social media and 2 measures of body image: The Objectified Body Consciousness and both female and male version of the Sociocultural Attitudes towards Appearance Questionnaire 3. Results showed participants mostly used Facebook to keep in touch with friends and family. While using social media, they felt pressure to lose weight, look more attractive or muscular, and to change their appearance. Correlations were found between Instagram and concerns with body image and body surveillance, between Pinterest and body shame and appearance control beliefs and between Facebook and Pinterest and perceived pressure. Findings contribute to the growing body of knowledge about the influence of social media on body image and new information for the development of social media literacy programs addressing negative body image.",
"title": ""
},
{
"docid": "7cc991d640c4626c8b14ec1e2d497cac",
"text": "The increasing use of mobile devices has triggered the development of location based services (LBS). By providing location information to LBS, mobile users can enjoy variety of useful applications utilizing location information, but might suffer the troubles of private information leakage. Location information of mobile users needs to be kept secret while maintaining utility to achieve desirable service quality. Existing location privacy enhancing techniques based on K-anonymity and Hilbertcurve cloaking area generation showed advantages in privacy protection and service quality but disadvantages due to the generation of large cloaking areas that makes query processing and communication less effective. In this paper we propose a novel location privacy preserving scheme that leverages some differential privacy based notions and mechanisms to publish the optimal size cloaking areas from multiple rotated and shifted versions of Hilbert curve. With experimental results, we show that our scheme significantly reduces the average size of cloaking areas compared to previous Hilbert curve method. We also show how to quantify adversary's ability to perform an inference attack on user location data and how to limit adversary's success rate under a designed threshold.",
"title": ""
},
{
"docid": "ed35d80dd3af3acbe75e5122b2378756",
"text": "We present a system whereby the human voice may specify continuous control signals to manipulate a simulated 2D robotic arm and a real 3D robotic arm. Our goal is to move towards making accessible the manipulation of everyday objects to individuals with motor impairments. Using our system, we performed several studies using control style variants for both the 2D and 3D arms. Results show that it is indeed possible for a user to learn to effectively manipulate real-world objects with a robotic arm using only non-verbal voice as a control mechanism. Our results provide strong evidence that the further development of non-verbal voice controlled robotics and prosthetic limbs will be successful.",
"title": ""
},
{
"docid": "5ff7a82ec704c8fb5c1aa975aec0507c",
"text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.",
"title": ""
},
{
"docid": "10b65d46a5a9dcc8b049804866122b68",
"text": "We present a novel bioinspired dynamic climbing robot, with a recursive name: ROCR is an oscillating climbing robot. ROCR, pronounced “Rocker,” is a pendular, two-link, serial-chain robot that utilizes alternating handholds and an actuated tail to propel itself upward in a climbing style based on observation of human climbers and brachiating gibbons. ROCR's bioinspired pendular climbing strategy is simple and efficient. In fact, to our knowledge, ROCR is also the first climbing robot that is designed for efficiency. ROCR is a lightweight, flexible, and self-contained robot. This robot is intended for autonomous surveillance and inspection on sheer vertical surfaces. Potential locomotion gait strategies were investigated in simulation using Working Model 2D, and were evaluated on a basis of climbing rate, energy efficiency, and whether stable open-loop climbing was achieved. We identified that the most effective climbing resulted from sinusoidal tail motions. The addition of a body stabilizer reduced the robot's out-of-plane motion at higher frequencies and promoted more reliable gripper attachment. Experimental measurements of the robot showed climbing efficiencies of over 20% and a specific resistance of 5.0, while consuming 27 J/m at a maximum climbing speed of 15.7 cm/s (0.34 body lengths/s) - setting a first benchmark for efficiency of climbing robots. Future work will include further design optimization, integration of more complex gripping mechanisms, and investigating more complex control strategies.",
"title": ""
},
{
"docid": "ccfd4e40d7d0225c63869170e0851c2d",
"text": "...........................................................................................................................7 Chapter 1: Introduction ................................................................................................8 1.1. Motivation ..................................................................................................................9 1.2. Problem Hypothesis .................................................................................................11 1.3. Contributions............................................................................................................11 1.4. Thesis Outline ..........................................................................................................11 Chapter 2 : Background ...............................................................................................13 2.1. Social Networks .......................................................................................................13 2.2. Implicit versus Explicit Social Network Formations ...............................................14 2.3. Social Network Analysis versus Dynamic Network Analysis .................................15 2.4. Related Research ......................................................................................................16 2.5. Social Bookmarking and Digg Network ..................................................................18 Chapter 3: User Characterization ..............................................................................21 3.1. Network Description and Statistics ..........................................................................21 3.2. Degree Distribution ..................................................................................................22 3.3. Egonet Analysis .......................................................................................................23 3.4. User Membership Analysis ......................................................................................27 Chapter 4: Comparative Analysis of Network Formations .....................................31 4.",
"title": ""
},
{
"docid": "03bd81d3c50b81c6cfbae847aa5611f6",
"text": "We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures. We address this challenge by combining 3D tracking with 3D pose detection. This combination not only automates the whole process but also significantly improves the robustness and accuracy of the system. Our whole algorithm is highly parallel and is therefore easily implemented on a GPU. We demonstrate the power of our approach by capturing a wide range of human movements in real time and achieve state-of-the-art accuracy in our comparison against alternative systems such as Kinect [2012].",
"title": ""
},
{
"docid": "a4059636cbdc058e3f3a7621155c68b7",
"text": "A <italic>K</italic>-d tree represents a set of <italic>N</italic> points in <italic>K</italic>-dimensional space. Operations on a <italic>semidynamic</italic> tree may delete and undelete points, but may not insert new points. This paper shows that several operations that require <italic>&Ogr;</italic>(log <italic>N</italic>) expected time in general <italic>K</italic>-d trees may be performed in constant expected time in semidynamic trees. These operations include deletion, undeletion, nearest neighbor searching, and fixed-radius near neighbor searching (the running times of the first two are proved, while the last two are supported by experiments and heuristic arguments). Other new techniques can also be applied to general <italic>K</italic>-d trees: simple sampling reduces the time to build a tree from <italic>&Ogr;</italic>(<italic>KN</italic> log <italic>N</italic>) to <italic>&Ogr;</italic>(<italic>KN</italic> + <italic>N</italic> log <italic>N</italic>), and more advanced sampling builds a robust tree in the same time. The methods are straightforward to implement, and lead to a data structure that is significantly faster and less vulnerable to pathological inputs than ordinary <italic>K</italic>-d trees.",
"title": ""
},
{
"docid": "f11ff738aaf7a528302e6ec5ed99c43c",
"text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.",
"title": ""
}
] |
scidocsrr
|
6fd935251f9ba9b4dabc3f3899be839f
|
A multivariate regression approach to association analysis of a quantitative trait network
|
[
{
"docid": "4ead23d450994648b3c6bbb91e25fd32",
"text": "Much of a cell's activity is organized as a network of interacting modules: sets of genes coregulated to respond to different conditions. We present a probabilistic method for identifying regulatory modules from gene expression data. Our procedure identifies modules of coregulated genes, their regulators and the conditions under which regulation occurs, generating testable hypotheses in the form 'regulator X regulates module Y under conditions W'. We applied the method to a Saccharomyces cerevisiae expression data set, showing its ability to identify functionally coherent modules and their correct regulators. We present microarray experiments supporting three novel predictions, suggesting regulatory roles for previously uncharacterized proteins.",
"title": ""
}
] |
[
{
"docid": "94a6106cac2ecd3362c81fc6fd93df28",
"text": "We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as contextfree languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.",
"title": ""
},
{
"docid": "54ab143dc18413c58c20612dbae142eb",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "278ec426c504828f1f13e1cf1ce50e39",
"text": "Information retrieval, IR, is the science of extracting information from documents. It can be viewed in a number of ways: logical, probabilistic and vector space models are some of the most important. In this book, the author, one of the leading researchers in the area, shows how these three views can be combined in one mathematical framework, the very one used to formulate the general principles of quantum mechanics. Using this framework, van Rijsbergen presents a new theory for the foundations of IR, in particular a new theory of measurement. He shows how a document can be represented as a vector in Hilbert space, and the document’s relevance by an Hermitian operator. All the usual quantum-mechanical notions, such as uncertainty, superposition and observable, have their IR-theoretic analogues. But the approach is more than just analogy: the standard theorems can be applied to address problems in IR, such as pseudo-relevance feedback, relevance feedback and ostensive retrieval. The relation with quantum computing is also examined. To help keep the book self-contained, appendices with background material on physics and mathematics are included, and each chapter ends with some suggestions for further reading. This is an important book for all those working in IR, AI and natural language processing.",
"title": ""
},
{
"docid": "f14daee1ddf6bbf4f3d41fe6ef5fcdb6",
"text": "A characteristic that will distinguish successful manufacturing enterprises of the next millennium is agility: the ability to respond quickly, proactively, and aggressively to unpredictable change. The use of extended virtual enterprise Supply Chains (SC) to achieve agility is becoming increasingly prevalent. A key problem in constructing effective SCs is the lack of methods and tools to support the integration of processes and systems into shared SC processes and systems. This paper describes the architecture and concept of operation of the Supply Chain Process Design Toolkit (SCPDT), an integrated software system that addresses the challenge of seamless and efficient integration. The SCPDT enables the analysis and design of Supply Chain (SC) processes. SCPDT facilitates key SC process engineering tasks including 1) AS-IS process base-lining and assessment, 2) collaborative TO-BE process requirements definition, 3) SC process integration and harmonization, 4) TO-BE process design trade-off analysis, and 5) TO-BE process planning and implementation.",
"title": ""
},
{
"docid": "37ead2d23df0af074800e7d2220ef950",
"text": "This study aimed to better understand the psychological mechanisms, referred to in the job demands–resources model as the energetic and motivational processes, that can explain relationships between job demands (role overload and ambiguity), job resources (job control and social support), and burnout (emotional exhaustion, depersonalization, and personal accomplishment). Drawing on self-determination theory, we examined whether psychological resources (perceived autonomy, competence, and relatedness) act as specific mediators between particular job demands and burnout as well as between job resources and burnout. Participants were 356 school board employees. Results of the structural equation analyses provide support for our hypothesized model, which proposes that certain job demands and resources are involved in both the energetic and motivational processes—given their relationships with psychological resources—and that they distinctively predict burnout components. Implications for burnout research and management practices are discussed.",
"title": ""
},
{
"docid": "d40e565a2ed22af998ae60f670210f57",
"text": "Research on human infants has begun to shed light on early-develpping processes for segmenting perceptual arrays into objects. Infants appear to perceive objects by analyzing three-dimensional surface arrangements and motions. Their perception does not accord with a general tendency to maximize figural goodness or to attend-to nonaccidental geometric relations in visual arrays. Object perception does accord with principles governing the motions of material bodies: Infants divide perceptual arrays into units that move as connected wholes, that move separately from one another, that tend to maintain their size and shape over motion, and that tend to act upon each other only on contact. These findings suggest that o general representation of object unity and boundaries is interposed between representations of surfaces and representations of obiects of familiar kinds. The processes that construct this representation may be related to processes of physical reasoning. This article is animated by two proposals about perception and perceptual development. One proposal is substantive: In situations where perception develops through experience, but without instruction or deliberate reflection , development tends to enrich perceptual abilities but not to change them fundamentally. The second proposal is methodological: In the above situations , studies of the origins and early development of perception can shed light on perception in its mature state. These proposals will arise from a discussion of the early development of one perceptual ability: the ability to organize arrays of surfaces into unitary, bounded, and persisting objects. PERCEIVING OBJECTS In recent years, my colleagues and I have been studying young infants' perception of objects in complex displays in which objects are adjacent to other objects, objects are partly hidden behind other objects, of objects move fully",
"title": ""
},
{
"docid": "8fd38494bb2e4ffcefc203c88d9605e7",
"text": "The aim of the present study is to provide a detailed macroscopic mapping of the palatal and tuberal blood supply applying anatomical methods and studying specific anastomoses to bridge the gap between basic structural and empirical clinical knowledge. Ten cadavers (three dentate, seven edentulous) have been prepared for this study in the Department of Anatomy, Semmelweis University, Budapest, Hungary, and in the Department of Anatomy of the Medical University of Graz. All cadavers were fixed with Thiel’s solution. For the macroscopic analysis of the blood vessels supplying the palatal mucosa, corrosion casting in four cadavers and latex milk injection in other six cadavers were performed. We recorded major- and secondary branches of the greater palatine artery (GPA) and its relation to the palatine spine, different anastomoses with the nasopalatine artery (NPA), and lesser palatal artery (LPA) as well as with contralateral branches of the GPA. Penetrating intraosseous branches at the premolar-canine area were also detected. In edentulous patients, the GPA developed a curvy pathway in the premolar area. The blood supply around the maxillary tuberosity was also presented. The combination of different staining methods has shed light to findings with relevance to palatal blood supply, offering a powerful tool for the design and execution of surgical interventions involving the hard palate. The present study provides clinicians with a good basis to understand the anatomical background of palatal and tuberal blood supply. This might enable clinicians to design optimized incision- and flap designs. As a result, the risk of intraoperative bleeding and postoperative wound healing complications related to impaired blood supply can be minimized.",
"title": ""
},
{
"docid": "efc4af51a92facff03e1009b039139fe",
"text": "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.",
"title": ""
},
{
"docid": "bcb82adbb207ce6dc41b9fdee5425472",
"text": "Non-guaranteed display advertising (NGD) is a multi-billion dollar business that has been growing rapidly in recent years. Advertisers in NGD sell a large portion of their ad campaigns using performance dependent pricing models such as cost-per-click (CPC) and cost-per-action (CPA). An accurate prediction of the probability that users click on ads is a crucial task in NGD advertising because this value is required to compute the expected revenue. State-of-the-art prediction algorithms rely heavily on historical information collected for advertisers, users and publishers. Click prediction of new ads in the system is a challenging task due to the lack of such historical data. The objective of this paper is to mitigate this problem by integrating multimedia features extracted from display ads into the click prediction models. Multimedia features can help us capture the attractiveness of the ads with similar contents or aesthetics. In this paper we evaluate the use of numerous multimedia features (in addition to commonly used user, advertiser and publisher features) for the purposes of improving click prediction in ads with no history. We provide analytical results generated over billions of samples and demonstrate that adding multimedia features can significantly improve the accuracy of click prediction for new ads, compared to a state-of-the-art baseline model.",
"title": ""
},
{
"docid": "4bd8d8b7318db4e10c2c23c281d0cdca",
"text": "Industrial fog computing deploys various industrial services, such as automatic monitoring/control and imminent failure detection, at the fog nodes (FNs) to improve the performance of industrial systems. Much effort has been made in the literature on the design of fog network architecture and computation offloading. This paper studies an equally important but much less investigated problem of service hosting where FNs are adaptively configured to host services for sensor nodes (SNs), thereby enabling corresponding tasks to be executed by the FNs. The problem of service hosting emerges because of the limited computational and storage resources at FNs, which limit the number of different types of services that can be hosted by an FN at the same time. Considering the variability of service demand in both temporal and spatial dimensions, when, where, and which services to host have to be judiciously decided to maximize the utility of the fog computing network. Our proposed fog configuration strategies are tailored to battery-powered FNs. The limited battery capacity of FNs creates a long-term energy budget constraint that significantly complicates the fog configuration problem as it introduces temporal coupling of decision making across the timeline. To address all these challenges, we propose an online distributed algorithm, called adaptive fog configuration (AFC), based on Lyapunov optimization and parallel Gibbs sampling. AFC jointly optimizes service hosting and task admission decisions, requiring only currently available system information while guaranteeing close-to-optimal performance compared to an oracle algorithm with full future information.",
"title": ""
},
{
"docid": "ff4a3a0c5288c69023c0d97a32ee5d6a",
"text": "1 We present a software tool for simulations of flow and multi‐component solute transport in 2 two and three‐dimensional domains in combination with comprehensive intra‐phase and 3 inter‐phase geochemistry. The software uses IPhreeqc as a reaction engine to the multi‐ 4 purpose, multidimensional finite element solver COMSOL Multiphysics® for flow and 5 transport simulations. Here we used COMSOL to solve Richards' equation for aqueous phase 6 flow in variably saturated porous media. The coupling procedure presented is in principle 7 applicable to any simulation of aqueous phase flow and solute transport in COMSOL. The 8 coupling with IPhreeqc gives major advantages over COMSOL's built‐in reaction capabilities, 9 i.e., the soil solution is speciated from its element composition according to thermodynamic 10 mass action equations with ion activity corrections. State‐of‐the‐art adsorption models such 11 as surface complexation with diffuse double layer calculations are accessible. In addition, 12 IPhreeqc provides a framework to integrate user‐defined kinetic reactions with possible 13 dependencies on solution speciation (i.e., pH, saturation indices, and ion activities), allowing 14 for modelling of microbially mediated reactions. Extensive compilations of geochemical 15 reactions and their parameterization are accessible through associated databases. Research highlights 20 Coupling of COMSOL and PHREEQC facilitates simulation of variably saturated flow 21 with comprehensive geochemical reactions. 22 The use of finite elements allows for the simulation of flow and solute transport in 23 complex 2 and 3D domains. 24 Geochemical reactions are coupled via sequential non‐iterative operator splitting. 25 The software tool provides novel capabilities for investigations of contaminant 26 behaviour in variably saturated porous media and agricultural management. 27 3 Software requirements 28 COMSOL Multiphysics® including Earth Science Module (tested version: 3.5a; due to a 29 memory leak in versions 4.0 and 4.0a, these are not suitable for the presented coupling) 30 Price for single user academic license including Earth Science Module ca. 2000 € 31 Matlab® (tested versions: 7.9, 7.10) 32 Price for single user academic license including Parallel Computing Toolbox ca. 650 € 33 IPhreeqc (COM‐version, available free of charge at 34 The coupling files together with animations of the presented simulations are available at 36",
"title": ""
},
{
"docid": "815feed9cce2344872c50da6ffb77093",
"text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.",
"title": ""
},
{
"docid": "7c0586335facd8388814f863e19e3d06",
"text": "OBJECTIVE\nWe reviewed randomized controlled trials of complementary and alternative medicine (CAM) treatments for depression, anxiety, and sleep disturbance in nondemented older adults.\n\n\nDATA SOURCES\nWe searched PubMed (1966-September 2006) and PsycINFO (1984-September 2006) databases using combinations of terms including depression, anxiety, and sleep; older adult/elderly; randomized controlled trial; and a list of 56 terms related to CAM.\n\n\nSTUDY SELECTION\nOf the 855 studies identified by database searches, 29 met our inclusion criteria: sample size >or= 30, treatment duration >or= 2 weeks, and publication in English. Four additional articles from manual bibliography searches met inclusion criteria, totaling 33 studies.\n\n\nDATA EXTRACTION\nWe reviewed identified articles for methodological quality using a modified Scale for Assessing Scientific Quality of Investigations (SASQI). We categorized a study as positive if the CAM therapy proved significantly more effective than an inactive control (or as effective as active control) on at least 1 primary psychological outcome. Positive and negative studies were compared on the following characteristics: CAM treatment category, symptom(s) assessed, country where the study was conducted, sample size, treatment duration, and mean sample age.\n\n\nDATA SYNTHESIS\n67% of the 33 studies reviewed were positive. Positive studies had lower SASQI scores for methodology than negative studies. Mind-body and body-based therapies had somewhat higher rates of positive results than energy- or biologically-based therapies.\n\n\nCONCLUSIONS\nMost studies had substantial methodological limitations. A few well-conducted studies suggested therapeutic potential for certain CAM interventions in older adults (e.g., mind-body interventions for sleep disturbances and acupressure for sleep and anxiety). More rigorous research is needed, and suggestions for future research are summarized.",
"title": ""
},
{
"docid": "26bc2aa9b371e183500e9c979c1fff65",
"text": "Complex regional pain syndrome (CRPS) is clinically characterized by pain, abnormal regulation of blood flow and sweating, edema of skin and subcutaneous tissues, trophic changes of skin, appendages of skin and subcutaneous tissues, and active and passive movement disorders. It is classified into type I (previously reflex sympathetic dystrophy) and type II (previously causalgia). Based on multiple evidence from clinical observations, experimentation on humans, and experimentation on animals, the hypothesis has been put forward that CRPS is primarily a disease of the central nervous system. CRPS patients exhibit changes which occur in somatosensory systems processing noxious, tactile and thermal information, in sympathetic systems innervating skin (blood vessels, sweat glands), and in the somatomotor system. This indicates that the central representations of these systems are changed and data show that CRPS, in particular type I, is a systemic disease involving these neuronal systems. This way of looking at CRPS shifts the attention away from interpreting the syndrome conceptually in a narrow manner and to reduce it to one system or to one mechanism only, e. g., to sympathetic-afferent coupling. It will further our understanding why CRPS type I may develop after a trivial trauma, after a trauma being remote from the affected extremity exhibiting CRPS, and possibly after immobilization of an extremity. It will explain why, in CRPS patients with sympathetically maintained pain, a few temporary blocks of the sympathetic innervation of the affected extremity sometimes lead to long-lasting (even permanent) pain relief and to resolution of the other changes observed in CRPS. This changed view will bring about a diagnostic reclassification and redefinition of CRPS and will have bearings on the therapeutic approaches. Finally it will shift the focus of research efforts.",
"title": ""
},
{
"docid": "4c39b9a4e9822fb6d0a000c55d71faa5",
"text": "Suicidal decapitation is seldom encountered in forensic medicine practice. This study reports the analysis of a suicide committed by a 31-year-old man with a self-fabricated guillotine. The construction of the guillotine was very interesting and sophisticated. The guillotine-like blade with additional weight was placed in a large metal frame. The movement of the blade was controlled by the frame rails. The steel blade was triggered by a tensioned rubber band after releasing the safety catch. The cause of death was immediate exsanguination after complete severance of the neck. The suicide motive was most likely emotional distress after the death of his father. In medico-legal literature, there has been only one similar case of suicidal complete decapitation by a guillotine described.",
"title": ""
},
{
"docid": "0e9e6c1f21432df9dfac2e7205105d46",
"text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.",
"title": ""
},
{
"docid": "893f631e0a0ca9851097bc54a14b1ea8",
"text": "Thirteen subjects detected noise burst targets presented in a white noise background at a mean rate of 10/min. Within each session, local error rate, defined as the fraction of targets detected in a 33 sec moving window, fluctuated widely. Mean coherence between slow mean variations in EEG power and in local error rate was computed for each EEG frequency and performance cycle length, and was shown by a Monte Carlo procedure to be significant for many EEG frequencies and performance cycle lengths, particularly in 4 well-defined EEG frequency bands, near 3, 10, 13, and 19 Hz, and at higher frequencies in two cycle length ranges, one longer than 4 min and the other near 90 sec/cycle. The coherence phase plane contained a prominent phase reversal near 6 Hz. Sorting individual spectra by local error rate confirmed the close relation between performance and EEG power and its relative within-subject stability. These results show that attempts to maintain alertness in an auditory detection task result in concurrent minute and multi-minute scale fluctuations in performance and the EEG power spectrum.",
"title": ""
},
{
"docid": "0326178ab59983db61eb5dfe0e2b25a4",
"text": "Article history: Received 9 September 2008 Received in revised form 16 April 2009 Accepted 14 May 2009",
"title": ""
},
{
"docid": "67d25f3ac24786079acc868492000842",
"text": "Recent developments in the area of deep learning have been proved extremely beneficial for several natural language processing tasks, such as sentiment analysis, question answering, and machine translation. In this paper we exploit such advances by tailoring the ontology learning problem as a transductive reasoning task that learns to convert knowledge from natural language to a logic-based specification. More precisely, using a sample of definitory sentences generated starting by a synthetic grammar, we trained Recurrent Neural Network (RNN) based architectures to extract OWL formulae from text. In addition to the low feature engineering costs, our system shows good generalisation capabilities over the lexicon and the syntactic structure. The encouraging results obtained in the paper provide a first evidence of the potential of deep learning techniques towards long term ontology learning challenges such as improving domain independence, reducing engineering costs, and dealing with variable language forms.",
"title": ""
},
{
"docid": "c875bfed84555d5a32a32e39a703e703",
"text": "For mmWave directional air interface expected in 5G communications, current discontinuous reception (DRX) mechanisms would be inadequate. Beam searching, for alignment of beams at User Equipment (UE) and 5G base station (NR nodeB), cannot be avoided in directional communication. We propose to exploit dual connectivity of UE, to both LTE eNB and NR nodeB, for effective 5G DRX. We present a novel hybrid directional-DRX (HD-DRX) mechanism, where beam searching is performed only when necessary. Probabilistic estimate of power saving and delay is conducted by capturing various states of UE through a semi-Markov process. Our numerical analysis achieves 13% improvement in power saving for HD-DRX compared with directional-DRX. We validate our numerical analysis with simulation studies on real traffic trace.",
"title": ""
}
] |
scidocsrr
|
9211ef9fcaf2db71d26a4a97d4bd7d49
|
Exploring Alternatives during Requirements Analysis
|
[
{
"docid": "221cd488d735c194e07722b1d9b3ee2a",
"text": "HURTS HELPS HURTS HELPS Data Type [Target System] Implicit HELPS HURTS HURTS BREAKS ? Invocation [Target System] Pipe & HELPS BREAKS BREAKS HELPS Filter WHEN [Target condl System] condl: size of data in domain is huge Figure 13.4. A generic Correlation Catalogue, based on [Garlan93]. Figure 13.3 shows a method which decomposes the topic on process, including algorithms as used in [Garlan93]. Decomposition methods for processes are also described in [Nixon93, 94a, 97a], drawing on implementations of processes [Chung84, 88]. These two method definitions are unparameterized. A fuller catalogue would include parameterized definitions too. Operationalization methods, which organize knowledge about satisficing NFR softgoals, are embedded in architectural designs when selected. For example, an ImplicitFunctionlnvocationRegime (based on [Garlan93]' architecture 3) can be used to hide implementation details in order to make an architectural 358 NON-FUNCTIONAL REQUIREMENTS IN SOFTWARE ENGINEERING design more extensible, thus contributing to one of the softgoals in the above decomposition. Argumentation methods and templates are used to organize principles and guidelines for making design rationale for or against design decisions (Cf. [J. Lee91]).",
"title": ""
}
] |
[
{
"docid": "3dbfc7699790f642eba296188ded0b94",
"text": "The stream of words produced by Automatic Speech Recognition (ASR) systems is devoid of any punctuations and formatting. Most natural language processing applications usually expect segmented and well-formatted texts as input, which is not available in ASR output. This paper proposes a novel technique of jointly modelling multiple correlated tasks such as punctuation and capitalization using bidirectional recurrent neural networks, which leads to improved performance for each of these tasks. This method can be extended for joint modelling of any other correlated multiple sequence labelling tasks.",
"title": ""
},
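A minimal PyTorch sketch of the joint sequence-labelling idea described in the passage above: a shared bidirectional LSTM encoder feeds two task-specific heads (punctuation and capitalization), trained with a summed loss. The vocabulary size, label sets, and batch are hypothetical and this is not the paper's implementation.

```python
import torch
import torch.nn as nn

class JointPunctCapModel(nn.Module):
    """Shared BiLSTM encoder with separate heads for two correlated tasks."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128,
                 n_punct_labels=4, n_cap_labels=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.punct_head = nn.Linear(2 * hidden, n_punct_labels)
        self.cap_head = nn.Linear(2 * hidden, n_cap_labels)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.emb(token_ids))   # (batch, seq, 2*hidden)
        return self.punct_head(h), self.cap_head(h)

# Joint training step: sum the per-task cross-entropy losses.
model = JointPunctCapModel()
tokens = torch.randint(0, 10000, (8, 20))          # hypothetical batch
punct_gold = torch.randint(0, 4, (8, 20))
cap_gold = torch.randint(0, 2, (8, 20))
punct_logits, cap_logits = model(tokens)
loss = (nn.functional.cross_entropy(punct_logits.reshape(-1, 4), punct_gold.reshape(-1))
        + nn.functional.cross_entropy(cap_logits.reshape(-1, 2), cap_gold.reshape(-1)))
loss.backward()
```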
{
"docid": "5f63f65789e46b2eb9b9e853aba9bd72",
"text": "The cost of rare earth (RE) permanent magnet along with the associated supply volatility have intensified the interests for machine topologies which eliminate or reduce the RE magnets usage. This paper presents one such design solution, the separately excited synchronous motor (SESM) which eliminates RE magnets, however, but does not sacrifice the peak torque and power of the motor. The major drawback of such motors is the necessity of brushes to supply the field current. This is especially a challenge for hybrid or electric vehicle applications where the machine is actively cooled with oil inside the transmission. Sealing the brushes from the oil is challenging and would limit the application of such motor inside a transmission. To overcome this problem, a contactless rotary transformer is designed and implemented for the rotor field excitation. The designed motor is built and tested. The test data show that the designed motor outperforms an equivalent interior permanent magnet (IPM) motor, which is optimized for a hybrid application, for both peak torque and power. Better drive system efficiency is measured at high speed compared to the IPM machine, while the later outperforms (for efficiency) the SESM at low and medium speed range.",
"title": ""
},
{
"docid": "b41a64f09b640e8c20c602878abf1996",
"text": "Electronic Health Records (EHRs) are entirely controlled by hospitals instead of patients, which complicates seeking medical advices from different hospitals. Patients face a critical need to focus on the details of their own healthcare and restore management of their own medical data. The rapid development of blockchain technology promotes population healthcare, including medical records as well as patient-related data. This technology provides patients with comprehensive, immutable records, and access to EHRs free from service providers and treatment websites. In this paper, to guarantee the validity of EHRs encapsulated in blockchain, we present an attribute-based signature scheme with multiple authorities, in which a patient endorses a message according to the attribute while disclosing no information other than the evidence that he has attested to it. Furthermore, there are multiple authorities without a trusted single or central one to generate and distribute public/private keys of the patient, which avoids the escrow problem and conforms to the mode of distributed data storage in the blockchain. By sharing the secret pseudorandom function seeds among authorities, this protocol resists collusion attack out of $N$ from $N-1$ corrupted authorities. Under the assumption of the computational bilinear Diffie-Hellman, we also formally demonstrate that, in terms of the unforgeability and perfect privacy of the attribute-signer, this attribute-based signature scheme is secure in the random oracle model. The comparison shows the efficiency and properties between the proposed method and methods proposed in other studies.",
"title": ""
},
{
"docid": "6ed26bfb94b03c262fe6173a5baaf8f7",
"text": "The main goal of a persuasion dialogue is to persuade, but agents may have a number of additional goals concerning the dialogue duration, how much and what information is shared or how aggressive the agent is. Several criteria have been proposed in the literature covering different aspects of what may matter to an agent, but it is not clear how to combine these criteria that are often incommensurable and partial. This paper is inspired by multi-attribute decision theory and considers argument selection as decision-making where multiple criteria matter. A meta-level argumentation system is proposed to argue about what argument an agent should select in a given persuasion dialogue. The criteria and sub-criteria that matter to an agent are structured hierarchically into a value tree and meta-level argument schemes are formalized that use a value tree to justify what argument the agent should select. In this way, incommensurable and partial criteria can be combined.",
"title": ""
},
{
"docid": "01ddd5cf694df46a69341549f70529f8",
"text": "The RiskTrack project aims to help in the prevention of terrorism through the identification of online radicalisation. In line with the European Union priorities in this matter, this project has been designed to identify and tackle the indicators that raise a red flag about which individuals or communities are being radicalised and recruited to commit violent acts of terrorism. Therefore, the main goals of this project will be twofold: On the one hand, it is needed to identify the main features and characteristics that can be used to evaluate a risk situation, to do that a risk assessment methodology studying how to detect signs of radicalisation (e.g., use of language, behavioural patterns in social networks...) will be designed. On the other hand, these features will be tested and analysed using advanced data mining methods, knowledge representation (semantic and ontology engineering) and multilingual technologies. The innovative aspect of this project is to not offer just a methodology on risk assessment, but also a tool that is build based on this methodology, so that the prosecutors, judges, law enforcement and other actors can obtain a short term tangible results.",
"title": ""
},
{
"docid": "acddf623a4db29f60351f41eb8d0b113",
"text": "In an age where people are becoming increasing likely to trust information found through online media, journalists have begun employing techniques to lure readers to articles by using catchy headlines, called clickbait. These headlines entice the user into clicking through the article whilst not providing information relevant to the headline itself. Previous methods of detecting clickbait have explored techniques heavily dependent on feature engineering, with little experimentation having been tried with neural network architectures. We introduce a novel model combining recurrent neural networks, attention layers and image embeddings. Our model uses a combination of distributed word embeddings derived from unannotated corpora, character level embeddings calculated through Convolutional Neural Networks. These representations are passed through a bidirectional LSTM with an attention layer. The image embeddings are also learnt from large data using CNNs. Experimental results show that our model achieves an F1 score of 65.37% beating the previous benchmark of 55.21%.",
"title": ""
},
{
"docid": "2e77e22bb82c5546c2c14b83fe55fdce",
"text": "Metric learning is n fundamental problem in computer vision. Different features and algorithms may tackle a problem from different angles, and thus often provide complementary information. In this paper; we propose a fusion algorithm which outputs enhanced metrics by combining multiple given metrics (similarity measures). Unlike traditional co-training style algorithms where multi-view features or multiple data subsets are used for classification or regression, we focus on fusing multiple given metrics through diffusion process in an unsupervised way. Our algorithm has its particular advantage when the input similarity' matrices are the outputs from diverse algorithms. We provide both theoretical and empirical explanations to our method. Significant improvements over the state-of-the-art results have been observed on various benchmark datasets. For example, we have achieved 100% accuracy (no longer the bull's eye measure) on the MPEG-7 shape dataset. Our method has a wide range of applications in machine learning and computer vision.",
"title": ""
},
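The abstract above does not spell out the diffusion update, so the following is only a generic cross-diffusion sketch (in the spirit of similarity network fusion), written in Python with made-up matrices, to illustrate how several similarity measures can be fused iteratively.

```python
import numpy as np

def row_normalize(W):
    """Turn a similarity matrix into a row-stochastic transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def fuse_similarities(mats, n_iters=20):
    """Illustrative cross-diffusion: each metric is smoothed through the
    average of the other metrics, then the fused metric is the mean view."""
    P = [row_normalize(W) for W in mats]
    for _ in range(n_iters):
        avg = [sum(P[j] for j in range(len(P)) if j != i) / (len(P) - 1)
               for i in range(len(P))]
        P = [P[i] @ avg[i] @ P[i].T for i in range(len(P))]
        P = [row_normalize(W) for W in P]
    return sum(P) / len(P)

# Two hypothetical 5x5 similarity matrices from different algorithms.
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2
B = rng.random((5, 5)); B = (B + B.T) / 2
fused = fuse_similarities([A, B])
print(fused.shape)
```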
{
"docid": "e4a3065209c9dde50267358cbe6829b7",
"text": "OBJECTIVES\nWith the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents.\n\n\nMETHODS\nThis paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain.\n\n\nRESULTS\nText mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail.\n\n\nCONCLUSIONS\nText mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.",
"title": ""
},
{
"docid": "8fadd358e6e6d3258cf1cb4e7c48a75f",
"text": "In the past decades, significant progresses have been achieved in genetic engineering of nucleases. Among the genetically engineered nucleases, zinc finger nucleases, transcription activator-like (TAL) effector nucleases, and CRIPSPR/Cas9 system form a new field of gene editing. The gene editing efficiency or targeting effect and the off-target effect are the two major determinant factors in evaluating the usefulness of a new enzyme. Engineering strategies in improving these gene editing enzymes, particularly in minimizing their off-target effects, are the focus of this paper. Examples of using these genetically engineered enzymes in genome modification are discussed in order to better understand the requirement of engineering efforts in obtaining more powerful and useful gene editing enzymes. In addition, the identification of naturally existed anti-Cas proteins has been employed in minimizing off-target effects. Considering the future application in human gene therapy, optimization of these well recognized gene editing enzymes and exploration of more novel enzymes are both required. Before people find an ideal gene editing system having virtually no off-target effect, technologies used to screen and identify off-target effects are of importance in clinical trials employing gene therapy.",
"title": ""
},
{
"docid": "2c1de0ee482b3563c6b0b49bfdbbe508",
"text": "The paper summarizes our research in the area of unsupervised categorization of Wikipedia articles. As a practical result of our research, we present an application of spectral clustering algorithm used for grouping Wikipedia search results. The main contribution of the paper is a representation method for Wikipedia articles that has been based on combination of words and links and used for categoriation of search results in this repository. We evaluate the proposed approach with Primary Component projections and show, on the test data, how usage of cosine transformation to create combined representations influence data variability. On sample test datasets, we also show how combined representation improves the data separation that increases overall results of data categorization. To implement the system, we review the main spectral clustering methods and we test their usability for text categorization. We give a brief description of the system architecture that groups online Wikipedia articles retrieved with user-specified keywords. Using the system, we show how clustering increases information retrieval effectiveness for Wikipedia data repository.",
"title": ""
},
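A small, hypothetical scikit-learn sketch of the combined words-plus-links idea in the passage above: TF-IDF text features and binary link-incidence features are each length-normalized and concatenated before spectral clustering. The documents, link vectors, and feature construction are illustrative simplifications, not the paper's exact representation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import SpectralClustering

docs = ["zinc mining flotation process",
        "flotation control expert system",
        "graph spectral clustering method",
        "clustering of wikipedia articles"]
# Hypothetical binary link-incidence vectors (which of 3 hub pages each article links to).
links = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 1], [0, 1, 1]], dtype=float)

text_feats = TfidfVectorizer().fit_transform(docs).toarray()
combined = np.hstack([normalize(text_feats), normalize(links)])  # unit-length blocks

labels = SpectralClustering(n_clusters=2, affinity="rbf",
                            random_state=0).fit_predict(combined)
print(labels)
```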
{
"docid": "64d72ffe736831266acde9726d6d039f",
"text": "Recently, image caption which aims to generate a textual description for an image automatically has attracted researchers from various fields. Encouraging performance has been achieved by applying deep neural networks. Most of these works aim at generating a single caption which may be incomprehensive, especially for complex images. This paper proposes a topic-specific multi-caption generator, which infer topics from image first and then generate a variety of topic-specific captions, each of which depicts the image from a particular topic. We perform experiments on flickr8k, flickr30k and MSCOCO. The results show that the proposed model performs better than single-caption generator when generating topic-specific captions. The proposed model effectively generates diversity of captions under reasonable topics and they differ from each other in topic level.",
"title": ""
},
{
"docid": "2809ddea92ebdcaf0d832b2d793d6f02",
"text": "Many e-commerce websites use recommender systems to recommend items to users. When a user or item is new, the system may fail because not enough information is available on this user or item. Various solutions to this ‘cold-start problem’ have been proposed in the literature. However, many real-life e-commerce applications suffer from an aggravated, recurring version of cold-start even for known users or items, since many users visit the website rarely, change their interests over time, or exhibit different personas. This paper exposes the Continuous Cold Start (CoCoS) problem and its consequences for contentand context-based recommendation from the viewpoint of typical e-commerce applications, illustrated with examples from a major travel recommendation website, Booking.com. Terms: CoCoS: continuous cold start",
"title": ""
},
{
"docid": "260527c2cd3c7942ccd2d57a77d64780",
"text": "Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Datacentric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-toend routing schemes. We examine the impact of sourcedestination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases.",
"title": ""
},
{
"docid": "f90eebfcf87285efe711968c85f04d1b",
"text": "Fouling is generally defined as the accumulation and formation of unwanted materials on the surfaces of processing equipment, which can seriously deteriorate the capacity of the surface to transfer heat under the temperature difference conditions for which it was designed. Fouling of heat transfer surfaces is one of the most important problems in heat transfer equipment. Fouling is an extremely complex phenomenon. Fundamentally, fouling may be characterized as a combined, unsteady state, momentum, mass and heat transfer problem with chemical, solubility, corrosion and biological processes may also taking place. It has been described as the major unresolved problem in heat transfer1. According to many [1-3], fouling can occur on any fluid-solid surface and have other adverse effects besides reduction of heat transfer. It has been recognized as a nearly universal problem in design and operation, and it affects the operation of equipment in two ways: Firstly, the fouling layer has a low thermal conductivity. This increases the resistance to heat transfer and reduces the effectiveness of heat exchangers. Secondly, as deposition occurs, the cross sectional area is reduced, which causes an increase in pressure drop across the apparatus. In industry, fouling of heat transfer surfaces has always been a recognized phenomenon, although poorly understood. Fouling of heat transfer surfaces occurs in most chemical and process industries, including oil refineries, pulp and paper manufacturing, polymer and fiber production, desalination, food processing, dairy industries, power generation and energy recovery. By many, fouling is considered the single most unknown factor in the design of heat exchangers. This situation exists despite the wealth of operating experience accumulated over the years and accumulation of the fouling literature. This lake of understanding almost reflects the complex nature of the phenomena by which fouling occurs in industrial equipment. The wide range of the process streams and operating conditions present in industry tends to make most fouling situations unique, thus rendering a general analysis of the problem difficult. In general, the ability to transfer heat efficiently remains a central feature of many industrial processes. As a consequence much attention has been paid to improving the understanding of heat transfer mechanisms and the development of suitable correlations and techniques that may be applied to the design of heat exchangers. On the other hand relatively little consideration has been given to the problem of surface fouling in heat exchangers. The",
"title": ""
},
{
"docid": "bc3f64571ac833049e95994c675df26a",
"text": "Effective Poisson–Nernst–Planck (PNP) equations are derived for ion transport in charged porous media under forced convection (periodic flow in the frame of the mean velocity) by an asymptotic multiscale expansion with drift. The homogenized equations provide a modeling framework for engineering while also addressing fundamental questions about electrodiffusion in charged porous media, relating to electroneutrality, tortuosity, ambipolar diffusion, Einstein’s relation, and hydrodynamic dispersion. The microscopic setting is a two-component periodic composite consisting of a dilute electrolyte continuum (described by standard PNP equations) and a continuous dielectric matrix, which is impermeable to the ions and carries a given surface charge. As a first approximation for forced convection, the electrostatic body force on the fluid and electro-osmotic flows are neglected. Four new features arise in the upscaled equations: (i) the effective ionic diffusivities and mobilities become tensors, related to the microstructure; (ii) the effective permittivity is also a tensor, depending on the electrolyte/matrix permittivity ratio and the ratio of the Debye screening length to the macroscopic length of the porous medium; (iii) the microscopic convection leads to a diffusion-dispersion correction in the effective diffusion tensor; and (iv) the surface charge per volume appears as a continuous “background charge density,” as in classical membrane models. The coefficient tensors in the upscaled PNP equations can be calculated from periodic reference cell problems. For an insulating solid matrix, all gradients are corrected by the same tensor, and the Einstein relation holds at the macroscopic scale, which is not generally the case for a polarizable matrix, unless the permittivity and electric field are suitably defined. In the limit of thin double layers, Poisson’s equation is replaced by macroscopic electroneutrality (balancing ionic and surface charges). The general form of the macroscopic PNP equations may also hold for concentrated solution theories, based on the local-density and mean-field approximations. These results have broad applicability to ion transport in porous electrodes, separators, membranes, ion-exchange resins, soils, porous rocks, and biological tissues.",
"title": ""
},
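For reference, a sketch in standard notation (not copied from the paper) of the microscopic dilute-solution Poisson–Nernst–Planck system that serves as the starting point for the homogenization described in the passage above; symbols follow common usage and may differ from the paper's exact conventions.

```latex
% Standard dilute-solution PNP equations for ion species i with valence z_i
\begin{align}
  \frac{\partial c_i}{\partial t} + \mathbf{u}\cdot\nabla c_i
    &= \nabla\cdot\!\left( D_i \nabla c_i
       + \frac{z_i e D_i}{k_B T}\, c_i \nabla \phi \right), \\
  -\nabla\cdot\left( \varepsilon \nabla \phi \right)
    &= \sum_i z_i e\, c_i .
\end{align}
% c_i: concentration, \phi: electrostatic potential, \mathbf{u}: fluid velocity,
% D_i: diffusivity, \varepsilon: permittivity; the Einstein relation gives the
% mobility as \mu_i = z_i e D_i / (k_B T).
```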
{
"docid": "f971c7374e75fc82896db4b8a4a8a999",
"text": "Body image disturbance and body dysmorphic disorder (BDD) have been researched from a variety of psychological approaches. Psychological inflexibility, or avoidance of one's own cognitive and affective states at a cost to personal values, may be a useful construct to understand these problems. In an effort to clarify the role of psychological inflexibility in body image disturbance and BDD, a measure was created based on the principles of Acceptance and Commitment Therapy (ACT). The scale was developed by generating new items to represent the construct and revising items from an existing scale measuring aspects of body image psychological inflexibility. The study was conducted with an ethnically diverse undergraduate population using three samples during the validation process. Participants completed multiple assessments to determine the validity of the measure and were interviewed for BDD. The 16-item scale has internal consistency (α = 0.93), a single factor solution, convergent validity, and test re-test reliability (r = 0.90). Data demonstrate a relationship between psychological inflexibility and body image disturbance indicating empirical support for an ACT conceptualization of body image problems and the use of this measure for body image disturbance and BDD.",
"title": ""
},
{
"docid": "a521520be5a12db159b8d5ae7eff14bf",
"text": "Robust vision-based grasping is still a hard problem for humanoid robot systems. When being restricted to using the camera system built-in into the robot's head for object localization, the scenarios get often very simplified in order to allow the robot to grasp autonomously. Within the computer vision community, many object recognition and localization systems exist, but in general, they are not tailored to the application on a humanoid robot. In particular, accurate 6D object localization in the camera coordinate system with respect to a 3D rigid model is crucial for a general framework for grasping. While many approaches try to avoid the use of stereo calibration, we will present a system that makes explicit use of the stereo camera system in order to achieve maximum depth accuracy. Our system can deal with textured objects as well as objects that can be segmented globally and are defined by their shape. Thus, it covers the cases of objects with complex texture and complex shape. Our work is directly linked to a grasping framework being implemented on the humanoid robot ARM AR and serves as its perception module for various grasping and manipulation experiments in a kitchen scenario.",
"title": ""
},
{
"docid": "6b5d153443e204bdf9a97d74a0be8adb",
"text": "It is difficult to manually identify opportunities for enhancing data locality. To address this problem, we extended the HPCToolkit performance tools to support data-centric profiling of scalable parallel programs. Our tool uses hardware counters to directly measure memory access latency and attributes latency metrics to both variables and instructions. Different hardware counters provide insight into different aspects of data locality (or lack thereof). Unlike prior tools for data-centric analysis, our tool employs scalable measurement, analysis, and presentation methods that enable it to analyze the memory access behavior of scalable parallel programs with low runtime and space overhead. We demonstrate the utility of HPCToolkit's new data-centric analysis capabilities with case studies of five well-known benchmarks. In each benchmark, we identify performance bottlenecks caused by poor data locality and demonstrate non-trivial performance optimizations enabled by this guidance.",
"title": ""
},
{
"docid": "4e399f32e868434d19341a504d0c472c",
"text": "Hair abnormalities observed in epidermolysis bullosa (EB) are of variable severity and include mild hair shaft abnormalities, patchy cicatricial alopecia, cicatricial alopecia with a male pattern distribution, and alopecia universalis. Alopecia is usually secondary to blistering, and scalp areas more exposed to friction, such as the occipital area, are involved more frequently. This article reviews the hair abnormalities reported in the different subtypes of EB.",
"title": ""
},
{
"docid": "224a2739ade3dd64e474f5c516db89a7",
"text": "Big data storage and processing are considered as one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.",
"title": ""
}
] |
scidocsrr
|
f50e3b8115dbf4cd36c9fb8551a18c3d
|
Self-compassion and intuitive eating in college women: examining the contributions of distress tolerance and body image acceptance and action.
|
[
{
"docid": "cbf878cd5fbf898bdf88a2fcf5024826",
"text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.",
"title": ""
}
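The passage above concerns estimating indirect (mediated) effects; below is a minimal, hypothetical percentile-bootstrap sketch for a single-mediator indirect effect a*b in Python (not the authors' SAS/SPSS macros), using simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)                     # predictor (e.g., helpfulness)
m = 0.5 * x + rng.standard_normal(n)           # mediator
y = 0.4 * m + 0.2 * x + rng.standard_normal(n) # outcome (e.g., satisfaction)

def slope(pred, resp, covar=None):
    """OLS slope of resp on pred (optionally controlling for covar)."""
    X = np.column_stack([np.ones_like(pred), pred] if covar is None
                        else [np.ones_like(pred), pred, covar])
    beta = np.linalg.lstsq(X, resp, rcond=None)[0]
    return beta[1]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = slope(x[idx], m[idx])                  # path a: X -> M
    b = slope(m[idx], y[idx], covar=x[idx])    # path b: M -> Y controlling X
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for indirect effect: [{lo:.3f}, {hi:.3f}]")
```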
] |
[
{
"docid": "e468f230bc26197908e9ef869c3b9734",
"text": "As the level of human interaction with robotic systems increases, robot mobility becomes more important. Two wheeled robots offer higher levels of mobility and manoeuvrability when compared to their four wheeled counterparts with the ability to turn on the spot and easily negotiate tight corners. Whilst the stabilisation of two wheeled platforms has been well studied, there is no published research on alternative actuation methods. This study proposes the implementation of a reaction wheel actuator to balance a two wheeled platform within small angular deviations from its equilibrium position. It is proposed that the use of a reaction wheel to deliver the balancing torque instead of the platform drive wheels will lower energy consumption of the system. This hypothesis was derived from the idea that there are fewer energy losses in delivering the torque from the reaction wheel in comparison to the platform wheels. After the design and construction of the platform, standardised tests were carried out to make energy consumption comparisons between the reaction wheel actuated (hybrid) system and the traditional baseline system. The results from these experiments showed that the hybrid system consumed approximately 21% less energy than the baseline system and therefore proves the feasibility of adding a reaction wheel actuator to the system. Index Terms ---Reaction wheel, robotics, state-space control, linear quadratic regulator, energy efficient balancing.",
"title": ""
},
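The balancing controller named in the index terms above is a linear quadratic regulator; here is a generic, hypothetical LQR gain computation with SciPy for a linearized inverted-pendulum-style state-space model. The A and B matrices are placeholders, not the platform's identified model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized model x_dot = A x + B u,
# state x = [tilt angle, tilt rate], input u = reaction-wheel torque.
A = np.array([[0.0, 1.0],
              [15.0, 0.0]])       # unstable, pendulum-like dynamics
B = np.array([[0.0],
              [2.0]])
Q = np.diag([10.0, 1.0])          # penalize tilt more than tilt rate
R = np.array([[0.1]])             # penalize actuator effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P    # optimal state-feedback gain, u = -K x
print("LQR gain:", K)
```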
{
"docid": "deccfbca102068be749a231405aca30e",
"text": " Case report.. We present a case of 28-year-old female patient with condylomata gigantea (Buschke-Lowenstein tumor) in anal and perianal region with propagation on vulva and vagina. The local surgical excision and CO2 laser treatment were performed. Histological examination showed presence of HPV type 11 without malignant potential. Result.. Three months later, there was no recurrence.",
"title": ""
},
{
"docid": "9565a8f48b23d34c4cb4e55084e965c3",
"text": "This article reviews several classes of compliant materials that can be utilized to fabricate electronic muscles and skins. Different classes of materials range from compliant conductors, semiconductors, to dielectrics, all of which play a vital and cohesive role in the development of next generation electronics. This paper covers recent advances in the development of new materials, as well as the engineering of well-characterized materials for the repurposing in applications of flexible and stretchable electronics. In addition to compliant materials, this article further discusses the use of these materials for integrated systems to develop soft sensors and actuators. These new materials and new devices pave the way for a new generation of electronics that will change the way we see and interact with our devices for decades to come.",
"title": ""
},
{
"docid": "f1074bd860436efbab451afadd9c2262",
"text": "G lobal climate is now rapidly changing, with consequent geographic rearrangement of species and recent climate-related extinctions (Root et al. 2003; Pounds et al. 2006). Yet protected areas (including national parks, nature reserves, and multiple-use conservation areas) are still the mainstay of modern conservation efforts (Rodrigues et al. 2004). Protected areas are geographically fixed and increasingly isolated by habitat destruction, and are therefore poorly suited to accommodating species range shifts due to climate change (Peters and Myers 1991). Here, we ask the question: are protected areas a relevant conservation response in an era of rapid climate change? Evaluating the effectiveness of protected areas is a problem in conservation planning that is made more complicated by climate change. A major goal of systematic conservation planning is to ensure that all species are represented within the protected areas of a given geographic region (Margules and Pressey 2000). Completing an existing protected area system in a given region so that it represents all known species generally proceeds by assessing the species already protected and then systematically adding complementary areas until all species are represented. Multiple representations of populations or species occurrences are usually necessary to ensure the conservation of each species, so for large numbers of species the process can be quite complex. For this reason, computer-automated selection routines, known as \" reserve selection algorithms \" , have been developed (Pressey and Cowling 2001). The problem is more complex when species' ranges become dynamic as the result of climate change. One approach is to couple species distribution models and reserve selection algorithms (Araújo et al. 2004; Williams et al. 2005). Species distribution models use statistical or heuristic packages that simulate the present range of a species, based on relationships between known points of species' occurrence and climate at the time those points were recorded. A simulated present range is required because no species' distribution is perfectly known, while a simulation of future range is needed to account for the range shift likely to accompany changing climatic conditions. When such modeled ranges are available for large numbers of species (ideally hundreds or thousands), a reserve selection algorithm can be used to design a protected-areas system that represents all species, both in the present and in the future. This is most easily done by starting with existing protected areas and adding additional areas to complete species representation. One possible goal for such a process in …",
"title": ""
},
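A toy Python sketch of the greedy, complementarity-based reserve selection idea described in the passage above: start from areas already protected and repeatedly add the candidate area that covers the most still-unrepresented species. The species-by-area data are invented for illustration.

```python
# Hypothetical species occurrences per candidate area.
areas = {
    "A": {"sp1", "sp2"},
    "B": {"sp2", "sp3", "sp4"},
    "C": {"sp5"},
    "D": {"sp4", "sp5", "sp6"},
}
already_protected = ["A"]
target = set().union(*areas.values())          # represent every species

covered = set().union(*(areas[a] for a in already_protected))
selected = list(already_protected)
while covered != target:
    # Greedy step: pick the area adding the most unrepresented species.
    best = max((a for a in areas if a not in selected),
               key=lambda a: len(areas[a] - covered))
    selected.append(best)
    covered |= areas[best]

print("selected areas:", selected)
```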
{
"docid": "7113e007073184671d0bf5c9bdda1f5c",
"text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2d623b5e25fefd68f30c6e0d3365fa83",
"text": "In real-world environments, human speech is usually distorted by both reverberation and background noise, which have negative effects on speech intelligibility and speech quality. They also cause performance degradation in many speech technology applications, such as automatic speech recognition. Therefore, the dereverberation and denoising problems must be dealt with in daily listening environments. In this paper, we propose to perform speech dereverberation using supervised learning, and the supervised approach is then extended to address both dereverberation and denoising. Deep neural networks are trained to directly learn a spectral mapping from the magnitude spectrogram of corrupted speech to that of clean speech. The proposed approach substantially attenuates the distortion caused by reverberation, as well as background noise, and is conceptually simple. Systematic experiments show that the proposed approach leads to significant improvements of predicted speech intelligibility and quality, as well as automatic speech recognition in reverberant noisy conditions. Comparisons show that our approach substantially outperforms related methods.",
"title": ""
},
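A minimal PyTorch sketch of the spectral-mapping idea in the passage above: a feedforward network regresses a clean log-magnitude spectrogram frame from a window of reverberant/noisy frames. The dimensions and training batch are hypothetical; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

n_bins, context = 257, 5           # STFT bins and number of input frames

# Map a context window of corrupted frames to one clean frame.
net = nn.Sequential(
    nn.Linear(n_bins * context, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, n_bins),
)

# Hypothetical batch of (corrupted window, clean target) log-magnitude frames.
x = torch.randn(32, n_bins * context)
y = torch.randn(32, n_bins)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(net(x), y)   # regress clean spectra (MSE)
loss.backward()
opt.step()
```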
{
"docid": "abba569b4c799c95f5ba05047aac8e40",
"text": "We tackle the problem of single image depth estimation, which, without additional knowledge, suffers from many ambiguities. Unlike previous approaches that only reason locally, we propose to exploit the global structure of the scene to estimate its depth. To this end, we introduce a hierarchical representation of the scene, which models local depth jointly with mid-level and global scene structures. We formulate single image depth estimation as inference in a graphical model whose edges let us encode the interactions within and across the different layers of our hierarchy. Our method therefore still produces detailed depth estimates, but also leverages higher-level information about the scene. We demonstrate the benefits of our approach over local depth estimation methods on standard indoor datasets.",
"title": ""
},
{
"docid": "904d175ba1f94a980ceb88f9941f0a55",
"text": "Currently, wind turbines can incur unforeseen damage up to five times a year. Particularly during bad weather, wind turbines located offshore are difficult to access for visual inspection. As a result, long periods of turbine standstill can result in great economic inefficiencies that undermine the long-term viability of the technology. Hence, the load carrying structure should be monitored continuously in order to minimize the overall cost of maintenance and repair. The end result are turbines defined by extend lifetimes and greater economic viability. For that purpose, an automated monitoring system for early damage detection and damage localisation is currently under development for wind turbines. Most of the techniques existing for global damage detection of structures work by using frequency domain methods. Frequency shifts and mode shape changes are usually used for damage detection of large structures (e.g. bridges, large buildings and towers) [1]. Damage can cause a change in the distribution of structural stiffness which has to be detected by measuring dynamic responses using natural excitation. Even though mode shapes are more sensitive to damage compared to frequency shifts, the use of mode shapes requires a lot of sensors installed so as to reliably detect mode shape changes for early damage detection [2]. The design of our developed structural health monitoring (SHM) system is based on three functional modules that track changes in the global dynamic behaviour of both the turbine tower and blade elements. A key feature of the approach is the need for a minimal number of strain gages and accelerometers necessary to record the structure’s condition. Module 1 analyzes the proportionality of maximum stress and maximum velocity; already small changes in component stiffness can be detected. Afterwards, module 3 is activated for localization and quantization of the damage. The approach of module 3 is based on a numerical model which solves a multi-parameter eigenvalue problem. As a prerequisite, highly resolved eigenfrequencies and a parameterization of a validated structural model are required. Both are provided for the undamaged structure by module 2",
"title": ""
},
{
"docid": "be90932dfddcf02b33fc2ef573b8c910",
"text": "Style-based Text Categorization: What Newspaper Am I Reading?",
"title": ""
},
{
"docid": "ce35a38f1ab8264554ca19fbe8017b82",
"text": "Since the BOSS competition, in 2010, most steganalysis approaches use a learning methodology involving two steps: feature extraction, such as the Rich Models (RM), for the image representation, and use of the Ensemble Classifier (EC) for the learning step. In 2015, Qian et al. have shown that the use of a deep learning approach that jointly learns and computes the features, was very promising for the steganalysis. In this paper, we follow-up the study of Qian et al., and show that in the scenario where the steganograph always uses the same embedding key for embedding with the simulator in the different images, due to intrinsic joint minimization and the preservation of spatial information, the results obtained from a Convolutional Neural Network (CNN) or a Fully Connected Neural Network (FNN), if well parameterized, surpass the conventional use of a RM with an EC. First, numerous experiments were conducted in order to find the best ”shape” of the CNN. Second, experiments were carried out in the clairvoyant scenario in order to compare the CNN and FNN to an RM with an EC. The results show more than 16% reduction in the classification error with our CNN or FNN. Third, experiments were also performed in a cover-source mismatch setting. The results show that the CNN and FNN are naturally robust to the mismatch problem. In Addition to the experiments, we provide discussions on the internal mechanisms of a CNN, and weave links with some previously stated ideas, in order to understand the results we obtained. We also have a discussion on the scenario ”same embedding key”.",
"title": ""
},
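A minimal PyTorch sketch of a steganalysis-style CNN in the spirit of the passage above: the input image is first filtered with a fixed high-pass residual kernel (a commonly used KV-type filter) to expose stego noise, then passed through a small convolutional classifier. The kernel, layer sizes, and input shape are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

# Fixed high-pass residual filter (a commonly used 5x5 KV-type kernel).
kv = torch.tensor([[-1,  2,  -2,  2, -1],
                   [ 2, -6,   8, -6,  2],
                   [-2,  8, -12,  8, -2],
                   [ 2, -6,   8, -6,  2],
                   [-1,  2,  -2,  2, -1]], dtype=torch.float32) / 12.0

class TinyStegoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hpf = nn.Conv2d(1, 1, kernel_size=5, padding=2, bias=False)
        self.hpf.weight.data = kv.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False       # keep the filter fixed
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AvgPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)           # cover vs. stego

    def forward(self, img):
        z = self.features(self.hpf(img))
        return self.classifier(z.flatten(1))

logits = TinyStegoCNN()(torch.randn(4, 1, 256, 256))
print(logits.shape)   # (4, 2)
```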
{
"docid": "a734d59544fd17d6991b71c5f4b8bdf6",
"text": "Transgenic cotton that produced one or more insecticidal proteins of Bacillus thuringiensis (Bt) was planted on over 15 million hectares in 11 countries in 2009 and has contributed to a reduction of over 140 million kilograms of insecticide active ingredient between 1996 and 2008. As a highly selective form of host plant resistance, Bt cotton effectively controls a number of key lepidopteran pests and has become a cornerstone in overall integrated pest management (IPM). Bt cotton has led to large reductions in the abundance of targeted pests and benefited non-Bt cotton adopters and even producers of other crops affected by polyphagous target pests. Reductions in insecticide use have enhanced biological control, which has contributed to significant suppression of other key and sporadic pests in cotton. Although reductions in insecticide use in some regions have elevated the importance of several pest groups, most of these emerging problems can be effectively solved through an IPM approach.",
"title": ""
},
{
"docid": "7138c13d88d87df02c7dbab4c63328c4",
"text": "Banisteriopsis caapi is the basic ingredient of ayahuasca, a psychotropic plant tea used in the Amazon for ritual and medicinal purposes, and by interested individuals worldwide. Animal studies and recent clinical research suggests that B. caapi preparations show antidepressant activity, a therapeutic effect that has been linked to hippocampal neurogenesis. Here we report that harmine, tetrahydroharmine and harmaline, the three main alkaloids present in B. caapi, and the harmine metabolite harmol, stimulate adult neurogenesis in vitro. In neurospheres prepared from progenitor cells obtained from the subventricular and the subgranular zones of adult mice brains, all compounds stimulated neural stem cell proliferation, migration, and differentiation into adult neurons. These findings suggest that modulation of brain plasticity could be a major contribution to the antidepressant effects of ayahuasca. They also expand the potential application of B. caapi alkaloids to other brain disorders that may benefit from stimulation of endogenous neural precursor niches.",
"title": ""
},
{
"docid": "f38be49e258eef45e40b808f2b7bde94",
"text": "Scalability, fast response time and low cost are of utmost importance in designing a successful massively multiplayer online game. The underlying architecture plays an important role in meeting these conditions. Peer-to-peer architectures, have low infrastructure costs and can achieve high scalability, due to their distributed and collaborative nature. They can also achieve fast response times by creating direct connections between players. However, these architectures face many challenges. Therefore, the paper investigates existing peer to peer architecture solutions for a massively multiplayer online games. The study examines two hybrid architectures. In the first one, a supernode approach is used with a central server. In the contrast in the second one, there is no central server and pure peer to peer architecture is deployed. Moreover, the thesis proposes a solution based on multicast peer discovery and supernodes for a massively multiplayer online game. Also, all system is covered with simulation, that provides results for future analysing.",
"title": ""
},
{
"docid": "fa8f0b1bfd42ce3b45ec842a4693a49d",
"text": "BACKGROUND\nEvidence is growing that sleep problems in adolescents are significant impediments to learning and negatively affect behaviour, attainment of social competence and quality of life. The objectives of the study were to determine the level of sleepiness among students in high school, to identify factors to explain it, and to determine the association between sleepiness and performance in both academic and extracurricular activities\n\n\nMETHODS\nA cross-sectional survey of 2201 high school students in the Hamilton Wentworth District School Board and the Near North District School Board in Ontario was conducted in 1998/9. A similar survey was done three years later involving 1034 students in the Grand Erie District School Board in the same Province. The Epworth Sleepiness Scale (ESS) was used to measure sleepiness and we also assessed the reliability of this tool for this population. Descriptive analysis of the cohort and information on various measures of performance and demographic data were included. Regression analysis, using the generalised estimating equation (GEE), was utilized to investigate factors associated with risk of sleepiness (ESS>10).\n\n\nRESULTS\nSeventy per cent of the students had less than 8.5 hours weeknight sleep. Bedtime habits such as a consistent bedtime routine, staying up late or drinking caffeinated beverages before bed were statistically significantly associated with ESS, as were weeknight sleep quantity and gender. As ESS increased there was an increase in the proportion of students who felt their grades had dropped because of sleepiness, were late for school, were often extremely sleepy at school, and were involved in fewer extracurricular activities. These performance measures were statistically significantly associated with ESS. Twenty-three percent of the students felt their grades had dropped because of sleepiness. Most students (58-68%) reported that they were \"really sleepy\" between 8 and 10 A.M.\n\n\nCONCLUSION\nSleep deprivation and excessive daytime sleepiness were common in two samples of Ontario high school students and were associated with a decrease in academic achievement and extracurricular activity. There is a need to increase awareness of this problem in the education and health communities and to translate knowledge already available to strategies to address it.",
"title": ""
},
{
"docid": "3b300b9275b6da1aff685e5ca9b71252",
"text": "This paper presents an algorithm developed based on hidden Markov model for cues fusion and event inference in soccer video. Four events, shoot, foul, offside and normal playing, are defined to be detected. The states of the events are employed to model the observations of the five cues, which are extracted from the shot sequences directly. The experimental results show the algorithm is effective and robust in inferring events from roughly extracted cues.",
"title": ""
},
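A toy sketch of inferring an event sequence from shot-level cues with an HMM and the Viterbi algorithm, in the spirit of the passage above. The states, observation symbols, and probabilities are invented for illustration only.

```python
import numpy as np

states = ["normal", "shoot", "foul", "offside"]
# Hypothetical discrete cue symbols extracted per shot.
obs_symbols = {"far_view": 0, "goal_area": 1, "whistle_closeup": 2}

start = np.array([0.7, 0.1, 0.1, 0.1])
trans = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1],
                  [0.6, 0.1, 0.2, 0.1],
                  [0.6, 0.1, 0.1, 0.2]])
emit = np.array([[0.7, 0.2, 0.1],     # normal
                 [0.1, 0.8, 0.1],     # shoot
                 [0.2, 0.2, 0.6],     # foul
                 [0.3, 0.2, 0.5]])    # offside

def viterbi(obs):
    """Most likely hidden event sequence for an observed cue sequence."""
    logd = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logd[:, None] + np.log(trans)         # (from, to)
        back.append(scores.argmax(axis=0))
        logd = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

cues = [obs_symbols[c] for c in
        ["far_view", "goal_area", "goal_area", "whistle_closeup", "far_view"]]
print(viterbi(cues))
```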
{
"docid": "f6f014f88f0958db650c7d21f06813e1",
"text": "Nowadays, huge amount of data and information are available for everyone, Data can now be stored in many different kinds of databases and information repositories, besides being available on the Internet or in printed form. With such amount of data, there is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. In order to reveal the best tools for dealing with the classification task that helps in decision making, this paper has conducted a comparative study between a number of some of the free available data mining and knowledge discovery tools and software packages. Results have showed that the performance of the tools for the classification task is affected by the kind of dataset used and by the way the classification algorithms were implemented within the toolkits. For the applicability issue, the WEKA toolkit has achieved the highest applicability followed by Orange, Tanagra, and KNIME respectively. Finally; WEKA toolkit has achieved the highest improvement in classification performance; when moving from the percentage split test mode to the Cross Validation test mode, followed by Orange, KNIME and finally Tanagra respectively. Keywords-component; data mining tools; data classification; Wekak; Orange; Tanagra; KNIME.",
"title": ""
},
{
"docid": "78ce4abc08e6c6a3ef0800accd0b8c4b",
"text": "For the first time, 20nm DRAM has been developed and fabricated successfully without extreme ultraviolet (EUV) lithography using the honeycomb structure (HCS) and the air-spacer technology. The cell capacitance (Cs) can be increased by 21% at the same cell size using a novel low-cost HCS technology with one argon fluoride immersion (ArF-i) lithography layer. The parasitic bit-line (BL) capacitance is reduced by 34% using an air-spacer technology whose breakdown voltage is 30% better than that of conventional technology.",
"title": ""
},
{
"docid": "5e7c2e90fd340c544480bf65df91fca4",
"text": "Gestational gigantomastia is a rare condition characterized by fast, disproportionate and excessive breast growth, decreased quality of life in pregnancy, and presence of psychologic as well as physical complications. The etiology is not fully understood, although hormonal changes in pregnancy are considered responsible. Prolactin is the most important hormone. To date, 125 cases of gigantomastia have been reported in the literature. In this case presentation, we report a pregnant woman aged 26 years with a 22-week gestational age with gestational gigantomastia and review the diagnosis and treatment of this rare disease in relation with the literature.",
"title": ""
}
] |
scidocsrr
|
74739d108fffe78ecd352affb8952f47
|
Twitter adoption, students perceptions, Big Five personality traits and learning outcome: Lessons learned from 3 case studies
|
[
{
"docid": "76cedf5536bd886b5838c2a5e027de79",
"text": "This article reports a meta-analysis of personality-academic performance relationships, based on the 5-factor model, in which cumulative sample sizes ranged to over 70,000. Most analyzed studies came from the tertiary level of education, but there were similar aggregate samples from secondary and tertiary education. There was a comparatively smaller sample derived from studies at the primary level. Academic performance was found to correlate significantly with Agreeableness, Conscientiousness, and Openness. Where tested, correlations between Conscientiousness and academic performance were largely independent of intelligence. When secondary academic performance was controlled for, Conscientiousness added as much to the prediction of tertiary academic performance as did intelligence. Strong evidence was found for moderators of correlations. Academic level (primary, secondary, or tertiary), average age of participant, and the interaction between academic level and age significantly moderated correlations with academic performance. Possible explanations for these moderator effects are discussed, and recommendations for future research are provided.",
"title": ""
},
{
"docid": "03368de546daf96d5111325f3d08fd3d",
"text": "Despite the widespread use of social media by students and its increased use by instructors, very little empirical evidence is available concerning the impact of social media use on student learning and engagement. This paper describes our semester-long experimental study to determine if using Twitter – the microblogging and social networking platform most amenable to ongoing, public dialogue – for educationally relevant purposes can impact college student engagement and grades. A total of 125 students taking a first year seminar course for pre-health professional majors participated in this study (70 in the experimental group and 55 in the control group). With the experimental group, Twitter was used for various types of academic and co-curricular discussions. Engagement was quantified by using a 19-item scale based on the National Survey of Student Engagement. To assess differences in engagement and grades, we used mixed effects analysis of variance (ANOVA) models, with class sections nested within treatment groups. We also conducted content analyses of samples of Twitter exchanges. The ANOVA results showed that the experimental group had a significantly greater increase in engagement than the control group, as well as higher semester grade point averages. Analyses of Twitter communications showed that students and faculty were both highly engaged in the learning process in ways that transcended traditional classroom activities. This study provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilize faculty into a more active and participatory role.",
"title": ""
}
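A hypothetical statsmodels sketch of the nested analysis described above: the change in engagement is modelled with treatment group as a fixed effect and class section as a grouping (random-intercept) factor. The data frame is simulated, not the study's data, and the model is a simplification of the mixed-effects ANOVA the authors report.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 125
group = rng.choice(["twitter", "control"], size=n)
# Sections nested within treatment groups (hypothetical labels).
section = np.where(group == "twitter",
                   rng.choice(["t1", "t2", "t3", "t4"], size=n),
                   rng.choice(["c1", "c2", "c3"], size=n))
# Simulated change in engagement score, with a small treatment effect.
engagement_change = rng.normal(0, 1, n) + np.where(group == "twitter", 0.5, 0.0)

df = pd.DataFrame({"group": group, "section": section,
                   "engagement_change": engagement_change})

# Mixed model: fixed effect of group, random intercept per class section.
model = smf.mixedlm("engagement_change ~ group", df, groups=df["section"])
print(model.fit().summary())
```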
] |
[
{
"docid": "4ba984e616a972bd85dbced47733c2db",
"text": "Class-instance label propagation algorithms have been successfully used to fuse information from multiple sources in order to enrich a set of unlabeled instances with class labels. Yet, nobody has explored the relationships between the instances themselves to enhance an initial set of class-instance pairs. We propose two graph-theoretic methods (centrality and regularization), which start with a small set of labeled class-instance pairs and use the instance-instance network to extend the class labels to all instances in the network. We carry out a comparative study with state-of-the-art knowledge harvesting algorithm and show that our approach can learn additional class labels while maintaining high accuracy. We conduct a comparative study between class-instance and instance-instance graphs used to propagate the class labels and show that the latter one achieves higher accuracy.",
"title": ""
},
{
"docid": "3d0b50111f6c9168b8a269a7d99d8fbc",
"text": "Detecting lies is crucial in many areas, such as airport security, police investigations, counter-terrorism, etc. One technique to detect lies is through the identification of facial micro-expressions, which are brief, involuntary expressions shown on the face of humans when they are trying to conceal or repress emotions. Manual measurement of micro-expressions is hard labor, time consuming, and inaccurate. This paper presents the Design and Development of a Lie Detection System using Facial Micro-Expressions. It is an automated vision system designed and implemented using LabVIEW. An Embedded Vision System (EVS) is used to capture the subject's interview. Then, a LabVIEW program converts the video into series of frames and processes the frames, each at a time, in four consecutive stages. The first two stages deal with color conversion and filtering. The third stage applies geometric-based dynamic templates on each frame to specify key features of the facial structure. The fourth stage extracts the needed measurements in order to detect facial micro-expressions to determine whether the subject is lying or not. Testing results show that this system can be used for interpreting eight facial expressions: happiness, sadness, joy, anger, fear, surprise, disgust, and contempt, and detecting facial micro-expressions. It extracts accurate output that can be employed in other fields of studies such as psychological assessment. The results indicate high precision that allows future development of applications that respond to spontaneous facial expressions in real time.",
"title": ""
},
{
"docid": "8c52c67dde20ce0a50ea22aaa4f917a5",
"text": "This paper presents the vision of the Artificial Vision and Intelligent Systems Laboratory (VisLab) on future automated vehicles, ranging from sensor selection up to their extensive testing. VisLab's design choices are explained using the BRAiVE autonomous vehicle prototype as an example. BRAiVE, which is specifically designed to develop, test, and demonstrate advanced safety applications with different automation levels, features a high integration level and a low-cost sensor suite, which are mainly based on vision, as opposed to many other autonomous vehicle implementations based on expensive and invasive sensors. The importance of performing extensive tests to validate the design choices is considered to be a hard requirement, and different tests have been organized, including an intercontinental trip from Italy to China. This paper also presents the test, the main challenges, and the vehicles that have been specifically developed for this test, which was performed by four autonomous vehicles based on BRAiVE's architecture. This paper also includes final remarks on VisLab's perspective on future vehicles' sensor suite.",
"title": ""
},
{
"docid": "a1b7f477c339f30587a2f767327b4b41",
"text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. The purpose of this study is to assesses the state of the art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies have been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the postproduction phase.",
"title": ""
},
{
"docid": "a42f7e9efc4c0e2d56107397f98b15f1",
"text": "Recently, much advance has been made in image captioning, and an encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at each time step. The guiding network can be plugged into the current encoder-decoder framework and trained in an end-to-end manner. Hence, the guiding vector can be adaptively learned according to the signal from the decoder, making itself to embed information from both image and language. Additionally, discriminative supervision can be employed to further improve the quality of guidance. The advantages of our proposed approach are verified by experiments carried out on the MS COCO dataset.",
"title": ""
},
{
"docid": "553d7f8c6b4c04349b65379e1e6cb0d8",
"text": "Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.",
"title": ""
},
{
"docid": "b9ff1346b9eafed6e78c13d893054dac",
"text": "Imagine a robot is shown new concepts visually together with spoken tags, e.g. “milk”, “eggs”, “butter”. After seeing one paired audiovisual example per class, it is shown a new set of unseen instances of these objects, and asked to pick the “milk”. Without receiving any hard labels, could it learn to match the new continuous speech input to the correct visual instance? Although unimodal one-shot learning has been studied, where one labelled example in a single modality is given per class, this example motivates multimodal oneshot learning. Our main contribution is to formally define this task, and to propose several baseline and advanced models. We use a dataset of paired spoken and visual digits to specifically investigate recent advances in Siamese convolutional neural networks. Our best Siamese model achieves twice the accuracy of a nearest neighbour model using pixel-distance over images and dynamic time warping over speech in 11-way cross-modal matching.",
"title": ""
},
{
"docid": "576091bb08f9a37e0be8c38294e155e3",
"text": "This research will demonstrate hacking techniques on the modern automotive network and describe the design and implementation of a benchtop simulator. In currently-produced vehicles, the primary network is based on the Controller Area Network (CAN) bus described in the ISO 11898 family of protocols. The CAN bus performs well in the electronically noisy environment found in the modern automobile. While the CAN bus is ideal for the exchange of information in this environment, when the protocol was designed security was not a priority due to the presumed isolation of the network. That assumption has been invalidated by recent, well-publicized attacks where hackers were able to remotely control an automobile, leading to a product recall that affected more than a million vehicles. The automobile has a multitude of electronic control units (ECUs) which are interconnected with the CAN bus to control the various systems which include the infotainment, light, and engine systems. The CAN bus allows the ECUs to share information along a common bus which has led to improvements in fuel and emission efficiency, but has also introduced vulnerabilities by giving access on the same network to cyber-physical systems (CPS). These CPS systems include the anti-lock braking systems (ABS) and on late model vehicles the ability to turn the steering wheel and control the accelerator. Testing functionality on an operational vehicle can be dangerous and place others in harm's way, but simulating the vehicle network and functionality of the ECUs on a bench-top system provides a safe way to test for vulnerabilities and to test possible security solutions to prevent CPS access over the CAN bus network. This paper will describe current research on the automotive network, provide techniques in capturing network traffic for playback, and demonstrate the design and implementation of a benchtop system for continued research on the CAN bus.",
"title": ""
},
{
"docid": "e872173252bf7b516183d3e733c36f6c",
"text": "Nonlinear autoregressive moving average with exogenous inputs (NARMAX) models have been successfully demonstrated for modeling the input-output behavior of many complex systems. This paper deals with the proposition of a scheme to provide time series prediction. The approach is based on a recurrent NARX model obtained by linear combination of a recurrent neural network (RNN) output and the real data output. Some prediction metrics are also proposed to assess the quality of predictions. This metrics enable to compare different prediction schemes and provide an objective way to measure how changes in training or prediction model (Neural network architecture) affect the quality of predictions. Results show that the proposed NARX approach consistently outperforms the prediction obtained by the RNN neural network.",
"title": ""
},
{
"docid": "249a09e24ce502efb4669603b54b433d",
"text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. 1 ar X iv :1 71 0. 09 30 2v 3 [ st at .M L ] 6 N ov 2 01 7",
"title": ""
},
{
"docid": "acecf40720fd293972555918878b805e",
"text": "This article outlines a number of important research issues in human-computer interaction in the e-commerce environment. It highlights some of the challenges faced by users in browsing Web sites and conducting searches for information, and suggests several areas of research for promoting ease of navigation and search. Also, it discusses the importance of trust in the online environment, describing some of the antecedents and consequences of trust, and provides guidelines for integrating trust into Web site design. The issues discussed in this article are presented under three broad categories of human-computer interaction – Web usability, interface design, and trust – and are intended to highlight what we believe are worthwhile areas for future research in e-commerce.",
"title": ""
},
{
"docid": "a6a7770857964e96f98bd4021d38f59f",
"text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.",
"title": ""
},
{
"docid": "ec8684e227bf63ac2314ce3cb17e2e8b",
"text": "Musical genre classification is the automatic classification of audio signals into user defined labels describing pieces of music. A problem inherent to genre classification experiments in music information retrieval research is the use of songs from the same artist in both training and test sets. We show that this does not only lead to overoptimistic accuracy results but also selectively favours particular classification approaches. The advantage of using models of songs rather than models of genres vanishes when applying an artist filter. The same holds true for the use of spectral features versus fluctuation patterns for preprocessing of the audio files.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "4edbbbad1353fc6e2df4f3f7afae44ac",
"text": "Few-shot learning refers to understanding new concepts from only a few examples. We propose an information retrieval-inspired approach for this problem that is motivated by the increased importance of maximally leveraging all the available information in this low-data regime. We define a training objective that aims to extract as much information as possible from each training batch by effectively optimizing over all relative orderings of the batch points simultaneously. In particular, we view each batch point as a ‘query’ that ranks the remaining ones based on its predicted relevance to them and we define a model within the framework of structured prediction to optimize mean Average Precision over these rankings. Our method achieves impressive results on the standard few-shot classification benchmarks while is also capable of few-shot retrieval.",
"title": ""
},
{
"docid": "81d71ff745ad21eda90eecabf4e400c2",
"text": "This paper describes a new paradigm for stock trading involving the use of classical feedback controllers which are “model free” in that they use neither parameterization nor estimation of stock price dynamics. At time t, the control signal is the investment level I(t), obtained via a mapping on the so-called gain-loss function g(t). While such strategies fall under the umbrella of technical analysis, our approach differs from the literature in a fundamental way: Whereas existing work in finance involves statistical analysis via historical back-testing, our new control-theoretic paradigm aims to provide “certification theorems” giving conditions under which certain robustness properties are guaranteed with respect to benchmark classes for the time-varying stock price p(t). We demonstrate our ideas using a linear feedback implementation of a new stock-trading scheme called Simultaneous Long-Short. The analysis is carried out in a so-called idealized frictionless market first using smooth prices for pedagogical purposes and then using a more realistic benchmark involving Geometric Brownian Motion. Finally, simulations are given which include real-world implementation issues.",
"title": ""
},
{
"docid": "670ad989fb45d87b898aafe571bac3a9",
"text": "As an emerging technology to support scalable content-based image retrieval (CBIR), hashing has recently received great attention and became a very active research domain. In this study, we propose a novel unsupervised visual hashing approach called semantic-assisted visual hashing (SAVH). Distinguished from semi-supervised and supervised visual hashing, its core idea is to effectively extract the rich semantics latently embedded in auxiliary texts of images to boost the effectiveness of visual hashing without any explicit semantic labels. To achieve the target, a unified unsupervised framework is developed to learn hash codes by simultaneously preserving visual similarities of images, integrating the semantic assistance from auxiliary texts on modeling high-order relationships of inter-images, and characterizing the correlations between images and shared topics. Our performance study on three publicly available image collections: Wiki, MIR Flickr, and NUS-WIDE indicates that SAVH can achieve superior performance over several state-of-the-art techniques.",
"title": ""
},
{
"docid": "2d7ff73a3fb435bd11633f650b23172e",
"text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.",
"title": ""
},
{
"docid": "ee31719bce1b770e5347b7aa3189d94a",
"text": "Signature-based intrusion detection systems use a set of attack descriptions to analyze event streams, looking for evidence of malicious behavior. If the signatures are expressed in a well-defined language, it is possible to analyze the attack signatures and automatically generate events or series of events that conform to the attack descriptions. This approach has been used in tools whose goal is to force intrusion detection systems to generate a large number of detection alerts. The resulting “alert storm” is used to desensitize intrusion detection system administrators and hide attacks in the event stream. We apply a similar technique to perform testing of intrusion detection systems. Signatures from one intrusion detection system are used as input to an event stream generator that produces randomized synthetic events that match the input signatures. The resulting event stream is then fed to a number of different intrusion detection systems and the results are analyzed. This paper presents the general testing approach and describes the first prototype of a tool, called Mucus, that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system. The paper describes preliminary cross-testing experiments with both an open-source and a commercial tool and reports the results. An evasion attack that was discovered as a result of analyzing the test results is also presented.",
"title": ""
},
{
"docid": "d8056ee6b9d1eed4bc25e302c737780c",
"text": "This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for Web pages independent of their textual content, solely based on the hyperlink structure of the Web. PageRank is typically used as a Web Search ranking component. This defines the importance of the model and the data structures that underly PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much more complex challenge. Recently, significant effort has been invested in building sets of personalized PageRank vectors. PageRank is also used in many diverse applications other than ranking. Below we are interested in the theoretical foundations of the PageRank formulation, in accelerating of PageRank computing, in the effects of particular aspects of Web graph structure on optimal organization of computations, and in PageRank stability. We also review alternative models that lead to authority indices similar to PageRank and the role of such indices in applications other than Web Search. We also discuss link-based search personalization and outline some aspects of PageRank infrastructure from associated measures of convergence to link preprocessing. Content",
"title": ""
}
] |
scidocsrr
|
745568b8252c41c0b7d6ec6300f3b976
|
Incentivizing the dissemination of truth versus fake news in social networks
|
[
{
"docid": "bf9910e87c2294e307f142e0be4ed4f6",
"text": "The rapidly developing cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute applications remotely. A mobile device should judiciously decide whether to offload computation and which portion of application should be offloaded to the cloud. In this paper, we consider a mobile cloud computing (MCC) interaction system consisting of multiple mobile devices and the cloud computing facilities. We provide a nested two stage game formulation for the MCC interaction system. In the first stage, each mobile device determines the portion of its service requests for remote processing in the cloud. In the second stage, the cloud computing facilities allocate a portion of its total resources for service request processing depending on the request arrival rate from all the mobile devices. The objective of each mobile device is to minimize its power consumption as well as the service request response time. The objective of the cloud computing controller is to maximize its own profit. Based on the backward induction principle, we derive the optimal or near-optimal strategy for all the mobile devices as well as the cloud computing controller in the nested two stage game using convex optimization technique. Experimental results demonstrate the effectiveness of the proposed nested two stage game-based optimization framework on the MCC interaction system. The mobile devices can achieve simultaneous reduction in average power consumption and average service request response time, by 21.8% and 31.9%, respectively, compared with baseline methods.",
"title": ""
},
{
"docid": "b84e816e6c8b8777d67d67dc76f73e2b",
"text": "An increasing fraction of today's social interactions occur using online social media as communication channels. Recent worldwide events, such as social movements in Spain or revolts in the Middle East, highlight their capacity to boost people's coordination. Online networks display in general a rich internal structure where users can choose among different types and intensity of interactions. Despite this, there are still open questions regarding the social value of online interactions. For example, the existence of users with millions of online friends sheds doubts on the relevance of these relations. In this work, we focus on Twitter, one of the most popular online social networks, and find that the network formed by the basic type of connections is organized in groups. The activity of the users conforms to the landscape determined by such groups. Furthermore, Twitter's distinction between different types of interactions allows us to establish a parallelism between online and offline social networks: personal interactions are more likely to occur on internal links to the groups (the weakness of strong ties); events transmitting new information go preferentially through links connecting different groups (the strength of weak ties) or even more through links connecting to users belonging to several groups that act as brokers (the strength of intermediary ties).",
"title": ""
}
] |
[
{
"docid": "2b7f3b4d099d447f6fd5dc13d75fa44d",
"text": "Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.",
"title": ""
},
{
"docid": "050679bfbeba42b30f19f1a824ec518a",
"text": "Principles of cognitive science hold the promise of helping children to study more effectively, yet they do not always make successful transitions from the laboratory to applied settings and have rarely been tested in such settings. For example, self-generation of answers to questions should help children to remember. But what if children cannot generate anything? And what if they make an error? Do these deviations from the laboratory norm of perfect generation hurt, and, if so, do they hurt enough that one should, in practice, spurn generation? Can feedback compensate, or are errors catastrophic? The studies reviewed here address three interlocking questions in an effort to better implement a computer-based study program to help children learn: (1) Does generation help? (2) Do errors hurt if they are corrected? And (3) what is the effect of feedback? The answers to these questions are: Yes, generation helps; no, surprisingly, errors that are corrected do not hurt; and, finally, feedback is beneficial in verbal learning. These answers may help put cognitive scientists in a better position to put their well-established principles in the service of children's learning.",
"title": ""
},
{
"docid": "d922dbcdd2fb86e7582a4fb78990990e",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "a7b6a491d85ae94285808a21dbc65ce9",
"text": "In imbalanced learning, most standard classification algorithms usually fail to properly represent data distribution and provide unfavorable classification performance. More specifically, the decision rule of minority class is usually weaker than majority class, leading to many misclassification of expensive minority class data. Motivated by our previous work ADASYN [1], this paper presents a novel kernel based adaptive synthetic over-sampling approach, named KernelADASYN, for imbalanced data classification problems. The idea is to construct an adaptive over-sampling distribution to generate synthetic minority class data. The adaptive over-sampling distribution is first estimated with kernel density estimation methods and is further weighted by the difficulty level for different minority class data. The classification performance of our proposed adaptive over-sampling approach is evaluated on several real-life benchmarks, specifically on medical and healthcare applications. The experimental results show the competitive classification performance for many real-life imbalanced data classification problems.",
"title": ""
},
{
"docid": "3f5b90fae38890515d312ed3753509ce",
"text": "Brand personality has been shown to affect a variety of user behaviors such as individual preferences and social interactions. Despite intensive research efforts in human personality assessment, little is known about brand personality and its relationship with social media. Leveraging the theory in marketing, we analyze how brand personality associates with its contributing factors embodied in social media. Based on the analysis of over 10K survey responses and a large corpus of social media data from 219 brands, we quantify the relative importance of factors driving brand personality. The brand personality model developed with social media data achieves predicted R values as high as 0.67. We conclude by illustrating how modeling brand personality can help users find brands suiting their personal characteristics and help companies manage brand perceptions.",
"title": ""
},
{
"docid": "6b2f0de5307b0d0aa1e658a2539bc741",
"text": "Intensive care unit patients are heavily monitored, and several clinically-relevant parameters are routinely extracted from high resolution signals.\n\n\nOBJECTIVE\nThe goal of the 2016 PhysioNet/CinC Challenge was to encourage the creation of an intelligent system that fused information from different phonocardiographic signals to create a robust set of normal/abnormal signal detections.\n\n\nAPPROACH\nDeep convolutional neural networks and mel-frequency spectral coefficients were used for recognition of normal-abnormal phonocardiographic signals of the human heart. This technique was developed using the PhysioNet.org Heart Sound database and was submitted for scoring on the challenge test set.\n\n\nMAIN RESULTS\nThe current entry for the proposed approach obtained an overall score of 84.15% in the last phase of the challenge, which provided the sixth official score and differs from the best score of 86.02% by just 1.87%.",
"title": ""
},
{
"docid": "246f56b1b5aa4f095c6dd281a670210f",
"text": "The Allen Brain Atlas (http://www.brain-map.org) provides a unique online public resource integrating extensive gene expression data, connectivity data and neuroanatomical information with powerful search and viewing tools for the adult and developing brain in mouse, human and non-human primate. Here, we review the resources available at the Allen Brain Atlas, describing each product and data type [such as in situ hybridization (ISH) and supporting histology, microarray, RNA sequencing, reference atlases, projection mapping and magnetic resonance imaging]. In addition, standardized and unique features in the web applications are described that enable users to search and mine the various data sets. Features include both simple and sophisticated methods for gene searches, colorimetric and fluorescent ISH image viewers, graphical displays of ISH, microarray and RNA sequencing data, Brain Explorer software for 3D navigation of anatomy and gene expression, and an interactive reference atlas viewer. In addition, cross data set searches enable users to query multiple Allen Brain Atlas data sets simultaneously. All of the Allen Brain Atlas resources can be accessed through the Allen Brain Atlas data portal.",
"title": ""
},
{
"docid": "a7c9de856a94cae710681fba8fb49979",
"text": "This paper discusses psychological safety and distinguishes it from the related construct of interpersonal trust. Trust is the expectation that others' future actions will be favorable to one's interests; psychological safety refers to a climate in which people are comfortable being (and expressing) themselves. Although both constructs involve a willingness to be vulnerable to others' actions, they are conceptually and theoretically distinct. In particular, psychological safety is centrally tied to learning behavior, while trust lowers transactions costs and reduces the need to monitor behavior. This paper proposes a model of antecedents and consequences of psychological safety in work teams and emphasizes the centrality of psychological safety for learning behavior. Drawing from field research in a variety of organizational settings, I describe different approaches to studying and measuring psychological safety in teams. I conclude with implications of this work including limitations of psychological safety in practice and suggestions areas for future research.",
"title": ""
},
{
"docid": "98f75a69417bc3eb16d13e1dc39f1001",
"text": "This paper provides a comprehensive overview of critical developments in the field of multiple-input multiple-output (MIMO) wireless communication systems. The state of the art in single-user MIMO (SU-MIMO) and multiuser MIMO (MU-MIMO) communications is presented, highlighting the key aspects of these technologies. Both open-loop and closed-loop SU-MIMO systems are discussed in this paper with particular emphasis on the data rate maximization aspect of MIMO. A detailed review of various MU-MIMO uplink and downlink techniques then follows, clarifying the underlying concepts and emphasizing the importance of MU-MIMO in cellular communication systems. This paper also touches upon the topic of MU-MIMO capacity as well as the promising convex optimization approaches to MIMO system design.",
"title": ""
},
{
"docid": "e64f1f11ed113ca91094ef36eaf794a7",
"text": "We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multicore machines. In order to be as hardwareagnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.",
"title": ""
},
{
"docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22",
"text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.",
"title": ""
},
{
"docid": "08aa54980d7664ea6fc57aad1dd0029e",
"text": "Visual surveillance of dynamic objects, particularly vehicles on the road, has been, over the past decade, an active research topic in computer vision and intelligent transportation systems communities. In the context of traffic monitoring, important advances have been achieved in environment modeling, vehicle detection, tracking, and behavior analysis. This paper is a survey that addresses particularly the issues related to vehicle monitoring with cameras at road intersections. In fact, the latter has variable architectures and represents a critical area in traffic. Accidents at intersections are extremely dangerous, and most of them are caused by drivers' errors. Several projects have been carried out to enhance the safety of drivers in the special context of intersections. In this paper, we provide an overview of vehicle perception systems at road intersections and representative related data sets. The reader is then given an introductory overview of general vision-based vehicle monitoring approaches. Subsequently and above all, we present a review of studies related to vehicle detection and tracking in intersection-like scenarios. Regarding intersection monitoring, we distinguish and compare roadside (pole-mounted, stationary) and in-vehicle (mobile platforms) systems. Then, we focus on camera-based roadside monitoring systems, with special attention to omnidirectional setups. Finally, we present possible research directions that are likely to improve the performance of vehicle detection and tracking at intersections.",
"title": ""
},
{
"docid": "4bb9186954536103422ef662dc7459bf",
"text": "Cantilevered beams with piezoceramic layers have been frequently used as piezoelectric vibration energy harvesters in the past five years. The literature includes several single degree-of-freedom models, a few approximate distributed parameter models and even some incorrect approaches for predicting the electromechanical behavior of these harvesters. In this paper, we present the exact analytical solution of a cantilevered piezoelectric energy harvester with Euler–Bernoulli beam assumptions. The excitation of the harvester is assumed to be due to its base motion in the form of translation in the transverse direction with small rotation, and it is not restricted to be harmonic in time. The resulting expressions for the coupled mechanical response and the electrical outputs are then reduced for the particular case of harmonic behavior in time and closed-form exact expressions are obtained. Simple expressions for the coupled mechanical response, voltage, current, and power outputs are also presented for excitations around the modal frequencies. Finally, the model proposed is used in a parametric case study for a unimorph harvester, and important characteristics of the coupled distributed parameter system, such as short circuit and open circuit behaviors, are investigated in detail. Modal electromechanical coupling and dependence of the electrical outputs on the locations of the electrodes are also discussed with examples. DOI: 10.1115/1.2890402",
"title": ""
},
{
"docid": "c467fe65c242436822fd72113b99c033",
"text": "Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is a powerful technique for generating striking images of vector data. Based on local ltering of an input texture along a curved stream line segment in a vector eld, it is possible to depict directional information of the vector eld at pixel resolution. The methods suggested so far can handle structured grids only. Now we present an approach that works both on two-dimensional unstructured grids and directly on triangulated surfaces in three-dimensional space. Because unstructured meshes often occur in real applications, this feature makes LIC available for a number of new applications.",
"title": ""
},
{
"docid": "0b5468a808315325b40ab8107e756824",
"text": "EtherCAT real-time Ethernet technology that proposed by Beckhoff Company, is now widely used in industrial automation and motion control fields. In this paper, slave module design based on EtherCAT fieldbus is carried out from the view of the theoretical study and product application as well.",
"title": ""
},
{
"docid": "7a1f409eea5e0ff89b51fe0a26d6db8d",
"text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "024cc15c164656f90ade55bf3c391405",
"text": "Unmanned aerial vehicles (UAVs), also known as drones have many applications and they are a current trend across many industries. They can be used for delivery, sports, surveillance, professional photography, cinematography, military combat, natural disaster assistance, security, and the list grows every day. Programming opens an avenue to automate many processes of daily life and with the drone as aerial programmable eyes, security and surveillance can become more efficient and cost effective. At Barry University, parking is becoming an issue as the number of people visiting the school greatly outnumbers the convenient parking locations. This has caused a multitude of hazards in parking lots due to people illegally parking, as well as unregistered vehicles parking in reserved areas. In this paper, we explain how automated drone surveillance is utilized to detect unauthorized parking at Barry University. The automated process is incorporated into Java application and completed in three steps: collecting visual data, processing data automatically, and sending automated responses and queues to the operator of the system.",
"title": ""
},
{
"docid": "b24772af47f76db0f19ee281cccaa03f",
"text": "We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.",
"title": ""
}
] |
scidocsrr
|
0222b9a2543e809aa4a56a2bbf174f35
|
Practical Defenses for Evil Twin Attacks in 802.11
|
[
{
"docid": "aa3da820fe9e98cb4f817f6a196c18e7",
"text": "Location awareness is an important capability for mobile computing. Yet inexpensive, pervasive positioning—a requirement for wide-scale adoption of location-aware computing—has been elusive. We demonstrate a radio beacon-based approach to location, called Place Lab, that can overcome the lack of ubiquity and high-cost found in existing location sensing approaches. Using Place Lab, commodity laptops, PDAs and cell phones estimate their position by listening for the cell IDs of fixed radio beacons, such as wireless access points, and referencing the beacons’ positions in a cached database. We present experimental results showing that 802.11 and GSM beacons are sufficiently pervasive in the greater Seattle area to achieve 20-40 meter median accuracy with nearly 100% coverage measured by availability in people’s daily",
"title": ""
},
{
"docid": "a7369f56c65cab977584854f2f701a73",
"text": "Wireless networking is widespread in public places such as cafes. Unsuspecting users may become victims of attacks based on \"evil twin\" access points. These rogue access points are operated by criminals in an attempt to launch man-in-the-middle attacks. We present a simple protection mechanism against binding to an evil twin. The mechanism leverages short authentication string protocols for the exchange of cryptographic keys. The short string verification is performed by encoding the short strings as a sequence of colors, rendered sequentially by the user's device and by the designated access point of the cafe. The access point must have a light capable of showing two colors and must be mounted prominently in a position where users can have confidence in its authenticity. We conducted a usability study with patrons in several cafes and participants found our mechanism very usable.",
"title": ""
},
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] |
[
{
"docid": "76d2ba510927bd7f56155e1cf1cbbc52",
"text": "As the first part of a study that aims to propose tools to take into account some electromagnetic compatibility aspects, we have developed a model to predict the electric and magnetic fields emitted by a device. This model is based on a set of equivalent sources (electric and magnetic dipoles) obtained from the cartographies of the tangential components of electric and magnetic near fields. One of its features is to be suitable for a commercial electromagnetic simulation tool based on a finite element method. This paper presents the process of modeling and the measurement and calibration procedure to obtain electromagnetic fields necessary for the model; the validation and the integration of the model into a commercial electromagnetic simulator are then performed on a Wilkinson power divider.",
"title": ""
},
{
"docid": "299deaffdd1a494fc754b9e940ad7f81",
"text": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model. Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches’ accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.",
"title": ""
},
{
"docid": "f094754a454233cc8992f11e9dcb8bc9",
"text": "This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, therefore enabling perceptualdriven methods to compete alongside algorithms that target PSNR maximization. Twenty-one participating teams introduced algorithms which well-improved upon the existing state-of-the-art methods in perceptual SR, as confirmed by a human opinion study. We also analyze popular image quality measures and draw conclusions regarding which of them correlates best with human opinion scores. We conclude with an analysis of the current trends in perceptual SR, as reflected from the leading submissions.",
"title": ""
},
{
"docid": "8cc3af1b9bb2ed98130871c7d5bae23a",
"text": "BACKGROUND\nAnimal experiments have convincingly demonstrated that prenatal maternal stress affects pregnancy outcome and results in early programming of brain functions with permanent changes in neuroendocrine regulation and behaviour in offspring.\n\n\nAIM\nTo evaluate the existing evidence of comparable effects of prenatal stress on human pregnancy and child development.\n\n\nSTUDY DESIGN\nData sources used included a computerized literature search of PUBMED (1966-2001); Psychlit (1987-2001); and manual search of bibliographies of pertinent articles.\n\n\nRESULTS\nRecent well-controlled human studies indicate that pregnant women with high stress and anxiety levels are at increased risk for spontaneous abortion and preterm labour and for having a malformed or growth-retarded baby (reduced head circumference in particular). Evidence of long-term functional disorders after prenatal exposure to stress is limited, but retrospective studies and two prospective studies support the possibility of such effects. A comprehensive model of putative interrelationships between maternal, placental, and fetal factors is presented.\n\n\nCONCLUSIONS\nApart from the well-known negative effects of biomedical risks, maternal psychological factors may significantly contribute to pregnancy complications and unfavourable development of the (unborn) child. These problems might be reduced by specific stress reduction in high anxious pregnant women, although much more research is needed.",
"title": ""
},
{
"docid": "a51803d5c0753f64f5216d2cc225d172",
"text": "Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's \"tweets,\" or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels.",
"title": ""
},
{
"docid": "16fa2d1a6964453e6425adf31bfbe453",
"text": "In the context of software platforms, we examine how cross-side network effects (CNEs) on different platform sides (application-side and user-side) are temporally asymmetric, and how these CNEs are influenced by the platform’s governance policies. Informed by a perspective of value creation and capture, we theorize how the app-side and the userside react to each other with distinct value creation/capture processes, and how these processes are influenced by the platform’s governance policies on app review and platform updates. We use a time-series analysis to empirically investigate the platform ecosystem of a leading web browser. Our findings suggest that while the growth in platform usage results in long-term growth in both the number and variety of apps, the growth in the number of apps and the variety of apps only leads to short-term growth in platform usage. We also find that long app review time weakens the long-term CNE of the user-side on the app-side, but not the short-term CNE of the app-side on the user-side. Moreover, we find that frequent platform updates weaken the CNEs of both the user-side and the app-side on each other. These findings generate important implications regarding how a software platform may better govern its ecosystem with different participants.",
"title": ""
},
{
"docid": "5e5681f0bc44eebce176a806d30c37c9",
"text": "Shilling attackers apply biased rating profiles to recommender systems for manipulating online product recommendations. Although many studies have been devoted to shilling attack detection, few of them can handle the hybrid shilling attacks that usually happen in practice, and the studies for real-life applications are rarely seen. Moreover, little attention has yet been paid to modeling both labeled and unlabeled user profiles, although there are often a few labeled but numerous unlabeled users available in practice. This paper presents a Hybrid Shilling Attack Detector, or HySAD for short, to tackle these problems. In particular, HySAD introduces MC-Relief to select effective detection metrics, and Semi-supervised Naive Bayes (SNB_lambda) to precisely separate Random-Filler model attackers and Average-Filler model attackers from normal users. Thorough experiments on MovieLens and Netflix datasets demonstrate the effectiveness of HySAD in detecting hybrid shilling attacks, and its robustness for various obfuscated strategies. A real-life case study on product reviews of Amazon.cn is also provided, which further demonstrates that HySAD can effectively improve the accuracy of a collaborative-filtering based recommender system, and provide interesting opportunities for in-depth analysis of attacker behaviors. These, in turn, justify the value of HySAD for real-world applications.",
"title": ""
},
{
"docid": "84e496ee1c111f6f81703bf41fbbf26b",
"text": "Spatial databases, addressing the growing data management and analysis needs of spatial applications such as Geographic Information Systems, have been an active area of research for more than two decades. This research has produced a taxonomy of models for space, spatial data types and operators, spatial query languages and processing strategies, as well as spatial indexes and clustering techniques. However, more research is needed to improve support for network and field data, as well as query processing (e.g., cost models, bulk load). Another important need is to apply spatial data management accomplishments to newer applications, such as data warehouses and multimedia information systems. The objective of this paper is to identify recent accomplishments and associated research needs of the near term.",
"title": ""
},
{
"docid": "8d40b29088a331578e502abb2148ea8c",
"text": "Governments are increasingly realizing the importance of utilizing Information and Communication Technologies (ICT) as a tool to better address user’s/citizen’s needs. As citizen’s expectations grow, governments need to deliver services of high quality level to motivate more users to utilize these available e-services. In spite of this, governments still fall short in their service quality level offered to citizens/users. Thus understanding and measuring service quality factors become crucial as the number of services offered is increasing while not realizing what citizens/users really look for when they utilize these services. The study presents an extensive literature review on approaches used to evaluate e-government services throughout a phase of time. The study also suggested those quality/factors indicators government’s need to invest in of high priority in order to meet current and future citizen’s expectations of service quality.",
"title": ""
},
{
"docid": "b08023089abd684d26fabefb038cc9fa",
"text": "IMSI catching is a problem on all generations of mobile telecommunication networks, i.e., 2G (GSM, GPRS), 3G (HDSPA, EDGE, UMTS) and 4G (LTE, LTE+). Currently, the SIM card of a mobile phone has to reveal its identity over an insecure plaintext transmission, before encryption is enabled. This identifier (the IMSI) can be intercepted by adversaries that mount a passive or active attack. Such identity exposure attacks are commonly referred to as 'IMSI catching'. Since the IMSI is uniquely identifying, unauthorized exposure can lead to various location privacy attacks. We propose a solution, which essentially replaces the IMSIs with changing pseudonyms that are only identifiable by the home network of the SIM's own network provider. Consequently, these pseudonyms are unlinkable by intermediate network providers and malicious adversaries, and therefore mitigate both passive and active attacks, which we also formally verified using ProVerif. Our solution is compatible with the current specifications of the mobile standards and therefore requires no change in the infrastructure or any of the already massively deployed network equipment. The proposed method only requires limited changes to the SIM and the authentication server, both of which are under control of the user's network provider. Therefore, any individual (virtual) provider that distributes SIM cards and controls its own authentication server can deploy a more privacy friendly mobile network that is resilient against IMSI catching attacks.",
"title": ""
},
{
"docid": "1da19f806430077f7ad957dbeb0cb8d1",
"text": "BACKGROUND\nTo date, periorbital melanosis is an ill-defined entity. The condition has been stated to be darkening of the skin around the eyes, dark circles, infraorbital darkening and so on.\n\n\nAIMS\nThis study was aimed at exploring the nature of pigmentation in periorbital melanosis.\n\n\nMETHODS\nOne hundred consecutive patients of periorbital melanosis were examined and investigated to define periorbital melanosis. Extent of periorbital melanosis was determined by clinical examination. Wood's lamp examination was performed in all the patients to determine the depth of pigmentation. A 2-mm punch biopsy was carried out in 17 of 100 patients.\n\n\nRESULTS\nIn 92 (92%) patients periorbital melanosis was an extension of pigmentary demarcation line over the face (PDL-F).\n\n\nCONCLUSION\nPeriorbital melanosis and pigmentary demarcation line of the face are not two different conditions; rather they are two different manifestations of the same disease.",
"title": ""
},
{
"docid": "aa7111f31aac1efa9ff38d697c5dcf0b",
"text": "8 12. Loughrin, J. H., Manukian, A., Heath, R. R. & Tumlinson, J. H. Diurnal cycle of emission of induced volatile terpenoids by herbivore-injured cotton plants. J. Chem. Ecol. 21, 1217–1227 (1994). 13. Takabayashi, J., Dicke, M. & Posthumus, M. A. Variation in composition of predator-attracting allelochemicals emitted by herbivore-infested plants: Relative influence of plant and herbivore. 2, 1–6 (1991). 14. Du, Y.-J., Poppy, G. M. & Powell, W. Relative importance of semiochemicals from first and second trophic levels in host foraging behavior of Aphidius ervi. J. Chem. Ecol. 22, 1591–1605 (1996). 15. Lewis, W. J. & Takasu, K. Use of learned odours by a parasitic wasp in accordance with host and food needs. Nature 348, 635–636 (1990). 16. Tumlinson, J. H., Lewis, W. J. & Vet, L. E. M. How parasitic wasps find their hosts. Sci. Am. 268, 100– 106 (1993). 17. Bell, W. J., Kipp, L. R. & Collins, R. D. in Chemical Ecology of Insects 2 (eds Cardé, R. T. & Bell, W. J.) 105–154 (Chapman & Hall, New York, 1995). 18. Strand, M. R. & Obrycki, J. J. Host specificity of insect parasitoids and predators. BioScience 46, 422– 429 (1996). 19. Futuyma, D. J. & Moreno, G. The evolution of ecological specialization. Annu. Rev. Ecol. Syst. 19, 207– 233 (1988). 20. Thompson, J. N. The Coevolutionary Process (Univ. of Chicago Press, Chicago, 1994). 21. Röse, U. S. R., Manukian, A., Heath, R. R. & Tumlinson, J. H. Volatile semiochemicals released from undamaged cotton leaves: A systemic response of living plants to caterpillar damage. Plant Physiol. 111, 487–495 (1996). 22. Heath, R. R. & Manukian, A. J. Chem. Ecol. 20, 593–608 (1994).",
"title": ""
},
{
"docid": "0713b8668b5faf037b4553517151f9ab",
"text": "Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.",
"title": ""
},
{
"docid": "cc7aa8b5b581c3e1996189411ca09235",
"text": "Owing to a number of reasons, the deployment of encryption solutions are beginning to be ubiquitous at both organizational and individual levels. The most emphasized reason is the necessity to ensure confidentiality of privileged information. Unfortunately, it is also popular as cyber-criminals' escape route from the grasp of digital forensic investigations. The direct encryption of data or indirect encryption of storage devices, more often than not, prevents access to such information contained therein. This consequently leaves the forensics investigation team, and subsequently the prosecution, little or no evidence to work with, in sixty percent of such cases. However, it is unthinkable to jeopardize the successes brought by encryption technology to information security, in favour of digital forensics technology. This paper examines what data encryption contributes to information security, and then highlights its contributions to digital forensics of disk drives. The paper also discusses the available ways and tools, in digital forensics, to get around the problems constituted by encryption. A particular attention is paid to the Truecrypt encryption solution to illustrate ideas being discussed. It then compares encryption's contributions in both realms, to justify the need for introduction of new technologies to forensically defeat data encryption as the only solution, whilst maintaining the privacy goal of users. Keywords—Encryption; Information Security; Digital Forensics; Anti-Forensics; Cryptography; TrueCrypt",
"title": ""
},
{
"docid": "759bb2448f1d34d3742fec38f273135e",
"text": "Although below-knee prostheses have been commercially available for some time, today's devices are completely passive, and consequently, their mechanical properties remain fixed with walking speed and terrain. A lack of understanding of the ankle-foot biomechanics and the dynamic interaction between an amputee and a prosthesis is one of the main obstacles in the development of a biomimetic ankle-foot prosthesis. In this paper, we present a novel ankle-foot emulator system for the study of human walking biomechanics. The emulator system is comprised of a high performance, force-controllable, robotic ankle-foot worn by an amputee interfaced to a mobile computing unit secured around his waist. We show that the system is capable of mimicking normal ankle-foot walking behaviour. An initial pilot study supports the hypothesis that the emulator may provide a more natural gait than a conventional passive prosthesis",
"title": ""
},
{
"docid": "b4d7fccccd7a80631f1190320cfeab9e",
"text": "BACKGROUND\nPatients on surveillance for clinical stage I (CSI) testicular cancer are counseled regarding their baseline risk of relapse. The conditional risk of relapse (cRR), which provides prognostic information on patients who have survived for a period of time without relapse, have not been determined for CSI testicular cancer.\n\n\nOBJECTIVE\nTo determine cRR in CSI testicular cancer.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nWe reviewed 1239 patients with CSI testicular cancer managed with surveillance at a tertiary academic centre between 1980 and 2014. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: cRR estimates were calculated using the Kaplan-Meier method. We stratified patients according to validated risk factors for relapse. We used linear regression to determine cRR trends over time.\n\n\nRESULTS AND LIMITATIONS\nAt orchiectomy, the risk of relapse within 5 yr was 42.4%, 17.3%, 20.3%, and 12.2% among patients with high-risk nonseminomatous germ cell tumor (NSGCT), low-risk NSGCT, seminoma with tumor size ≥3cm, and seminoma with tumor size <3cm, respectively. However, for patients without relapse within the first 2 yr of follow-up, the corresponding risk of relapse within the next 5 yr in the groups was 0.0%, 1.0% (95% confidence interval [CI] 0.3-1.7%), 5.6% (95% CI 3.1-8.2%), and 3.9% (95% CI 1.4-6.4%). Over time, cRR decreased (p≤0.021) in all models. Limitations include changes to surveillance protocols over time and few late relapses.\n\n\nCONCLUSIONS\nAfter 2 yr, the risk of relapse on surveillance for CSI testicular cancer is very low. Consideration should be given to adapting surveillance protocols to individualized risk of relapse based on cRR as opposed to static protocols based on baseline factors. This strategy could reduce the intensity of follow-up for the majority of patients.\n\n\nPATIENT SUMMARY\nOur study is the first to provide data on the future risk of relapse during surveillance for clinical stage I testicular cancer, given a patient has been without relapse for a specified period of time.",
"title": ""
},
{
"docid": "316e4984bf6eef57a7f823b5303164f1",
"text": "Recent technical and infrastructural developments posit flipped (or inverted) classroom approaches ripe for exploration. Flipped classroom approaches have students use technology to access the lecture and other instructional resources outside the classroom in order to engage them in active learning during in-class time. Scholars and educators have reported a variety of outcomes of a flipped approach to instruction; however, the lack of a summary from these empirical studies prevents stakeholders from having a clear view of the benefits and challenges of this style of instruction. The purpose of this article is to provide a review of the flipped classroom approach in order to summarize the findings, to guide future studies, and to reflect the major achievements in the area of Computer Science (CS) education. 32 peer-reviewed articles were collected from a systematic literature search and analyzed based on a categorization of their main elements. The results of this survey show the direction of flipped classroom research during recent years and summarize the benefits and challenges of adopting a flipped approach in the classroom. Suggestions for future research include: describing in-detail the flipped approach; performing controlled experiments; and triangulating data from diverse sources. These future research efforts will reveal which aspects of a flipped classroom work better and under which circumstances and student groups. The findings will ultimately allow us to form best practices and a unified framework for guiding/assisting educators who want to adopt this teaching style.",
"title": ""
},
{
"docid": "be08b71c9af0e27f4f932919c2aaa24b",
"text": "Gamification is the \"use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A frequently used model for gamification is to equate an activity in the non-game context with points and have external rewards for reaching specified point thresholds. One significant problem with this model of gamification is that it can reduce the internal motivation that the user has for the activity, as it replaces internal motivation with external motivation. If, however, the game design elements can be made meaningful to the user through information, then internal motivation can be improved as there is less need to emphasize external rewards. This paper introduces the concept of meaningful gamification through a user-centered exploration of theories behind organismic integration theory, situational relevance, situated motivational affordance, universal design for learning, and player-generated content. A Brief Introduction to Gamification One definition of gamification is \"the use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A common implementation of gamification is to take the scoring elements of video games, such as points, levels, and achievements, and apply them to a work or educational context. While the term is relatively new, the concept has been around for some time through loyalty systems like frequent flyer miles, green stamps, and library summer reading programs. These gamification programs can increase the use of a service and change behavior, as users work toward meeting these goals to reach external rewards (Zichermann & Cunningham, 2011, p. 27). Gamification has met with significant criticism by those who study games. One problem is with the name. By putting the term \"game\" first, it implies that the entire activity will become an engaging experience, when, in reality, gamification typically uses only the least interesting part of a game the scoring system. The term \"pointsification\" has been suggested as a label for gamification systems that add nothing more than a scoring system to a non-game activity (Robertson, 2010). One definition of games is \"a form of play with goals and structure\" (Maroney, 2001); the points-based gamification focuses on the goals and leaves the play behind. Ian Bogost suggests the term be changed to \"exploitationware,\" as that is a better description of what is really going on (2011). The underlying message of these criticisms of gamification is that there are more effective ways than a scoring system to engage users. Another concern is that organizations getting involved with gamification are not aware of the potential long-term negative impact of gamification. Underlying the concept of gamification is motivation. People can be driven to do something because of internal or external motivation. A meta-analysis by Deci, Koestner, and Ryan of 128 studies that examined motivation in educational settings found that almost all forms of rewards (except for non-controlling verbal rewards) reduced internal motivation (2001). The implication of this is that once gamification is used to provide external motivation, the user's internal motivation decreases. If the organization starts using gamification based upon external rewards and then decides to stop the rewards program, that organization will be worse off than when it started as users will be less likely to return to the behavior without the external reward (Deci, Koestner & Ryan, 2001). 
In the book Gamification by Design, the authors claim that this belief in internal motivation over extrinsic rewards is unfounded, and gamification can be used for organizations to control the behavior of users by replacing those internal motivations with extrinsic rewards. They do admit, though, that \"once you start giving someone a reward, you have to keep her in that reward loop forever\" (Zichermann & Cunningham, 2011, p. 27). Preprint of: Nicholson, S. (2012, June). A User-Centered Theoretical Framework for Meaningful Gamification. Paper Presented at Games+Learning+Society 8.0, Madison, WI. Further exploration of the meta-analysis of motivational literature in education found that if the task was already uninteresting, reward systems did not reduce internal motivation, as there was little internal motivation to start with. The authors concluded that \"the issue is how to facilitate people's understanding the importance of the activity to themselves and thus internalizing its regulation so they will be selfmotivated to perform it\" (2001, p. 15). The goal of this paper is to explore theories useful in user-centered gamification that is meaningful to the user and therefore does not depend upon external rewards. Organismic Integration Theory Organismic Integration Theory (OIT) is a sub-theory of self-determination theory out of the field of Education created by Deci and Ryan (2004). Self-determination theory is focused on what drives an individual to make choices without external influence. OIT explores how different types of external motivations can be integrated with the underlying activity into someone’s own sense of self. Rather than state that motivations are either internalized or not, this theory presents a continuum based upon how much external control is integrated along with the desire to perform the activity. If there is heavy external control provided with a reward, then aspects of that external control will be internalized as well, while if there is less external control that goes along with the adaptation of an activity, then the activity will be more self-regulated. External rewards unrelated to the activity are the least likely to be integrated, as the perception is that someone else is controlling the individual’s behavior. Rewards based upon gaining or losing status that tap into the ego create an introjected regulation of behavior, and while this can be intrinsically accepted, the controlling aspect of these rewards causes the loss of internal motivation. Allowing users to selfidentify with goals or groups that are meaningful is much more likely to produce autonomous, internalized behaviors, as the user is able to connect these goals to other values he or she already holds. A user who has fully integrated the activity along with his or her personal goals and needs is more likely to see the activity as positive than if there is external control integrated with the activity (Deci & Ryan, 2004). OIT speaks to the importance of creating a gamification system that is meaningful to the user, assuming that the goal of the system is to create long-term systemic change where the users feel positive about engaging in the non-game activity. On the other side, if too many external controls are integrated with the activity, the user can have negative feelings about engaging in the activity. To avoid negative feelings, the game-based elements of the activity need to be meaningful and rewarding without the need for external rewards. 
In order for these activities to be meaningful to a specific user, however, they have to be relevant to that user. Situational Relevance and Situated Motivational Affordance One of the key research areas in Library and Information Science has been about the concept of relevance as related to information retrieval. A user has an information need, and a relevant document is one that resolves some of that information need. The concept of relevance is important in determining the effectiveness of search tools and algorithms. Many research projects that have compared search tools looked at the same query posed to different systems, and then used judges to determine what was a \"relevant\" response to that query. This approach has been heavily critiqued, as there are many variables that affect if a user finds something relevant at that moment in his or her searching process. Schamber reviewed decades of research to find generalizable criteria that could be used to determine what is truly relevant to a query and came to the conclusion that the only way to know if something is relevant is to ask the user (1994). Two users with the same search query will have different information backgrounds, so that a document that is relevant for one user may not be relevant to another user. This concept of \"situational relevance\" is important when thinking about gamification. When someone else creates goals for a user, it is akin to an external judge deciding what is relevant to a query. Without involving the user, there is no way to know what goals are relevant to a user's background, interest, or needs. In a points-based gamification system, the goal of scoring points is less likely to be relevant to a user if the activity that the points measure is not relevant to that user. For example, in a hybrid automobile, the gamification systems revolve around conservation and the point system can reflect how much energy is being saved. If the concept of saving energy is relevant to a user, then a point system Preprint of: Nicholson, S. (2012, June). A User-Centered Theoretical Framework for Meaningful Gamification. Paper Presented at Games+Learning+Society 8.0, Madison, WI. based upon that concept will also be relevant to that user. If the user is not internally concerned with saving energy, then a gamification system based upon saving energy will not be relevant to that user. There may be other elements of the driving experience that are of interest to a user, so if each user can select what aspect of the driving experience is measured, more users will find the system to be relevant. By involving the user in the creation or customization of the gamification system, the user can select or create meaningful game elements and goals that fall in line with their own interests. A related theory out of Human-Computer Interaction that has been applied to gamification is “situated motivational affordance” (Deterding, 2011b). This model was designed to help gamification designers consider the context of each o",
"title": ""
},
{
"docid": "938f49e103d0153c82819becf96f126c",
"text": "Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.",
"title": ""
},
{
"docid": "6674467b8453946a6b09c6662d88b764",
"text": "This paper reports the first set of results from a comprehensive set of experiments to detect realistic insider threat instances in a real corporate database of computer usage activity. It focuses on the application of domain knowledge to provide starting points for further analysis. Domain knowledge is applied (1) to select appropriate features for use by structural anomaly detection algorithms, (2) to identify features indicative of activity known to be associated with insider threat, and (3) to model known or suspected instances of insider threat scenarios. We also introduce a visual language for specifying anomalies across different types of data, entities, baseline populations, and temporal ranges. Preliminary results of our experiments on two months of live data suggest that these methods are promising, with several experiments providing area under the curve scores close to 1.0 and lifts ranging from ×20 to ×30 over random.",
"title": ""
}
] |
scidocsrr
|
c90c9ed2ad0f3d5f8796279234b31c93
|
Data Security and Privacy for Outsourced Data in the Cloud
|
[
{
"docid": "a9ac82abcad5d4120a6c4d1ea8dacaee",
"text": "The advent of cloud computing has ushered in an era of mass data storage in remote servers. Remote data storage offers reduced data management overhead for data owners in a cost effective manner. Sensitive documents, however, need to be stored in encrypted format due to security concerns. But, encrypted storage makes it difficult to search on the stored documents. Therefore, this poses a major barrier towards selective retrieval of encrypted documents from the remote servers. Various protocols have been proposed for keyword search over encrypted data to address this issue. Most of the available protocols leak data access patterns due to efficiency reasons. Although, oblivious RAM based protocols can be used to hide data access patterns, such protocols are computationally intensive and do not scale well for real world datasets. In this paper, we introduce a novel attack that exploits data access pattern leakage to disclose significant amount of sensitive information using a modicum of prior knowledge. Our empirical analysis with a real world dataset shows that the proposed attack is able to disclose sensitive information with a very high accuracy. Additionally, we propose a simple technique to mitigate the risk against the proposed attack at the expense of a slight increment in computational resources and communication cost. Furthermore, our proposed mitigation technique is generic enough to be used in conjunction with any searchable encryption scheme that reveals data access pattern.",
"title": ""
},
{
"docid": "1600d4662fc5939c5f737756e2d3e823",
"text": "Predicate encryption is a new paradigm for public-key encryption that generalizes identity-based encryption and more. In predicate encryption, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known only for certain classes of predicates. We construct a scheme for predicates corresponding to the evaluation of inner products over ℤ N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulas, thresholds, and more. Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.",
"title": ""
}
] |
[
{
"docid": "e1d9af5c12dbc4e747bd7cb219e706fb",
"text": "Web reviews have been intensively studied in argumentation-related tasks such as sentiment analysis. However, due to their focus on content-based features, many sentiment analysis approaches are effective only for reviews from those domains they have been specifically modeled for. This paper puts its focus on domain independence and asks whether a general model can be found for how people argue in web reviews. Our hypothesis is that people express their global sentiment on a topic with similar sequences of local sentiment independent of the domain. We model such sentiment flow robustly under uncertainty through abstraction. To test our hypothesis, we predict global sentiment based on sentiment flow. In systematic experiments, we improve over the domain independence of strong baselines. Our findings suggest that sentiment flow qualifies as a general model of web review argumentation.",
"title": ""
},
{
"docid": "5a777c011d7dbd82653b1b2d0f007607",
"text": "The Factored Language Model (FLM) is a flexible framework for incorporating various information sources, such as morphology and part-of-speech, into language modeling. FLMs have so far been successfully applied to tasks such as speech recognition and machine translation; it has the potential to be used in a wide variety of problems in estimating probability tables from sparse data. This tutorial serves as a comprehensive description of FLMs and related algorithms. We document the FLM functionalities as implemented in the SRI Language Modeling toolkit and provide an introductory walk-through using FLMs on an actual dataset. Our goal is to provide an easy-to-understand tutorial and reference for researchers interested in applying FLMs to their problems. Overview of the Tutorial We first describe the factored language model (Section 1) and generalized backoff (Section 2), two complementary techniques that attempt to improve statistical estimation (i.e., reduce parameter variance) in language models, and that also attempt to better describe the way in which language (and sequences of words) might be produced. Researchers familar with the algorithms behind FLMs may skip to Section 3, which describes the FLM programs and file formats in the publicly-available SRI Language Modeling (SRILM) toolkit.1 Section 4 is a step-by-step walkthrough with several FLM examples on a real language modeling dataset. This may be useful for beginning users of the FLMs. Finally, Section 5 discusses the problem of automatically tuning FLM parameters on real datasets and refers to existing software. This may be of interest to advanced users of FLMs.",
"title": ""
},
{
"docid": "7cfe3122c904953edf3fcd6c35a549de",
"text": "This paper studies the practical impact of the branching heuristics used in Propositional Satisfiability (SAT) algorithms, when applied to solving real-world instances of SAT. In addition, different SAT algorithms are experimentally evaluated. The main conclusion of this study is that even though branching heuristics are crucial for solving SAT, other aspects of the organization of SAT algorithms are also essential. Moreover, we provide empirical evidence that for practical instances of SAT, the search pruning techniques included in the most competitive SAT algorithms may be of more fundamental significance than branching heuristics.",
"title": ""
},
{
"docid": "df9d85417753465e489b327b83c4211d",
"text": "As an integral component of blind image deblurring, non-blind deconvolution removes image blur with a given blur kernel, which is essential but difficult due to the ill-posed nature of the inverse problem. The predominant approach is based on optimization subject to regularization functions that are either manually designed, or learned from examples. Existing learning based methods have shown superior restoration quality but are not practical enough due to their restricted model design. They solely focus on learning a prior and require to know the noise level for deconvolution. We address the gap between the optimizationbased and learning-based approaches by learning an optimizer. We propose a Recurrent Gradient Descent Network (RGDN) by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme. A parameterfree update unit is used to generate updates from the current estimates, based on a convolutional neural network. By training on diverse examples, the Recurrent Gradient Descent Network learns an implicit image prior and a universal update rule through recursive supervision. Extensive experiments on synthetic benchmarks and challenging real-world images demonstrate that the proposed method is effective and robust to produce favorable results as well as practical for realworld image deblurring applications.",
"title": ""
},
{
"docid": "13ac8eddda312bd4ef3ba194c076a6ea",
"text": "With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a novel dataset was introduced to the computer vision and multimedia research community. To maximize the benefit for the research community and utilize its potential, this dataset has to be made accessible by tools allowing to search for target concepts within the dataset and mechanism to browse images and videos of the dataset. Following best practice from data collections, such as ImageNet and MS COCO, this paper presents means of accessibility for the YFCC100m dataset. This includes a global analysis of the dataset and an online browser to explore and investigate subsets of the dataset in real-time. Providing statistics of the queried images and videos will enable researchers to refine their query successively, such that the users desired subset of interest can be narrowed down quickly. The final set of image and video can be downloaded as URLs from the browser for further processing.",
"title": ""
},
{
"docid": "de2ed315762d3f0ac34fe0b77567b3a2",
"text": "A study in vitro of specimens of human aortic and common carotid arteries was carried out to determine the feasibility of direct measurement (i.e., not from residual lumen) of arterial wall thickness with B mode real-time imaging. Measurements in vivo by the same technique were also obtained from common carotid arteries of 10 young normal male subjects. Aortic samples were classified as class A (relatively normal) or class B (with one or more atherosclerotic plaques). In all class A and 85% of class B arterial samples a characteristic B mode image composed of two parallel echogenic lines separated by a hypoechoic space was found. The distance between the two lines (B mode image of intimal + medial thickness) was measured and correlated with the thickness of different combinations of tunicae evaluated by gross and microscopic examination. On the basis of these findings and the results of dissection experiments on the intima and adventitia we concluded that results of B mode imaging of intimal + medial thickness did not differ significantly from the intimal + medial thickness measured on pathologic examination. With respect to the accuracy of measurements obtained by B mode imaging as compared with pathologic findings, we found an error of less than 20% for measurements in 77% of normal and pathologic aortic walls. In addition, no significant difference was found between B mode-determined intimal + medial thickness in the common carotid arteries evaluated in vitro and that determined by this method in vivo in young subjects, indicating that B mode imaging represents a useful approach for the measurement of intimal + medial thickness of human arteries in vivo.",
"title": ""
},
{
"docid": "386bcf00ecc6ff1e21a8b06632cdf77e",
"text": "With an interactive simulation of the ENIAC, users can wire complex configurations of the machine's modules. The simulation, written in Java, can be started from an Internet site. The simulation has been tested with a 6-meter-long data wall, which provides the closest available approximation to the look and feel of programming this historical computer.",
"title": ""
},
{
"docid": "841c200c322e596e414b16c719927ca0",
"text": "A novel compact ultrawideband (UWB) printed slot antenna with three extra bands for various wireless applications is presented. The low-profile antenna consists of an octagonal-shaped slot fed by a beveled and stepped rectangular patch for covering the UWB band (3.1-10.6 GHz). By attaching three inverted U-shaped strips at the upper part of the slot in the ground, additional triple linear polarized bands can be realized covering GPS (1520-1590 MHz), part of GSM (1770-1840 MHz), and Bluetooth (2385-2490 MHz). Simulated and measured results are presented and compared, which shows that the antenna has a stable radiation pattern both at the triple and the whole of the UWB bands.",
"title": ""
},
{
"docid": "8c0d117602ecadee24215f5529e527c6",
"text": "We present the first open-set language identification experiments using one-class classification models. We first highlight the shortcomings of traditional feature extraction methods and propose a hashing-based feature vectorization approach as a solution. Using a dataset of 10 languages from different writing systems, we train a One-Class Support Vector Machine using only a monolingual corpus for each language. Each model is evaluated against a test set of data from all 10 languages and we achieve an average F-score of 0.99, demonstrating the effectiveness of this approach for open-set language identification.",
"title": ""
},
{
"docid": "6efa91e21aa822f319e32157fe2c3ce4",
"text": "We present a new approach to semi-supervised anomaly detection. Given a set of training examples believed to come from the same distribution or class, the task is to learn a model that will be able to distinguish examples in the future that do not belong to the same class. Traditional approaches typically compare the position of a new data point to the set of ``normal'' training data points in a chosen representation of the feature space. For some data sets, the normal data may not have discernible positions in feature space, but do have consistent relationships among some features that fail to appear in the anomalous examples. Our approach learns to predict the values of training set features from the values of other features. After we have formed an ensemble of predictors, we apply this ensemble to new data points. To combine the contribution of each predictor in our ensemble, we have developed a novel, information-theoretic anomaly measure that our experimental results show selects against noisy and irrelevant features. Our results on 47 data sets show that for most data sets, this approach significantly improves performance over current state-of-the-art feature space distance and density-based approaches.",
"title": ""
},
{
"docid": "0a2e59ab99b9666d8cf3fb31be9fa40c",
"text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals are well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To our best knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "0c0b099a2a4a404632a1f065cfa328c4",
"text": "Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms—Forest (pyQuil), QISKit, ProjectQ, and the Quantum Developer Kit—that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.",
"title": ""
},
{
"docid": "90fc857db7207f0a94dd91fbaa48be4f",
"text": "We present a computational origami construction of Morley’s triangles and automated proof of correctness of the generalized Morley’s theorem in a streamlined process of solving-computing-proving. The whole process is realized by a computational origami system being developed by us. During the computational origami construction, geometric constraints in symbolic and numeric representation are generated and accumulated. Those constraints are then transformed into algebraic relations, which in turn are used to prove the correctness of the construction. The automated proof required non-trivial amount of computer resources, and shows the necessity of networked services of mathematical software. This example is considered to be a case study for innovative mathematical knowledge management.",
"title": ""
},
{
"docid": "2b362c476a2081ce59bade9b845f10e4",
"text": "OBJECTIVE\nTo assess the clinical outcome of crushed cartilage grafts used to conceal contour irregularities in rhinoplasty.\n\n\nMETHODS\nWe reviewed the medical records of 462 patients in whom crushed autogenous cartilage grafts were used, selected from a total of 669 patients in whom rhinoplasty procedures were performed at our institution between June 1, 1999, and June 1, 2006. The grafts were used as slightly, moderately, significantly, or severely crushed.\n\n\nRESULTS\nEight hundred nine cartilage grafts (41 slightly crushed grafts [5%], 650 moderately crushed grafts [80%], and 118 significantly crushed grafts [15%]) were used in 462 patients. Resorption occurred in 11 of the 462 patients (2.4%). All of the resorbed grafts (6 moderately crushed grafts and 5 significantly crushed grafts) had been placed in the dorsal area. The resorption rate of those grafts was lower in the moderately crushed cartilage grafts (6 of 284 grafts [2.1%]) than in the significantly crushed grafts (5 of 38 grafts [13.1%]). There was no resorption of slightly crushed grafts.\n\n\nCONCLUSIONS\nThe degree of crushing applied is important for long-term clinical outcome of autogenous crushed cartilage grafts. Slight or moderate crushing of cartilage creates an outstanding graft material for concealing irregularities and provides both excellent long-term clinical outcome and predictable esthetic results.",
"title": ""
},
{
"docid": "5968bec60ac0f41d6d1f06c23880f6fa",
"text": "Object recognition in X-ray images is an interesting application of machine vision that can help reduce the workload of human operators of X-ray scanners at security checkpoints. However, automatic inspection systems using machine vision techniques are not yet commonplace for generic threat detection in X-ray images. Moreover, this problem has not been well explored by machine vision community due to the lack of publicly available X-ray image datasets. This paper aims to fill in this gap. We first present a comprehensive evaluation of image classification and object detection in X-ray images using standard local features in a BoW framework with (structural) SVMs. Then, we extend the features to utilize the extra information available in dual energy X-ray images. Finally, we propose a multi-view branch-and-bound algorithm for multiview object detection. Through extensive experiments on three object categories (laptops, guns, bottles), we show that the classification and detection performance substantially improves with the extended features and multiple views.",
"title": ""
},
{
"docid": "f1cfb30b328725121ed232381d43ac3a",
"text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1",
"title": ""
},
{
"docid": "7bf137d513e7a310e121eecb5f59ae27",
"text": "BACKGROUND\nChildren with intellectual disability are at heightened risk for behaviour problems and diagnosed mental disorder.\n\n\nMETHODS\nThe present authors studied the early manifestation and continuity of problem behaviours in 205 pre-school children with and without developmental delays.\n\n\nRESULTS\nBehaviour problems were quite stable over the year from age 36-48 months. Children with developmental delays were rated higher on behaviour problems than their non-delayed peers, and were three times as likely to score in the clinical range. Mothers and fathers showed high agreement in their rating of child problems, especially in the delayed group. Parenting stress was also higher in the delayed group, but was related to the extent of behaviour problems rather than to the child's developmental delay.\n\n\nCONCLUSIONS\nOver time, a transactional model fit the relationship between parenting stress and behaviour problems: high parenting stress contributed to a worsening in child behaviour problems over time, and high child behaviour problems contributed to a worsening in parenting stress. Findings for mothers and fathers were quite similar.",
"title": ""
},
{
"docid": "b2abd93f4e580ee2e0304432b69f4ae7",
"text": "In this work, we present a Reinforcement Learning (RL) based approach for autonomous driving in highway scenarios, including interaction with other vehicles. The method used is Fitted Q-iteration [1] with Extremely Randomized Trees [2] as a function approximator. We demonstrate that Reinforcement Learning based concepts can be successfully applied and can be used to teach a RL agent to drive autonomously in an intelligent way, by following traffic rules and ensuring safety. By combining RL with the already established control concepts, we managed to build an agent that achieved promising results in the realistic simulated environment.",
"title": ""
},
{
"docid": "aed97de827b675d3ddb3e04274f73428",
"text": "In paid search advertising on Internet search engines, advertisers bid for specific keywords, e.g. “Rental Cars LAX,” to display a text ad in the sponsored section of the search results page. The advertiser is charged when a user clicks on the ad. Many of the keywords in paid search campaigns generate few, if any, sales conversions – even over several months. This sparseness makes it difficult to assess the profit performance of individual keywords and has led to the practice of managing large groups of keywords together or relying on easy-to-calculate heuristics such as click-through rate (CTR). The authors develop a model of individual keyword conversion that addresses the sparseness problem. Conversion rates are estimated using a hierarchical Bayes binary choice model. This enables conversion to be based on both word-level covariates and shrinkage across keywords. The model is applied to keyword-level paid search data containing daily information on impressions, clicks and reservations for a major lodging chain. The results show that including keyword-level covariates and heterogeneity significantly improves conversion estimates. A holdout comparison suggests that campaign management based on the model, i.e., estimated costper-sale on a keyword level, would outperform existing managerial strategies.",
"title": ""
}
] |
scidocsrr
|
f0d1e1f9d284af3137ac37351f0c5082
|
A Combination of Object Recognition and Localisation for an Autonomous Racecar
|
[
{
"docid": "4250ae1e0b2c662b98171acaeaa35028",
"text": "For many applications in Urban Search and Rescue (USAR) scenarios robots need to learn a map of unknown environments. We present a system for fast online learning of occupancy grid maps requiring low computational resources. It combines a robust scan matching approach using a LIDAR system with a 3D attitude estimation system based on inertial sensing. By using a fast approximation of map gradients and a multi-resolution grid, reliable localization and mapping capabilities in a variety of challenging environments are realized. Multiple datasets showing the applicability in an embedded hand-held mapping system are provided. We show that the system is sufficiently accurate as to not require explicit loop closing techniques in the considered scenarios. The software is available as an open source package for ROS.",
"title": ""
},
{
"docid": "a77eddf9436652d68093946fbe1d2ed0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] |
[
{
"docid": "5ce46dd6704793798ca6c24fedfa611c",
"text": "We introduce the URIEL knowledge base for massively multilingual NLP and the lang2vec utility, which provides information-rich vector identifications of languages drawn from typological, geographical, and phylogenetic databases that are normalized to have straightforward and consistent formats, naming, and semantics. The goal of URIEL and lang2vec is to enable multilingual NLP, especially on less-resourced languages and make possible types of experiments (especially but not exclusively related to NLP tasks) that are otherwise difficult or impossible due to the sparsity and incommensurability of the data sources. lang2vec vectors have been shown to reduce perplexity in multilingual language modeling, when compared to one-hot language identification vectors.",
"title": ""
},
{
"docid": "9f870b08b07296e9dc85bee7d10858a2",
"text": "A common problem in computer tomography (CT) based imaging of the oral cavity is artefacts caused by dental restorations. The aim of this study was to investigate whether magnetic resonance imaging (MRI) of the oral cavity would be less affected than CT by artefacts caused by typical dental restorative alloys. In order to assess the extent of artefact generation, corresponding MRI scans of the same anatomic region with and without dental metal restorations were matched using a stereotactic frame. MRI imaging of the oral and maxillofacial region could be performed without reduction of the image quality by metallic dental restorations made from titanium, gold or amalgam. Dental restorations made from titanium, gold or amalgam did not reduce the image quality of the MRI sequence used in imaging of the oral and maxillofacial region for dental implant planning. In this respect MRI is superior to CT in implant planning.",
"title": ""
},
{
"docid": "df5778fce3318029d249de1ff37b0715",
"text": "The Switched Reluctance Machine (SRM) is a robust machine and is a candidate for ultra high speed applications. Until now the area of ultra high speed machines has been dominated by permanent magnet machines (PM). The PM machine has a higher torque density and some other advantages compared to SRMs. However, the soaring prices of the rare earth materials are driving the efforts to find an alternative to PM machines without significantly impacting the performance. At the same time significant progress has been made in the design and control of the SRM. This paper reviews the progress of the SRM as a high speed machine and proposes a novel rotor structure design to resolve the challenge of high windage losses at ultra high speed. It then elaborates on the path of modifying the design to achieve optimal performance. The simulation result of the final design is verified on FEA software. Finally, a prototype machine with similar design is built and tested to verify the simulation model. The experimental waveform indicates good agreement with the simulation result. Therefore, the performance of the prototype machine is analyzed and presented at the end of this paper.",
"title": ""
},
{
"docid": "93df3ce5213252f8ae7dbd396ebb71bd",
"text": "Role-Based Access Control (RBAC) has been the dominant access control model in industry since the 1990s. It is widely implemented in many applications, including major cloud platforms such as OpenStack, AWS, and Microsoft Azure. However, due to limitations of RBAC, there is a shift towards Attribute-Based Access Control (ABAC) models to enhance flexibility by using attributes beyond roles and groups. In practice, this shift has to be gradual since it is unrealistic for existing systems to abruptly adopt ABAC models, completely eliminating current RBAC implementations.In this paper, we propose an ABAC extension with user attributes for the OpenStack Access Control (OSAC) model and demonstrate its enforcement utilizing the Policy Machine (PM) developed by the National Institute of Standards and Technology. We utilize some of the PM's components along with a proof-of-concept implementation to enforce this ABAC extension for OpenStack, while keeping OpenStack's current RBAC architecture in place. This provides the benefits of enhancing access control flexibility with support of user attributes, while minimizing the overhead of altering the existing OpenStack access control framework. We present use cases to depict added benefits of our model and show enforcement results. We then evaluate the performance of our proposed ABAC extension, and discuss its applicability and possible performance enhancements.",
"title": ""
},
{
"docid": "a272d084fe7032e7f3c6df5a2e6bec8e",
"text": "In the work of one of us (A.W.) on the conjecture that all elliptic curves defined over Q are modular, the importance of knowing that certain Hecke algebras are complete intersections was established. The purpose of this article is to provide the missing ingredient in [W2] by establishing that the Hecke algebras considered there are complete intersections. As is recorded in [W2], a method going back to Mazur [M] allows one to show that these algebras are Gorenstein, but this seems to be too weak for the purposes of that paper. The methods of this paper are related to those of chapter 3 of [W2]. We would like to thank Henri Darmon, Fred Diamond and Gerd Faltings for carefully reading the first version of this article. Gerd Faltings has also suggested a simplification of our argument and we would like to thank him for allowing us to reproduce this in the appendix to this paper. R.T. would like to thank A.W. for his invitation to collaborate on these problems and for sharing his many insights into the questions considered. R.T. would also like to thank Princeton University, Université de Paris 7 and Harvard University for their hospitality during some of the work on this paper. A.W. was supported by an NSF grant.",
"title": ""
},
{
"docid": "4d41939b70ecd86ba1a82df3b89a0717",
"text": "The analysis and design of a millimeter-wave conical conformal shaped-beam substrate-integrated waveguide (SIW) array antenna is demonstrated in this paper. After investigating the influence of the conical surface on the propagation characteristics of a conformal SIW, a modification for the width of a conical conformal SIW is proposed to obtain the same propagation characteristic along the longitudinal direction. This feature is indispensable to employ the classic equivalent circuit of a planar slot array antenna in the design of a conical conformal antenna. In this case, the design process of the conformal antenna can be simplified. An efficient and accurate model method of the conical conformal SIW antenna is presented as well. Then, a design process of the conical conformal SIW slot array antenna is introduced. Furthermore, to implement the transition between a conical surface and a cylindrical surface, a flexible SIWtransition is designed with a good impedance matching. Finally, two low sidelobe level (SLL) SIW conical conformal antennas with and without the flexible transitions are designed. Both of them have −28 dB SLLs in H-plane at the center frequency of 35 GHz.",
"title": ""
},
{
"docid": "6d8a413767d9fab8ef3ca22daaa0e921",
"text": "Query-oriented summarization addresses the problem of information overload and help people get the main ideas within a short time. Summaries are composed by sentences. So, the basic idea of composing a salient summary is to construct quality sentences both for user specific queries and multiple documents. Sentence embedding has been shown effective in summarization tasks. However, these methods lack of the latent topic structure of contents. Hence, the summary lies only on vector space can hardly capture multi-topical content. In this paper, our proposed model incorporates the topical aspects and continuous vector representations, which jointly learns semantic rich representations encoded by vectors. Then, leveraged by topic filtering and embedding ranking model, the summarization can select desirable salient sentences. Experiments demonstrate outstanding performance of our proposed model from the perspectives of prominent topics and semantic coherence.",
"title": ""
},
{
"docid": "a7623185df940b128af6187d7d1e0b9c",
"text": "Inflammasomes are high-molecular-weight protein complexes that are formed in the cytosolic compartment in response to danger- or pathogen-associated molecular patterns. These complexes enable activation of an inflammatory protease caspase-1, leading to a cell death process called pyroptosis and to proteolytic cleavage and release of pro-inflammatory cytokines interleukin (IL)-1β and IL-18. Along with caspase-1, inflammasome components include an adaptor protein, ASC, and a sensor protein, which triggers the inflammasome assembly in response to a danger signal. The inflammasome sensor proteins are pattern recognition receptors belonging either to the NOD-like receptor (NLR) or to the AIM2-like receptor family. While the molecular agonists that induce inflammasome formation by AIM2 and by several other NLRs have been identified, it is not well understood how the NLR family member NLRP3 is activated. Given that NLRP3 activation is relevant to a range of human pathological conditions, significant attempts are being made to elucidate the molecular mechanism of this process. In this review, we summarize the current knowledge on the molecular events that lead to activation of the NLRP3 inflammasome in response to a range of K (+) efflux-inducing danger signals. We also comment on the reported involvement of cytosolic Ca (2+) fluxes on NLRP3 activation. We outline the recent advances in research on the physiological and pharmacological mechanisms of regulation of NLRP3 responses, and we point to several open questions regarding the current model of NLRP3 activation.",
"title": ""
},
{
"docid": "a7b0f0455482765efd3801c3ae9f85b7",
"text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.",
"title": ""
},
{
"docid": "ff941ef3217a11602d7be2889856180d",
"text": "Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived cost and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path caused the robot to have a negative impact on the workflow and staff resistance. On the contrary, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.",
"title": ""
},
{
"docid": "4162c6bbaac397ff24e337fa4af08abd",
"text": "We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suitable to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and we classify the intent of the spoken utterance based on the lattice embedding computed by a LATTICERNN. We show that making decisions based on the full ASR output lattice, as opposed to 1-best or n-best hypotheses, makes SLU systems more robust to ASR errors. Our experiments yield improvements of 13% over a baseline RNN system trained on transcriptions and 10% over an nbest list rescoring system for intent classification.",
"title": ""
},
{
"docid": "65ed76a0642b3dd58c99b07c35fc635d",
"text": "A novel dual-layer multibeam pillbox antenna with a slotted waveguide radiating part in substrate-integrated waveguide (SIW) technology is proposed. In contrast to previous works, the design goal is to have a multibeam antenna with arbitrary low sidelobes and at the same time a high crossing level between adjacent radiated beams. These two constraints cannot be satisfied simultaneously for any passive and lossless multibeam antenna systems with a single radiating aperture due to beam orthogonality. Here, this limitation is overcome using the “split aperture decoupling” method which consists in using two radiating apertures. Each aperture is associated with a pillbox quasi-optical system with several integrated feed horns in its focal plane so as to steer the main beam in the azimuthal plane. The antenna operates at 24.15 GHz and presents very good scanning performance over an angular sector of ±40°, with a good agreement between full-wave simulations and measurements. The crossover level between adjacent beams is about -3 dB with a sidelobe level lower than -24 dB for the central beam and better than -11 dB for the extreme beam positions. The isolation between feed horns in the same pillbox system is better than 20 dB.",
"title": ""
},
{
"docid": "85d8b05b8292bedb0e22feb1b26a31b5",
"text": "We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.",
"title": ""
},
{
"docid": "d7907565c4ea6782cdb0c7b281a9d636",
"text": "Acute appendicitis (AA) is among the most common cause of acute abdominal pain. Diagnosis of AA is challenging; a variable combination of clinical signs and symptoms has been used together with laboratory findings in several scoring systems proposed for suggesting the probability of AA and the possible subsequent management pathway. The role of imaging in the diagnosis of AA is still debated, with variable use of US, CT and MRI in different settings worldwide. Up to date, comprehensive clinical guidelines for diagnosis and management of AA have never been issued. In July 2015, during the 3rd World Congress of the WSES, held in Jerusalem (Israel), a panel of experts including an Organizational Committee and Scientific Committee and Scientific Secretariat, participated to a Consensus Conference where eight panelists presented a number of statements developed for each of the eight main questions about diagnosis and management of AA. The statements were then voted, eventually modified and finally approved by the participants to The Consensus Conference and lately by the board of co-authors. The current paper is reporting the definitive Guidelines Statements on each of the following topics: 1) Diagnostic efficiency of clinical scoring systems, 2) Role of Imaging, 3) Non-operative treatment for uncomplicated appendicitis, 4) Timing of appendectomy and in-hospital delay, 5) Surgical treatment 6) Scoring systems for intra-operative grading of appendicitis and their clinical usefulness 7) Non-surgical treatment for complicated appendicitis: abscess or phlegmon 8) Pre-operative and post-operative antibiotics.",
"title": ""
},
{
"docid": "80ccc8b5f9e68b5130a24fe3519b9b62",
"text": "A MIMO antenna of size 40mm × 40mm × 1.6mm is proposed for WLAN applications. Antenna consists of four mushroom shaped Apollonian fractal planar monopoles having micro strip feed lines with edge feeding. It uses defective ground structure (DGS) to achieve good isolation. To achieve more isolation, the antenna elements are placed orthogonal to each other. Further, isolation can be increased using parasitic elements between the elements of antenna. Simulation is done to study reflection coefficient as well as coupling between input ports, directivity, peak gain, efficiency, impedance and VSWR. Results show that MIMO antenna has a bandwidth of 1.9GHZ ranging from 5 to 6.9 GHz, and mutual coupling of less than -20dB.",
"title": ""
},
{
"docid": "d774759e03329d0cc5611ab9104f8299",
"text": "The flexibility of neural networks is a very powerful property. In many cases, these changes lead to great improvements in accuracy compared to basic models that we discussed in the previous tutorial. In the last part of the tutorial, I will also explain how to parallelize the training of neural networks. This is also an important topic because parallelizing neural networks has played an important role in the current deep learning movement.",
"title": ""
},
{
"docid": "c5beaa8be086776c769caedc30815aa8",
"text": "Three studies were conducted to examine the correlates of adult attachment. In Study 1, an 18-item scale to measure adult attachment style dimensions was developed based on Kazan and Shaver's (1987) categorical measure. Factor analyses revealed three dimensions underlying this measure: the extent to which an individual is comfortable with closeness, feels he or she can depend on others, and is anxious or fearful about such things as being abandoned or unloved. Study 2 explored the relation between these attachment dimensions and working models of self and others. Attachment dimensions were found to be related to self-esteem, expressiveness, instrumentality, trust in others, beliefs about human nature, and styles of loving. Study 3 explored the role of attachment style dimensions in three aspects of ongoing dating relationships: partner matching on attachment dimensions; similarity between the attachment of one's partner and caregiving style of one's parents; and relationship quality, including communication, trust, and satisfaction. Evidence was obtained for partner matching and for similarity between one's partner and one's parents, particularly for one's opposite-sex parent. Dimensions of attachment style were strongly related to how each partner perceived the relationship, although the dimension of attachment that best predicted quality differed for men and women. For women, the extent to which their partner was comfortable with closeness was the best predictor of relationship quality, whereas the best predictor for men was the extent to which their partner was anxious about being abandoned or unloved.",
"title": ""
},
{
"docid": "241c020b8dfe347e362e20dfcd98f419",
"text": "The old electricity network infrastructure has proven to be inadequate, with respect to modern challenges such as alternative energy sources, electricity demand and energy saving policies. Moreover, Information and Communication Technologies (ICT) seem to have reached an adequate level of reliability and flexibility in order to support a new concept of electricity network—the smart grid. In this work, we will analyse the state-of-the-art of smart grids, in their technical, management, security, and optimization aspects. We will also provide a brief overview of the regulatory aspects involved in the development of a smart grid, mainly from the viewpoint of the European Union.",
"title": ""
},
{
"docid": "c6cdcc4fbcb95ce3938ab9e837daa70d",
"text": "In this paper, we study the problem of fractional-order PID controller design for an unstable plant-a laboratory model of a magnetic levitation system. To this end, we apply model based control design. A model of the magnetic lévitation system is obtained by means of a closed-loop experiment. Several stable fractional-order controllers are identified and optimized by considering isolated stability regions. Finally, a nonintrusive controller retuning method is used to incorporate fractional-order dynamics into the existing control loop, thereby enhancing its performance. Experimental results confirm the effectiveness of the proposed approach. Control design methods offered in this paper are general enough to be applicable to a variety of control problems.",
"title": ""
},
{
"docid": "9afdeab9abb1bfde45c6e9f922181c6b",
"text": "Aiming at the need for autonomous learning in reinforcement learning (RL), a quantitative emotion-based motivation model is proposed by introducing psychological emotional factors as the intrinsic motivation. The curiosity is used to promote or hold back agents' exploration of unknown states, the happiness index is used to determine the current state-action's happiness level, the control power is used to indicate agents' control ability over its surrounding environment, and together to adjust agents' learning preferences and behavioral patterns. To combine intrinsic emotional motivations with classic RL, two methods are proposed. The first method is to use the intrinsic emotional motivations to explore unknown environment and learn the environment transitioning model ahead of time, while the second method is to combine intrinsic emotional motivations with external rewards as the ultimate joint reward function, directly to drive agents' learning. As the result shows, in the simulation experiments in the rat foraging in maze scenario, both methods have achieved relatively good performance, compared with classic RL purely driven by external rewards.",
"title": ""
}
] |
scidocsrr
|
9fa9f6e114c662ae25445f9caf004af2
|
Bitcoin-NG: A Scalable Blockchain Protocol
|
[
{
"docid": "0f0799a04328852b8cfa742cbc2396c9",
"text": "Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allow instant transfers, laying the foundation of the PSP network.",
"title": ""
}
] |
[
{
"docid": "6e4d8bde993e88fa2c729d2fafb6fd90",
"text": "The plant hormones gibberellin and abscisic acid regulate gene expression, secretion and cell death in aleurone. The emerging picture is of gibberellin perception at the plasma membrane whereas abscisic acid acts at both the plasma membrane and in the cytoplasm - although gibberellin and abscisic acid receptors have yet to be identified. A range of downstream-signalling components and events has been implicated in gibberellin and abscisic acid signalling in aleurone. These include the Galpha subunit of a heterotrimeric G protein, a transient elevation in cGMP, Ca2+-dependent and Ca2+-independent events in the cytoplasm, reversible protein phosphory-lation, and several promoter cis-elements and transcription factors, including GAMYB. In parallel, molecular genetic studies on mutants of Arabidopsis that show defects in responses to these hormones have identified components of gibberellin and abscisic acid signalling. These two approaches are yielding results that raise the possibility that specific gibberellin and abscisic acid signalling components perform similar functions in aleurone and other tissues.",
"title": ""
},
{
"docid": "699c2891ce4988901f4b5a6b390906a3",
"text": "In this work, we address the problem of cross-modal retrieval in presence of multi-label annotations. In particular, we introduce multi-label Canonical Correlation Analysis (ml-CCA), an extension of CCA, for learning shared subspaces taking into account high level semantic information in the form of multi-label annotations. Unlike CCA, ml-CCA does not rely on explicit pairing between modalities, instead it uses the multi-label information to establish correspondences. This results in a discriminative subspace which is better suited for cross-modal retrieval tasks. We also present Fast ml-CCA, a computationally efficient version of ml-CCA, which is able to handle large scale datasets. We show the efficacy of our approach by conducting extensive cross-modal retrieval experiments on three standard benchmark datasets. The results show that the proposed approach achieves state of the art retrieval performance on the three datasets.",
"title": ""
},
{
"docid": "65ffbc6ee36ae242c697bb81ff3be23a",
"text": "Full-duplex hands-free telecommunication systems employ an acoustic echo canceler (AEC) to remove the undesired echoes that result from the coupling between a loudspeaker and a microphone. Traditionally, the removal is achieved by modeling the echo path impulse response with an adaptive finite impulse response (FIR) filter and subtracting an echo estimate from the microphone signal. It is not uncommon that an adaptive filter with a length of 50-300 ms needs to be considered, which makes an AEC highly computationally expensive. In this paper, we propose an echo suppression algorithm to eliminate the echo effect. Instead of identifying the echo path impulse response, the proposed method estimates the spectral envelope of the echo signal. The suppression is done by spectral modification-a technique originally proposed for noise reduction. It is shown that this new approach has several advantages over the traditional AEC. Properties of human auditory perception are considered, by estimating spectral envelopes according to the frequency selectivity of the auditory system, resulting in improved perceptual quality. A conventional AEC is often combined with a post-processor to reduce the residual echoes due to minor echo path changes. It is shown that the proposed algorithm is insensitive to such changes. Therefore, no post-processor is necessary. Furthermore, the new scheme is computationally much more efficient than a conventional AEC.",
"title": ""
},
{
"docid": "87343436b0ea16f9683360fd84506331",
"text": "Accurate measurements of soil macronutrients (i.e., nitrogen, phosphorus, and potassium) are needed for efficient agricultural production, including site-specific crop management (SSCM), where fertilizer nutrient application rates are adjusted spatially based on local requirements. Rapid, non-destructive quantification of soil properties, including nutrient levels, has been possible with optical diffuse reflectance sensing. Another approach, electrochemical sensing based on ion-selective electrodes or ion-selective field effect transistors, has been recognized as useful in real-time analysis because of its simplicity, portability, rapid response, and ability to directly measure the analyte with a wide range of sensitivity. Current sensor developments and related technologies that are applicable to the measurement of soil macronutrients for SSCM are comprehensively reviewed. Examples of optical and electrochemical sensors applied in soil analyses are given, while advantages and obstacles to their adoption are discussed. It is proposed that on-the-go vehicle-based sensing systems have potential for efficiently and rapidly characterizing variability of soil macronutrients within a field.",
"title": ""
},
{
"docid": "70bee569e694c92b79bd5e7dc586cbdc",
"text": "Synchronous reluctance machines (SynRM) have been used widely in industries for instance, in ABB's new VSD product package based on SynRM technology. It is due to their unique merits such as high efficiency, fast dynamic response, and low cost. However, considering the major requirements for traction applications such as high torque and power density, low torque ripple, wide speed range, proper size, and capability of meeting a specific torque envelope, this machine is still under investigation to be developed for traction applications. Since the choice of motor for traction is generally determined by manufacturers with respect to three dominant factors: cost, weight, and size, the SynRM can be considered a strong alternative due to its high efficiency and lower cost. Hence, the machine's proper size estimation is a major step of the design process before attempting the rotor geometry design. This is crucial in passenger vehicles in which compactness is a requirement and the size and weight are indeed the design limitations. This paper presents a methodology for sizing a SynRM. The electric and magnetic parameters of the proposed machine in conjunction with the core dimensions are calculated. Then, the proposed method's validity and evaluation are done using FE analysis.",
"title": ""
},
{
"docid": "c8482ed26ba2c4ba1bd3eed6ac0e00b4",
"text": "Virtual Reality (VR) has now emerged as a promising tool in many domains of therapy and rehabilitation (Rizzo, Schultheis, Kerns & Mateer, 2004; Weiss & Jessel, 1998; Zimand, Anderson, Gershon, Graap, Hodges, & Rothbaum, 2002; Glantz, Rizzo & Graap, 2003). Continuing advances in VR technology along with concomitant system cost reductions have supported the development of more usable, useful, and accessible VR systems that can uniquely target a wide range of physical, psychological, and cognitive rehabilitation concerns and research questions. What makes VR application development in the therapy and rehabilitation sciences so distinctively important is that it represents more than a simple linear extension of existing computer technology for human use. VR offers the potential to create systematic human testing, training and treatment environments that allow for the precise control of complex dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking and performance recording is possible. Much like an aircraft simulator serves to test and train piloting ability, virtual environments (VEs) can be developed to present simulations that assess and rehabilitate human functional performance under a range of stimulus conditions that are not easily deliverable and controllable in the real world. When combining these assets within the context of functionally relevant, ecologically enhanced VEs, a fundamental advancement could emerge in how human functioning can be addressed in many rehabilitation disciplines.",
"title": ""
},
{
"docid": "115e2a6c5f8fdd3a8a720fcdf0cf3a6d",
"text": "In this work we present an Artificial Neural Network (ANN) approach to predict stock market indices. In particular, we focus our attention on their trend movement up or down. We provide results of experiments exploiting different Neural Networks architectures, namely the Multi-layer Perceptron (MLP), the Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) recurrent neural networks technique. We show importance of choosing correct input features and their preprocessing for learning algorithm. Finally we test our algorithm on the S&P500 and FOREX EUR/USD historical time series, predicting trend on the basis of data from the past n days, in the case of S&P500, or minutes, in the FOREX framework. We provide a novel approach based on combination of wavelets and CNN which outperforms basic neural networks approaches. Key–Words: Artificial neural networks, Multi-layer neural network, Convolutional neural network, Long shortterm memory, Recurrent neural network, Deep Learning, Stock markets, Time series analysis, financial forecasting",
"title": ""
},
{
"docid": "b3c9bc55f5a9d64a369ec67e1364c4fc",
"text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.",
"title": ""
},
{
"docid": "da3634b5a14829b22546389e56425017",
"text": "Homomorphic encryption (HE)—the ability to perform computations on encrypted data—is an attractive remedy to increasing concerns about data privacy in the field of machine learning. However, building models that operate on ciphertext is currently labor-intensive and requires simultaneous expertise in deep learning, cryptography, and software engineering. Deep learning frameworks, together with recent advances in graph compilers, have greatly accelerated the training and deployment of deep learning models to a variety of computing platforms. Here, we introduce nGraph-HE, an extension of the nGraph deep learning compiler, which allows data scientists to deploy trained models with popular frameworks like TensorFlow, MXNet and PyTorch directly, while simply treating HE as another hardware target. This combination of frameworks and graph compilers greatly simplifies the development of privacy-preserving machine learning systems, provides a clean abstraction barrier between deep learning and HE, allows HE libraries to exploit HE-specific graph optimizations, and comes at a low cost in runtime overhead versus native HE operations.",
"title": ""
},
{
"docid": "fd3dd59550806b93a625f6e6750e888f",
"text": "Location-based services have become widely available on mobile devices. Existing methods employ a pull model or user-initiated model, where a user issues a query to a server which replies with location-aware answers. To provide users with instant replies, a push model or server-initiated model is becoming an inevitable computing model in the next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. This calls for a high-performance location-aware publish/subscribe system to deliver publishers' messages to relevant subscribers.In this paper, we address the research challenges that arise in designing a location-aware publish/subscribe system. We propose an rtree based index structure by integrating textual descriptions into rtree nodes. We devise efficient filtering algorithms and develop effective pruning techniques to improve filtering efficiency. Experimental results show that our method achieves high performance. For example, our method can filter 500 tweets in a second for 10 million registered subscriptions on a commodity computer.",
"title": ""
},
{
"docid": "b62b8862d26e5ce5bcbd2b434aff5d0e",
"text": "In this demo paper we present Docear's research paper recommender system. Docear is an academic literature suite to search, organize, and create research articles. The users' data (papers, references, annotations, etc.) is managed in mind maps and these mind maps are utilized for the recommendations. Using content-based filtering methods, Docear's recommender achieves click-through rates around 6%, in some scenarios even over 10%.",
"title": ""
},
{
"docid": "ed0d234b961befcffab751f70f5c5fdb",
"text": "UNLABELLED\nA challenging aspect of managing patients on venoarterial extracorporeal membrane oxygenation (V-A ECMO) is a thorough understanding of the relationship between oxygenated blood from the ECMO circuit and blood being pumped from the patient's native heart. We present an adult V-A ECMO case report, which illustrates a unique encounter with the concept of \"dual circulations.\" Despite blood gases from the ECMO arterial line showing respiratory acidosis, this patient with cardiogenic shock demonstrated regional respiratory alkalosis when blood was sampled from the right radial arterial line. In response, a sample was obtained from the left radial arterial line, which mimicked the ECMO arterial blood but was dramatically different from the blood sampled from the right radial arterial line. A retrospective analysis of patient data revealed that the mismatch of blood gas values in this patient corresponded to an increased pulse pressure. Having three arterial blood sampling sites and data on the patient's pulse pressure provided a dynamic view of blood mixing and guided proper management, which contributed to a successful patient outcome that otherwise may not have occurred. As a result of this unique encounter, we created and distributed graphics representing the concept of \"dual circulations\" to facilitate the education of ECMO specialists at our institution.\n\n\nKEYWORDS\nECMO, education, cardiopulmonary bypass, cannulation.",
"title": ""
},
{
"docid": "508eb69a9e6b0194fbda681439e404c4",
"text": "Price forecasting is becoming increasingly relevant to producers and consumers in the new competitive electric power markets. Both for spot markets and long-term contracts, price forecasts are necessary to develop bidding strategies or negotiation skills in order to maximize benefit. This paper provides a method to predict next-day electricity prices based on the ARIMA methodology. ARIMA techniques are used to analyze time series and, in the past, have been mainly used for load forecasting due to their accuracy and mathematical soundness. A detailed explanation of the aforementioned ARIMA models and results from mainland Spain and Californian markets are presented.",
"title": ""
},
{
"docid": "6e5792c73b34eacc7bef2c8777da5147",
"text": "Neural network machine translation systems have recently demonstrated encouraging results. We examine the performance of a recently proposed recurrent neural network model for machine translation on the task of Japanese-to-English translation. We observe that with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary. The success of this model on a small corpus warrants more investigation of its performance on a larger corpus.",
"title": ""
},
{
"docid": "9280eb309f7a6274eb9d75d898768f56",
"text": "In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "f519d349d928e7006955943043ab0eae",
"text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handing, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.",
"title": ""
},
{
"docid": "aec7ed67f393650953c5dc99d0d66a38",
"text": "BACKGROUND\nThe pes cavus deformity has been well described in the literature; relative bony positions have been determined and specific muscle imbalances have been summarized. However, we are unaware of a cadaveric model that has been used to generate this foot pathology. The purpose of this study was to create such a model for future work on surgical and conservative treatment simulation.\n\n\nMATERIALS AND METHODS\nWe used a custom designed, pneumatically actuated loading frame to apply forces to otherwise normal cadaveric feet while measuring bony motion as well as force beneath the foot. The dorsal tarsometatarsal and the dorsal intercuneiform ligaments were attenuated and three muscle imbalances, each similar to imbalances believed to cause the pes cavus deformity, were applied while bony motion and plantar forces were measured.\n\n\nRESULTS\nOnly one of the muscle imbalances (overpull of the Achilles tendon, tibialis anterior, tibialis posterior, flexor hallucis longus and flexor digitorum longus) was successful at consistently generating the changes seen in pes cavus feet. This imbalance led to statistically significant changes including hindfoot inversion, talar dorsiflexion, medial midfoot plantar flexion and inversion, forefoot plantar flexion and adduction and an increase in force on the lateral mid- and forefoot.\n\n\nCONCLUSION\nWe have created a cadaveric model that approximates the general changes of the pes cavus deformity compared to normal feet. These changes mirror the general patterns of deformity produced by several disease mechanisms.\n\n\nCLINICAL RELEVANCE\nFuture work will entail increasing the severity of the model and exploring various pes cavus treatment strategies.",
"title": ""
},
{
"docid": "b1a69a47cce9ecc51b03d8b4a306e605",
"text": "We use an innovative survey tool to collect management practice data from 732 medium sized manufacturing firms in the US and Europe (France, Germany and the UK). Our measures of managerial best practice are strongly associated with superior firm performance in terms of productivity, profitability, Tobin’s Q, sales growth and survival. We also find significant intercountry variation with US firms on average better managed than European firms, but a much greater intra-country variation with a long tail of extremely badly managed firms. This presents a dilemma – why do so many firms exist with apparently inferior management practices, and why does this vary so much across countries? We find this is due to a combination of: (i) low product market competition and (ii) family firms passing management control down to the eldest sons (primo geniture). European firms in our sample report facing lower levels of competition, and substantially higher levels of primo geniture. These two factors appear to account for around half of the long tail of badly managed firms and half of the average US-Europe gap in management performance.",
"title": ""
},
{
"docid": "745451b3ca65f3388332232b370ea504",
"text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.",
"title": ""
},
{
"docid": "c18cec45829e4aec057443b9da0eeee5",
"text": "This paper presents a synthesis on the application of fuzzy integral as an innovative tool for criteria aggregation in decision problems. The main point is that fuzzy integrals are able to model interaction between criteria in a flexible way. The methodology has been elaborated mainly in Japan, and has been applied there successfully in various fields such as design, reliability, evaluation of goods, etc. It seems however that this technique is still very little known in Europe. It is one of the aim of this review to disseminate this emerging technology in many industrial fields.",
"title": ""
}
] |
scidocsrr
|
e1604f5dfdba7acaf3ab611b6798c8ec
|
Practicing Differential Privacy in Health Care: A Review
|
[
{
"docid": "27d1a769a678c50fad957bbd832212b5",
"text": "The problem of privately releasing data is to provide a version of a dataset without revealing sensitive information about the individuals who contribute to the data. The model of differential privacy allows such private release while providing strong guarantees on the output. A basic mechanism achieves differential privacy by adding noise to the frequency counts in the contingency tables (or, a subset of the count data cube) derived from the dataset. However, when the dataset is sparse in its underlying space, as is the case for most multi-attribute relations, then the effect of adding noise is to vastly increase the size of the published data: it implicitly creates a huge number of dummy data points to mask the true data, making it almost impossible to work with. We present techniques to overcome this roadblock and allow efficient private release of sparse data, while maintaining the guarantees of differential privacy. Our approach is to release a compact summary of the noisy data. Generating the noisy data and then summarizing it would still be very costly, so we show how to shortcut this step, and instead directly generate the summary from the input data, without materializing the vast intermediate noisy data. We instantiate this outline for a variety of sampling and filtering methods, and show how to use the resulting summary for approximate, private, query answering. Our experimental study shows that this is an effective, practical solution: in some examples we generate output that is 1000 times smaller than the naive method, in less than 1% of the time while achieving comparable and occasionally improved utility over the costly materialization approach.",
"title": ""
},
{
"docid": "e49aa0d0f060247348f8b3ea0a28d3c6",
"text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"title": ""
},
{
"docid": "6b5c3a9f31151ef62f19085195ff5fc5",
"text": "We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy.\n Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy.\n We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise.\n We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides.",
"title": ""
}
] |
[
{
"docid": "a050ae6738a8c511b8942deb19155b7c",
"text": "Electrocardiogram (ECG) measurement without skin-contact is essential for u-healthcare. ECG measurement using capacitive-coupled electrode (CC-electrode) is a well-known method for unconstrained ECG measurement. Although the CC-electrode has the advantage of non-contact measurement, common mode noise is increased, which decreases the signal-to-noise ratio (SNR). In this study, we proposed non-contact ECG measurement system using CC-electrode and driven circuit to reduce noise. The components of driven circuit were similar to those of driven-right-leg circuit and conductive sheet was employed for driven electrode to contact uniformly to the body over clothes. We evaluated the performance of the driven circuit under different conditions, including a contact area to the body and a gain of the driven circuit to find out a relationship between them and the SNR of ECG. As the results, as contact area became larger and gain became higher, SNR increased.",
"title": ""
},
{
"docid": "2833dbe3c3e576a3ba8f175a755b6964",
"text": "The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its important, traffic monitoring often introduces overhead to the network, thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction based algorithm that dynamically change the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules in the network wide. Using real-world data and three simple anomaly detectors, we show that the adaptive based counting can detect anomalies more accurately with less overhead.",
"title": ""
},
{
"docid": "3cc0707cec7af22db42e530399e762a8",
"text": "While watching television, people increasingly consume additional content related to what they are watching. We consider the task of finding video content related to a live television broadcast for which we leverage the textual stream of subtitles associated with the broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our dynamic query modeling approach significantly outperforms state-of-the-art baselines for stationary query modeling and for text-based retrieval in a television setting. In particular we find that carefully weighting terms and decaying these weights based on recency significantly improves effectiveness. Moreover, our method is highly efficient and can be used in a live television setting, i.e., in near real time.",
"title": ""
},
{
"docid": "fd94c0639346e760cf2c19aab7847270",
"text": "During the last two decades, a great number of applications for the dc-to-dc converters have been reported [1]. Many applications are found in computers, telecommunications, aeronautics, commercial, and industrial applications. The basic topologies buck, boost, and buck-boost, are widely used in the dc-to-dc conversion. These converters, as well as other converters, provide low voltages and currents for loads at a constant switching frequency. In recent years, there has been a need for wider conversion ratios with a corresponding reduction in size and weight. For example, advances in the field of semiconductors have motivated the development of new integrated circuits, which require 3.3 or 1.5 V power supplies. The automotive industry is moving from 12 V (14 V) to 36 V (42 V), the above is due to the electric-electronic load in automobiles has been growing rapidly and is starting to exceed the practical capacity of present-day electrical systems. Today, the average 12 V (14 V) load is between 750 W to 1 kW, while the peak load can be 2 kW, depending of the type of car and its accessories. By 2005, peak loads above 2 kW, even as high as 12 kW, will be common. To address this challenge, it is widely agreed that a",
"title": ""
},
{
"docid": "4aa0f3a526c1ca44ab84ebd2e8fc4dc6",
"text": "Blockchain is so far well-known for its potential applications in financial and banking sectors. However, blockchain as a decentralized and distributed technology can be utilized as a powerful tool for immense daily life applications. Healthcare is one of the prominent applications area among others where blockchain is supposed to make a strong impact. It is generating wide range of opportunities and possibilities in current healthcare systems. Therefore, this paper is all about exploring the potential applications of blockchain technology in current healthcare systems and highlights the most important requirements to fulfill the need of such systems such as trustless and transparent healthcare systems. In addition, this work also presents the challenges and obstacles needed to resolve before the successful adoption of blockchain technology in healthcare systems. Furthermore, we introduce the smart contract for blockchain based healthcare systems which is key for defining the pre-defined agreements among various involved stakeholders.",
"title": ""
},
{
"docid": "e0cd28f3b36cd83c556cb829aab782d3",
"text": "In this work we present In-Place Activated Batch Normalization (INPLACE-ABN) - a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only minor increase (0.8-2%) in computation time. Also, we demonstrate how frequently used checkpointing approaches can be made computationally as efficient as INPLACE-ABN. In our experiments on image classification, we demonstrate on-par results on ImageNet-1k with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report competitive results for COCO-Stuff and set new state-of-the-art results for Cityscapes and Mapillary Vistas. Code can be found at https://github.com/mapillary/inplace_abn.",
"title": ""
},
{
"docid": "9cf48e5fa2cee6350ac31f236696f717",
"text": "Komatiites are rare ultramafic lavas that were produced most commonly during the Archean and Early Proterozoic and less frequently in the Phanerozoic. These magmas provide a record of the thermal and chemical characteristics of the upper mantle through time. The most widely cited interpretation is that komatiites were produced in a plume environment and record high mantle temperatures and deep melting pressures. The decline in their abundance from the Archean to the Phanerozoic has been interpreted as primary evidence for secular cooling (up to 500‡C) of the mantle. In the last decade new evidence from petrology, geochemistry and field investigations has reopened the question of the conditions of mantle melting preserved by komatiites. An alternative proposal has been rekindled: that komatiites are produced by hydrous melting at shallow mantle depths in a subduction environment. This alternative interpretation predicts that the Archean mantle was only slightly (V100‡C) hotter than at present and implicates subduction as a process that operated in the Archean. Many thermal evolution and chemical differentiation models of the young Earth use the plume origin of komatiites as a central theme in their model. Therefore, this controversy over the mechanism of komatiite generation has the potential to modify widely accepted views of the Archean Earth and its subsequent evolution. This paper briefly reviews some of the pros and cons of the plume and subduction zone models and recounts other hypotheses that have been proposed for komatiites. We suggest critical tests that will improve our understanding of komatiites and allow us to better integrate the story recorded in komatiites into our view of early Earth evolution. 6 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4a54e98bfbe66b7733e824ba8d093a66",
"text": "The radiation behavior of the fractional-order, resonant mode within a circular sector cavity radiator is revealed at first, and then, it is employed to design a novel, planar quasi-isotropic magnetic dipole antenna. A set of closed-form formulas is derived and employed to determine the key parameters of the proposed antenna. The resultant circular sector magnetic dipole antenna operates at its dominant TM(2/3), 1 mode. It is numerically verified and experimentally validated at the 2.45-GHz band. It is seen that the antenna exhibits a good non-uniformity of less than 5.7 dB within the three principal planes, and an average radiation efficiency up to 82% within its impedance bandwidth from 2.4 to 2.5 GHz (for reflection coefficient smaller than −10 dB). Good agreement between the theoretical, simulated, and measured results has evidently verified the proposed antenna design approach.",
"title": ""
},
{
"docid": "17fcb38734d6525f2f0fa3ee6c313b43",
"text": "The increasing generation and collection of personal data h as created a complex ecosystem, often collaborative but som etimes combative, around companies and individuals engaging in th e use of these data. We propose that the interactions between these agents warrants a new topic of study: Human-Data Inter action (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer s cience, statistics, sociology, psychology and behavioura l economics. We expose the challenges that HDI raises, organised into thr ee core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interest ed parties in the personal and big data ecosystems.",
"title": ""
},
{
"docid": "06107b781329d004deb228e100d33d2d",
"text": "This manuscript examines the measurement instrument developed from the ability model of EI (Mayer and Salovey, 1997), the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer, Salovey and Caruso, 2002). The four subtests, scoring methods, psychometric properties, reliability, and factor structure of the MSCEIT are discussed, with a special focus on the discriminant, convergent, predictive, and incremental validity of the test. The authors review associations between MSCEIT scores and important outcomes such as academic performance, cognitive processes, psychological well-being, depression, anxiety, prosocial and maladaptive behavior, and leadership and organizational behavior. Findings regarding the low correlations between MSCEIT scores and self-report measures of EI also are presented. In the conclusion the authors' provide potential directions for future research on emotional intelligence.",
"title": ""
},
{
"docid": "18d28769691fb87a6ebad5aae3eae078",
"text": "The current head Injury Assessment Reference Values (IARVs) for the child dummies are based in part on scaling adult and animal data and on reconstructions of real world accident scenarios. Reconstruction of well-documented accident scenarios provides critical data in the evaluation of proposed IARV values, but relatively few accidents are sufficiently documented to allow for accurate reconstructions. This reconstruction of a well documented fatal-fall involving a 23-month old child supplies additional data for IARV assessment. The videotaped fatal-fall resulted in a frontal head impact onto a carpet-covered cement floor. The child suffered an acute right temporal parietal subdural hematoma without skull fracture. The fall dynamics were reconstructed in the laboratory and the head linear and angular accelerations were quantified using the CRABI-18 Anthropomorphic Test Device (ATD). Peak linear acceleration was 125 ± 7 g (range 114-139), HIC15 was 335 ± 115 (Range 257-616), peak angular velocity was 57± 16 (Range 26-74), and peak angular acceleration was 32 ± 12 krad/s 2 (Range 15-56). The results of the CRABI-18 fatal fall reconstruction were consistent with the linear and rotational tolerances reported in the literature. This study investigates the usefulness of the CRABI-18 anthropomorphic testing device in forensic investigations of child head injury and aids in the evaluation of proposed IARVs for head injury. INTRODUCTION Defining the mechanisms of injury and the associated tolerance of the pediatric head to trauma has been the focus of a great deal of research and effort. In contrast to the multiple cadaver experimental studies of adult head trauma published in the literature, there exist only a few experimental studies of infant head injury using human pediatric cadaveric tissue [1-6]. While these few studies have been very informative, due to limitations in sample size, experimental equipment, and study objectives, current estimates of the tolerance of the pediatric head are based on relatively few pediatric cadaver data points combined with the use of scaled adult and animal data. In effort to assess and refine these tolerance estimates, a number of researchers have performed detailed accident reconstructions of well-documented injury scenarios [7-11] . The reliability of the reconstruction data are predicated on the ability to accurately reconstruct the actual accident and quantify the result in a useful injury metric(s). These resulting injury metrics can then be related to the injuries of the child and this, when combined with other reliable reconstructions, can form an important component in evaluating pediatric injury mechanisms and tolerance. Due to limitations in case identification, data collection, and resources, relatively few reconstructions of pediatric accidents have been performed. In this study, we report the results of the reconstruction of an uncharacteristically well documented fall resulting in a fatal head injury of a 23 month old child. The case study was previously reported as case #5 by Plunkett [12]. BACKGROUND As reported by Plunkett (2001), A 23-month-old was playing on a plastic gym set in the garage at her home with her older brother. She had climbed the attached ladder to the top rail above the platform and was straddling the rail, with her feet 0.70 meters (28 inches) above the floor. She lost her balance and fell headfirst onto a 1-cm (3⁄8-inch) thick piece of plush carpet remnant covering the concrete floor. 
She struck the carpet first with her outstretched hands, then with the right front side of her forehead, followed by her right shoulder. Her grandmother had been watching the children play and videotaped the fall. She cried after the fall but was alert",
"title": ""
},
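The reconstruction above reports peak accelerations and HIC15 values. As a rough illustration of how a HIC-style metric is computed from a sampled resultant-acceleration trace, here is a minimal Python sketch; the half-sine pulse below is invented for the example and is not the CRABI-18 data.

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """Head Injury Criterion from sample times (s) and resultant acceleration (g).

    HIC = max over windows [t1, t2] with t2 - t1 <= max_window of
          (t2 - t1) * (average acceleration over the window) ** 2.5
    """
    # Cumulative trapezoidal integral of a(t), so window averages are cheap to form.
    cum = [0.0]
    for k in range(1, len(t)):
        cum.append(cum[-1] + 0.5 * (a[k] + a[k - 1]) * (t[k] - t[k - 1]))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Hypothetical 10 ms half-sine impact pulse sampled at 10 kHz (illustrative only).
t = np.linspace(0.0, 0.02, 201)
a = 125.0 * np.sin(np.pi * t / 0.01) * (t <= 0.01)
print(round(hic(t, a), 1))
```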
{
"docid": "8056b29e7b39dee06f04b738807a53f9",
"text": "This paper proposes a novel topology of a multiport DC/DC converter composed of an H-bridge inverter, a high-frequency galvanic isolation transformer, and a combined circuit with a current-doubler and a buck chopper. The topology has lower conduction loss by multiple current paths and smaller output capacitors by means of an interleave operation. Results of computer simulations and experimental tests show proper operations and feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "bc35cf4a278b9e764a8a521507bd68d4",
"text": "Radio Frequency Identification (RFID) and Near Field Communication (NFC) technology is popular for item tracking and secured communication system. Both technologies make use of 13.56MHz radio wave and loop antenna.",
"title": ""
},
{
"docid": "ae1f75aa978fd702be9b203487269517",
"text": "This paper presents a system that performs skill extraction from text documents. It outputs a list of professional skills that are relevant to a given input text. We argue that the system can be practical for hiring and management of personnel in an organization. We make use of the texts and the hyperlink graph of Wikipedia, as well as a list of professional skills obtained from the LinkedIn social network. The system is based on first computing similarities between an input document and the texts of Wikipedia pages and then using a biased, hub-avoiding version of the Spreading Activation algorithm on the Wikipedia graph in order to associate the input document with skills.",
"title": ""
},
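The skill-extraction passage above combines text similarity with a spreading-activation pass over the Wikipedia hyperlink graph. A toy sketch of plain spreading activation over an adjacency list is shown below; the hub-avoiding bias and the Wikipedia and LinkedIn data themselves are not reproduced, and the graph and seed scores are invented.

```python
def spread_activation(graph, seeds, decay=0.5, iterations=3):
    """Propagate activation from seed nodes over a directed graph.

    graph: dict mapping node -> list of neighbour nodes.
    seeds: dict mapping node -> initial activation (e.g. text-similarity scores).
    """
    activation = dict(seeds)
    for _ in range(iterations):
        incoming = {}
        for node, score in activation.items():
            out = graph.get(node, [])
            if not out:
                continue
            share = decay * score / len(out)   # split decayed activation over out-links
            for nb in out:
                incoming[nb] = incoming.get(nb, 0.0) + share
        for nb, extra in incoming.items():
            activation[nb] = activation.get(nb, 0.0) + extra
    return activation

# Hypothetical miniature graph of pages linking toward skill pages.
graph = {"doc_topic": ["Python", "Statistics"], "Python": ["Programming"], "Statistics": []}
print(spread_activation(graph, {"doc_topic": 1.0}))
```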
{
"docid": "680e9f3b5aeb02822c8889044517f2ec",
"text": "Currently, there are many large, automatically constructed knowledge bases (KBs). One interesting task is learning from a knowledge base to generate new knowledge either in the form of inferred facts or rules that define regularities. One challenge for learning is that KBs are necessarily open world: we cannot assume anything about the truth values of tuples not included in the KB. When a KB only contains facts (i.e., true statements), which is typically the case, we lack negative examples, which are often needed by learning algorithms. To address this problem, we propose a novel score function for evaluating the quality of a first-order rule learned from a KB. Our metric attempts to include information about the tuples not in the KB when evaluating the quality of a potential rule. Empirically, we find that our metric results in more precise predictions than previous approaches.",
"title": ""
},
{
"docid": "fc0470776583df8b25114abc8709b045",
"text": "Certified Registered Nurse Anesthetists (CRNAs) have been providing anesthesia care in the United States (US) for nearly 150 years. Historically, anesthesia care for surgical patients was mainly provided by trained nurses under the supervision of surgeons until the establishment of anesthesiology as a medical specialty in the US. Currently, all 50 US states utilize CRNAs to perform various kinds of anesthesia care, either under the medical supervision of anesthesiologists in most states, or independently without medical supervision in 16 states; the latter has become an on-going source of conflict between anesthesiologists and CRNAs. Understanding the history and current conditions of anesthesia practice in the US is crucial for countries in which the shortage of anesthesia care providers has become a national issue.",
"title": ""
},
{
"docid": "680523e1eaa7abb7556655313875d353",
"text": "Our aim in this paper is to clarify the range of motivations that have inspired the development of computer programs for the composition of music. We consider this to be important since different methodologies are appropriate for different motivations and goals. We argue that a widespread failure to specify the motivations and goals involved has lead to a methodological malaise in music related research. A brief consideration of some of the earliest attempts to produce computational systems for the composition of music leads us to identify four activities involving the development of computer programs which compose music each of which is inspired by different practical or theoretical motivations. These activities are algorithmic composition, the design of compositional tools, the computational modelling of musical styles and the computational modelling of music cognition. We consider these four motivations in turn, illustrating the problems that have arisen from failing to distinguish between them. We propose a terminology that clearly differentiates the activities defined by the four motivations and present methodological suggestions for research in each domain. While it is clearly important for researchers to embrace developments in related disciplines, we argue that research in the four domains will continue to stagnate unless the motivations and aims of research projects are clearly stated and appropriate methodologies are adopted for developing and evaluating systems that compose music.",
"title": ""
},
{
"docid": "1e6c497fe53f8cba76bd8b432c618c1f",
"text": "inputs into digital (down or up), analog (-1.0 to 1.0), and positional (touch and • mouse cursor). By building on a solid main loop you can easily add support for detecting chorded inputs and sequence inputs.",
"title": ""
},
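The fragment above argues that a solid main loop makes chorded and sequence inputs easy to layer on top of basic digital, analog, and positional input. A minimal sketch of per-frame chord and sequence detection over digital button states, with invented button names and timing, could look like this:

```python
import time

held = set()                                  # buttons currently down, fed by event handlers
CHORDS = {frozenset({"ctrl", "shift"}): "screenshot"}
SEQUENCE = ["up", "up", "down"]               # example input sequence to detect
recent = []                                   # recent presses as (button, timestamp)

def on_press(button, now):
    held.add(button)
    recent.append((button, now))

def on_release(button):
    held.discard(button)

def poll(now, window=0.5):
    """Call once per main-loop frame: report chords and sequences."""
    for chord, action in CHORDS.items():
        if chord <= held:                     # every button of the chord is held
            print("chord:", action)
    while recent and now - recent[0][1] > window:
        recent.pop(0)                         # forget presses older than the window
    if [b for b, _ in recent][-len(SEQUENCE):] == SEQUENCE:
        print("sequence detected")

# Hypothetical frame: simulate a chord press.
on_press("ctrl", time.time()); on_press("shift", time.time()); poll(time.time())
```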
{
"docid": "c35341d3b82dd4921e752b4b774cd501",
"text": "The initial concept of a piezoelectric transformer (PT) was proposed by C.A. Rosen, K. Fish, and H.C. Rothenberg and is described in the U.S. Patent 2,830,274, applied for in 1954. Fifty years later, this technology has become one of the most promising alternatives for replacing the magnetic transformers in a wide range of applications. Piezoelectric transformers convert electrical energy into electrical energy by using acoustic energy. These devices are typically manufactured using piezoelectric ceramic materials that vibrate in resonance. With appropriate designs it is possible to step-up and step-down the voltage between the input and output of the piezoelectric transformer, without making use of wires or any magnetic materials. This technology did not reach commercial success until early the 90s. During this period, several companies, mainly in Japan, decided to introduce PTs for applications requiring small size, high step-up voltages, and low electromagnetic interference (EMI) signature. These PTs were developed based on optimizations of the initial Rosen concept, and thus typically referred to as “Rosen-type PTs”. Today’s, PTs are used for backlighting LCD displays in notebook computers, PDAs, and other handheld devices. The PT yearly sales estimate was about over 20 millions in 2000 and industry sources report that production of piezoelectric transformers in Japan is growing steadily at a rate of 10% annually. The reliability achieved in LCD applications and the advances in the related technologies (materials, driving circuitry, housing and manufacturing) have currently spurred enormous interest and confidence in expanding this technology to other fields of application. This, consequently, is expanding the business opportunities for PTs. Currently, the industry trend is moving in two directions: low-cost product market and valueadded product market. Prices of PTs have been declining in recent years, and this trend is expected to continue. Soon (if not already), this technology will become a serious candidate for replacing the magnetic transformers in cost-sensitive applications. Currently, leading makers are reportedly focusing on more value-added products. Two of the key value-added areas are miniaturization and higher output power. Piezoelectric transformers for power applications require lower output impedances, high power capabilities and high efficiency under step-down conditions. Among the different PT designs proposed as alternatives to the classical Rosen configuration, Transoner laminated radial PT has been demonstrated as the most promising technology for achieving high power levels. Higher powers than 100W, with power densities in the range of 30-40 W/cm2 have been demonstrated. Micro-PTs are currently being developed with sizes of less than 5mm diameter and 1mm thickness allowing up to 0.5W power transfer and up to 50 times gain. Smaller sizes could be in the future integrated to power MEMs systems. This paper summarizes the state of the art on the PT technology and introduces the current trends of this industry. HISTORICAL INTRODUCTION It has been 50 years since the development of piezoelectric ceramic transformers began. The first invention on piezoelectric transformers (PTs) has been traditionally associated with the patent of Charles A. Rosen et al., which was disclosed on January 4, 1954 and finally granted on April 8, 1958 [1]. Briefly after this first application, on September 17, 1956, H.Jaffe and Don A. 
Berlincourt, on behalf of the Clevite Companies, applied for the second patent on PT technology, which was granted on Jan. 24, 1961 [2]. Since then, PT technology has been growing simultaneously with the progress in piezoceramic technology as well as with electronics in general. Currently, it is estimated that 25-30 million PTs are sold commercially each year for different applications. Thus, the growth of the technology is promising and is expected to expand to many other areas as an alternative to magnetic transformers. In an attempt to be historically accurate, it should be mentioned that the first studies on PTs took place in the late 1920s and early 1930s. Based on the research of the author of this paper, Alexander McLean Nicolson has the honor of being the first researcher to consider the idea of a piezoelectric transformer. In his patent US1829234, titled “Piezo-electric crystal transformer” [3], Nicolson describes the first research in this field. The work of Nicolson on piezoelectric transformers, recognized in several other patents [4], was limited to the use of piezoelectric crystals, with obvious limitations in performance, design and applicability as compared to the later developed piezoceramic materials. Piezoelectric transformers (from now on referred to as piezoelectric ceramic transformers), like magnetic devices, are basically energy converters. A magnetic transformer operates by converting electrical input to magnetic energy and then reconverting that magnetic energy back to electrical output. A PT has an analogous operating mechanism. It converts an electrical input into mechanical energy and subsequently reconverts this mechanical energy back to an electrical output. This mechanical conversion is achieved by a standing wave vibrating at a frequency equal to a multiple of the mechanical resonance frequency of the transformer body, which is typically in the range of 50 to 150 kHz. Recently, PTs operating at 1 MHz and higher have also been proposed. Piezoelectric transformers were initially considered as high-voltage transformer devices. Two different designs driving the initial steps in the development of these “conventional” PTs were the so-called Rosen-type PT designs and the contour extensional mode uni-poled PTs. Until the early 1990s, the technology evolution was based on improvements in these two basic designs. Although Rosen proposed several types of PT embodiments in his patents and publications, the name “Rosen-type PT” currently refers to those PTs representing an evolution of the initial rectangular design idea proposed by C. Rosen in 1954, as shown in Figure 1.",
"title": ""
}
] |
scidocsrr
|
34bbcfce1c78182b1dd68e8efb7849e3
|
Anomaly detection using baseline and K-means clustering
|
[
{
"docid": "fdc903a98097de8b7533b3e2fe209863",
"text": "As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems—the cyberspace’s equivalent to the burglar alarm—join ranks with firewalls as one of the fundamental technologies for network security. However, today’s commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ‘‘zero day’’ attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "34993e22f91f3d5b31fe0423668a7eb1",
"text": "K-means as a clustering algorithm has been studied in intrusion detection. However, with the deficiency of global search ability it is not satisfactory. Particle swarm optimization (PSO) is one of the evolutionary computation techniques based on swarm intelligence, which has high global search ability. So K-means algorithm based on PSO (PSO-KM) is proposed in this paper. Experiment over network connection records from KDD CUP 1999 data set was implemented to evaluate the proposed method. A Bayesian classifier was trained to select some fields in the data set. The experimental results clearly showed the outstanding performance of the proposed method",
"title": ""
}
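A plain (non-PSO) illustration of the clustering idea behind the passage above: fit K-means on baseline traffic features and flag records that fall far from every baseline centroid as anomalous. The feature vectors and threshold choice here are invented and are not taken from the KDD CUP 1999 setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, size=(500, 4))           # hypothetical normal-traffic features
test = np.vstack([rng.normal(0, 1, size=(20, 4)),
                  rng.normal(6, 1, size=(5, 4))])    # last 5 rows simulate attack traffic

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(baseline)

# Distance of each record to its nearest baseline centroid.
def nearest_dist(x):
    return np.min(np.linalg.norm(x[:, None, :] - km.cluster_centers_[None, :, :], axis=2), axis=1)

threshold = np.percentile(nearest_dist(baseline), 99)   # tail of the baseline distances
print("flagged as anomalous:", np.where(nearest_dist(test) > threshold)[0])
```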
] |
[
{
"docid": "514dd8425b91525cab1631ff8c358bbb",
"text": "Embedded programming is typically made accessible through modular electronics toolkits. In this paper, we explore an alternative approach, combining microcontrollers with craft materials and processes as a means of bringing new groups of people and skills to technology production. We have developed simple and robust techniques for drawing circuits with conductive ink on paper, enabling off-the-shelf electronic components to be embedded directly into interactive artifacts. We have also developed an set of hardware and software tools -- an instance of what we call an \"untoolkit\" -- to provide an accessible toolchain for the programming of microcontrollers. We evaluated our techniques in a number of workshops, one of which is detailed in the paper. Four broader themes emerge: accessibility and appeal, the integration of craft and technology, microcontrollers vs. electronic toolkits, and the relationship between programming and physical artifacts. We also expand more generally on the idea of an untoolkit, offering a definition and some design principles, as well as suggest potential areas of future research.",
"title": ""
},
{
"docid": "692adf7c8f656823a41b72350cf06269",
"text": "Mindfulness-based interventions are increasingly used in the treatment and prevention of mental health conditions. Despite this, the mechanisms of change for such interventions are only beginning to be understood, with a number of recent studies assessing changes in brain activity. The aim of this systematic review was to assess changes in brain functioning associated with manualised 8-session mindfulness interventions. Searches of PubMed and Scopus databases resulted in 39 papers, 7 of which were eligible for inclusion. The most consistent longitudinal effect observed was increased insular cortex activity following mindfulness-based interventions. In contrast to previous reviews, we did not find robust evidence for increased activity in specific prefrontal cortex sub-regions. These findings suggest that mindfulness interventions are associated with changes in functioning of the insula, plausibly impacting awareness of internal reactions 'in-the-moment'. The studies reviewed here demonstrated a variety of effects across populations and tasks, pointing to the need for greater consistency in future study design.",
"title": ""
},
{
"docid": "0153774b49121d8735cc3d33df69fc00",
"text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.",
"title": ""
},
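GHTorrent mirrors data exposed by GitHub's REST API. Purely as an illustration of the kind of raw data involved, the sketch below fetches one page of a repository's public events via the REST events endpoint; the repository chosen is arbitrary and unauthenticated requests are rate-limited.

```python
import requests

def fetch_events(owner, repo, per_page=30):
    """Fetch one page of public events for a repository via the GitHub REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/events"
    resp = requests.get(url, params={"per_page": per_page},
                        headers={"Accept": "application/vnd.github+json"})
    resp.raise_for_status()
    return resp.json()

# Example (arbitrary public repository):
for ev in fetch_events("torvalds", "linux")[:5]:
    print(ev["type"], ev["created_at"])
```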
{
"docid": "25c8d687e6044ae734270bb0d7fd8868",
"text": "Continual learning broadly refers to the algorithms which aim to learn continuously over time across varying domains, tasks or data distributions. This is in contrast to algorithms restricted to learning a fixed number of tasks in a given domain, assuming a static data distribution. In this survey we aim to discuss a wide breadth of challenges faced in a continual learning setup and review existing work in the area. We discuss parameter regularization techniques to avoid catastrophic forgetting in neural networks followed by memory based approaches and the role of generative models in assisting continual learning algorithms. We discuss how dynamic neural networks assist continual learning by endowing neural networks with a new capacity to learn further. We conclude by discussing possible future directions.",
"title": ""
},
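The survey above mentions parameter-regularization approaches to catastrophic forgetting. Below is a hedged, minimal PyTorch-style sketch of a quadratic penalty that anchors parameters to their values after a previous task; it uses uniform importance weights, so it is a simplification of methods such as EWC rather than any specific algorithm from the survey, and the model and data are toy stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in for a network trained on task 1
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}

def penalty(model, old_params, strength=100.0):
    """Quadratic penalty keeping parameters close to their previous-task values."""
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + ((p - old_params[n]) ** 2).sum()
    return strength * loss

# During training on the new task, add the penalty to the task loss.
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
task_loss = nn.functional.cross_entropy(model(x), y)
total_loss = task_loss + penalty(model, old_params)
total_loss.backward()
```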
{
"docid": "0784c4f87530aab020dbb8f15cba3127",
"text": "As mechanical end-effectors, microgrippers enable the pick–transport–place of micrometer-sized objects, such as manipulation and positioning of biological cells in an aqueous environment. This paper reports on a monolithic MEMS-based microgripper with integrated force feedback along two axes and presents the first demonstration of forcecontrolled micro-grasping at the nanonewton force level. The system manipulates highly deformable biomaterials (porcine interstitial cells) in an aqueous environment using a microgripper that integrates a V-beam electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). The MEMS-based microgripper and the force control system experimentally demonstrate the capability of rapid contact detection and reliable force-controlled micrograsping to accommodate variations in size and mechanical properties of objects with a high reproducibility. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "94b00d09c303d92a44c08fb211c7a8ed",
"text": "Pull-Request (PR) is the primary method for code contributions from thousands of developers in GitHub. To maintain the quality of software projects, PR review is an essential part of distributed software development. Assigning new PRs to appropriate reviewers will make the review process more effective which can reduce the time between the submission of a PR and the actual review of it. However, reviewer assignment is now organized manually in GitHub. To reduce this cost, we propose a reviewer recommender to predict highly relevant reviewers of incoming PRs. Combining information retrieval with social network analyzing, our approach takes full advantage of the textual semantic of PRs and the social relations of developers. We implement an online system to show how the reviewer recommender helps project managers to find potential reviewers from crowds. Our approach can reach a precision of 74% for top-1 recommendation, and a recall of 71% for top-10 recommendation.",
"title": ""
},
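As a toy illustration of the information-retrieval half of the approach above, one could rank past reviewers by textual similarity between a new pull request and the PRs they previously reviewed; the social-network component is omitted and the PR texts below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical history: (reviewer, text of a PR they reviewed).
history = [("alice", "fix memory leak in network buffer"),
           ("bob", "add unit tests for login api"),
           ("alice", "refactor buffer pooling and allocation")]
new_pr = "buffer overflow fix in network layer"

vec = TfidfVectorizer()
matrix = vec.fit_transform([t for _, t in history] + [new_pr])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Score each reviewer by their most similar previously reviewed PR.
ranking = {}
for (reviewer, _), s in zip(history, scores):
    ranking[reviewer] = max(ranking.get(reviewer, 0.0), s)
print(sorted(ranking.items(), key=lambda kv: -kv[1]))
```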
{
"docid": "fff53c626db93d568b4e9e6c13ef6f86",
"text": "We give a correspondence between enriched categories and the Gauss-Kleene-Floyd-Warshall connection familiar to computer scientists. This correspondence shows this generalization of categories to be a close cousin to the generalization of transitive closure algorithms. Via this connection we may bring categorical and 2-categorical constructions into an active but algebraically impoverished arena presently served only by semiring constructions. We illustrate these techniques by applying them to Birkoff’s poset arithmetic, interpretable as an algebra of “true concurrency.” The Floyd-Warshall algorithm for generalized transitive closure [AHU74] is the code fragment for v do for u, w do δuw + = δuv · δvw. Here δuv denotes an entry in a matrix δ, or equivalently a label on the edge from vertex u to vertex v in a graph. When the matrix entries are truth values 0 or 1, with + and · interpreted respectively as ∨ and ∧, we have Warshall’s algorithm for computing the transitive closure δ+ of δ, such that δ+ uv = 1 just when there exists a path in δ from u to v. When the entries are nonnegative reals, with + as min and · as addition, we have Floyd’s algorithm for computing all shortest paths in a graph: δ+ uv is the minimum, over all paths from u to v in δ, of the sum of the edges of each path. Other instances of this algorithm include Kleene’s algorithm for translating finite automata into regular expressions, and Gauss’s algorithm for inverting a matrix, in each case with an appropriate choice of semiring. Not only are these algorithms the same up to interpretation of the data, but so are their correctness proofs. This begs for a unifying framework, which is found in the notion of semiring. A semiring is a structure differing from a ring principally in that its additive component is not a group but merely a monoid, see AHU [AHU74] for a more formal treatment. Other matrix problems and algorithms besides Floyd-Warshall, such as matrix multiplication and the various recursive divide-and-conquer approaches to closure, also lend themselves to this abstraction. This abstraction supports mainly vertex-preserving operations on such graphs. Typical operations are, given two graphs δ, on a common set of vertices, to form their pointwise sum δ + defined as (δ + )uv = δuv + uv, their matrix product δ defined as (δ )uv = δu− · −v (inner product), along with their transitive, symmetric, and reflexive closures, all on the same vertex set. We would like to consider other operations that combine distinct vertex sets in various ways. The two basic operations we have in mind are the disjoint union and cartesian product of such graphs, along with such variations of these operations as pasting (as not-so-disjoint union), concatenation (as a disjoint union with additional edges from one component to the other), etc. An efficient way to obtain a usefully large library of such operations is to impose an appropriate categorical structure on the collection of such graphs. In this paper we show how to use enriched categories to provide such structure while at the same time extending the notion of semiring to the more general notion of monoidal category. In so doing we find two layers of categorical structure: 1 enriched categories in the lower layer, as a generalization of graphs, and ordinary categories in the upper layer having enriched categories for its objects. The graph operations we want to define are expressible as limits and colimits in the upper (ordinary) categories. 
We first make a connection between the two universes of graph theory and category theory. We assume at the outset that vertices of graphs correspond to objects of categories, both for ordinary categories and enriched categories. The interesting part is how the edges are treated. The underlying graph U(C) of a category C consists of the objects and morphisms of C, with no composition law or identities. But there may be more than one morphism between any two vertices, whereas in graph theory one ordinarily allows just one edge. These “multigraphs” of category theory would therefore appear to be a more general notion than the directed graphs of graph theory. A staple of graph theory however is the label, whether on a vertex or an edge. If we regard a homset as an edge labeled with a set then a multigraph is the case of an edge-labeled graph where the labels are sets. So a multigraph is intermediate in generality between a directed graph and an edge-labeled directed graph. So starting from graphs whose edges are labeled with sets, we may pass to categories by specifying identities and a composition law, or we may pass to edge-labeled graphs by allowing other labels than sets. What is less obvious is that we can elegantly and usefully do both at once, giving rise to enriched categories. The basic ideas behind enriched categories can be traced to Mac Lane [Mac65], with much of the detail worked out by Eilenberg and Kelly [EK65], with the many subsequent developments condensed by Kelly [Kel82]. Lawvere [Law73] provides a highly readable account of the concepts. We require of the edge labels only that they form a monoidal category. Roughly speaking this is a set bearing the structure of both a category and a monoid. Formally a monoidal category D = 〈D, ⊗, I, α, λ, ρ〉 is a category D = 〈D0, m, i〉, a functor ⊗: D² → D, an object I of D, and three natural isomorphisms α: c ⊗ (d ⊗ e) → (c ⊗ d) ⊗ e, λ: I ⊗ d → d, and ρ: d ⊗ I → d. (Here c ⊗ (d ⊗ e) and (c ⊗ d) ⊗ e denote the evident functors from D³ to D, and similarly for I ⊗ d, d ⊗ I and d as functors from D to D, where c, d, e are variables ranging over D.) These correspond to the three basic identities of the equational theory of monoids. To complete the definition of monoidal category we require a certain coherence condition, namely that the other identities of that theory be “generated” in exactly one way from these; see Mac Lane [Mac71] for details. A D-category, or (small) category enriched in a monoidal category D, is a quadruple 〈V, δ, m, i〉 consisting of a set V (which we think of as vertices of a graph), a function δ: V² → D0 (the edge-labeling function), a family m of morphisms muvw: δ(u, v) ⊗ δ(v, w) → δ(u, w) of D (the composition law), and a family i of morphisms iu: I → δ(u, u) (the identities), satisfying the following diagrams; the associativity diagram requires that the composites muwx ◦ (muvw ⊗ 1): (δ(u, v) ⊗ δ(v, w)) ⊗ δ(w, x) → δ(u, x) and muvx ◦ (1 ⊗ mvwx): δ(u, v) ⊗ (δ(v, w) ⊗ δ(w, x)) → δ(u, x) agree modulo the associativity isomorphism α.",
"title": ""
},
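The passage above generalizes the Gauss-Kleene-Floyd-Warshall code fragment to arbitrary semirings. A small sketch parameterized by the semiring's plus and times operations, instantiated for shortest paths and for reachability purely as examples, could be:

```python
def generalized_floyd_warshall(d, plus, times):
    """Closure of an edge-label matrix d over a semiring given by plus/times."""
    n = len(d)
    d = [row[:] for row in d]          # work on a copy
    for v in range(n):
        for u in range(n):
            for w in range(n):
                d[u][w] = plus(d[u][w], times(d[u][v], d[v][w]))
    return d

INF = float("inf")
# Shortest-path instance: plus = min, times = +  (Floyd's algorithm).
dist = [[0, 3, INF],
        [INF, 0, 1],
        [2, INF, 0]]
print(generalized_floyd_warshall(dist, min, lambda a, b: a + b))
# Reachability instance: plus = or, times = and  (Warshall's algorithm).
reach = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
print(generalized_floyd_warshall(reach, lambda a, b: a or b, lambda a, b: a and b))
```

The sketch assumes only that plus and times behave like the additive and multiplicative parts of a semiring, which is exactly the unifying requirement the passage describes.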
{
"docid": "7757fe9470f4def8fcec8021b3974519",
"text": "Reaction prediction and retrosynthesis are the cornerstones of organic chemistry. Rule-based expert systems have been the most widespread approach to computationally solve these two related challenges to date. However, reaction rules often fail because they ignore the molecular context, which leads to reactivity conflicts. Herein, we report that deep neural networks can learn to resolve reactivity conflicts and to prioritize the most suitable transformation rules. We show that by training our model on 3.5 million reactions taken from the collective published knowledge of the entire discipline of chemistry, our model exhibits a top10-accuracy of 95 % in retrosynthesis and 97 % for reaction prediction on a validation set of almost 1 million reactions.",
"title": ""
},
{
"docid": "3deced64cd17210f7e807e686c0221af",
"text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.",
"title": ""
},
{
"docid": "1227c910d47e61be05def5e80e462688",
"text": "Motivation\nThe identification of novel drug-target (DT) interactions is a substantial part of the drug discovery process. Most of the computational methods that have been proposed to predict DT interactions have focused on binary classification, where the goal is to determine whether a DT pair interacts or not. However, protein-ligand interactions assume a continuum of binding strength values, also called binding affinity and predicting this value still remains a challenge. The increase in the affinity data available in DT knowledge-bases allows the use of advanced learning techniques such as deep learning architectures in the prediction of binding affinities. In this study, we propose a deep-learning based model that uses only sequence information of both targets and drugs to predict DT interaction binding affinities. The few studies that focus on DT binding affinity prediction use either 3D structures of protein-ligand complexes or 2D features of compounds. One novel approach used in this work is the modeling of protein sequences and compound 1D representations with convolutional neural networks (CNNs).\n\n\nResults\nThe results show that the proposed deep learning based model that uses the 1D representations of targets and drugs is an effective approach for drug target binding affinity prediction. The model in which high-level representations of a drug and a target are constructed via CNNs achieved the best Concordance Index (CI) performance in one of our larger benchmark datasets, outperforming the KronRLS algorithm and SimBoost, a state-of-the-art method for DT binding affinity prediction.\n\n\nAvailability and implementation\nhttps://github.com/hkmztrk/DeepDTA.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
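The DeepDTA passage describes encoding drug SMILES strings and protein sequences with 1D CNNs and regressing binding affinity from the combined representation. Below is a stripped-down PyTorch sketch of such an architecture; the layer sizes, vocabularies and toy inputs are invented and do not reproduce the paper's configuration.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    def __init__(self, vocab_size, emb=32, channels=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.conv = nn.Sequential(nn.Conv1d(emb, channels, kernel_size=5, padding=2),
                                  nn.ReLU(),
                                  nn.AdaptiveMaxPool1d(1))
    def forward(self, x):                 # x: (batch, length) of token ids
        h = self.emb(x).transpose(1, 2)   # -> (batch, emb, length)
        return self.conv(h).squeeze(-1)   # -> (batch, channels)

class AffinityModel(nn.Module):
    def __init__(self, drug_vocab=64, prot_vocab=26):
        super().__init__()
        self.drug, self.prot = SeqEncoder(drug_vocab), SeqEncoder(prot_vocab)
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, drug_ids, prot_ids):
        z = torch.cat([self.drug(drug_ids), self.prot(prot_ids)], dim=1)
        return self.head(z).squeeze(-1)   # predicted binding affinity

# Toy batch of 2 (token ids would come from SMILES / amino-acid vocabularies).
model = AffinityModel()
print(model(torch.randint(1, 64, (2, 85)), torch.randint(1, 26, (2, 1000))).shape)
```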
{
"docid": "536d4a66e0e60b810e758dedf56ea5a9",
"text": "Erasure coding is an established data protection mechanism. It provides high resiliency with low storage overhead, which makes it very attractive to storage systems developers. Unfortunately, when used in a distributed setting, erasure coding hampers a storage system's performance, because it requires clients to contact several, possibly remote sites to retrieve their data. This has hindered the adoption of erasure coding in practice, limiting its use to cold, archival data. Recent research showed that it is feasible to use erasure coding for hot data as well, thus opening new perspectives for improving erasure-coded storage systems. In this paper, we address the problem of minimizing access latency in erasure-coded storage. We propose Agar-a novel caching system tailored for erasure-coded content. Agar optimizes the contents of the cache based on live information regarding data popularity and access latency to different data storage sites. Our system adapts a dynamic programming algorithm to optimize the choice of data blocks that are cached, using an approach akin to \"Knapsack\" algorithms. We compare Agar to the classical Least Recently Used and Least Frequently Used cache eviction policies, while varying the amount of data cached between a data chunk and a whole replica of the object. We show that Agar can achieve 16% to 41% lower latency than systems that use classical caching policies.",
"title": ""
},
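Agar's cache-content selection is described as a Knapsack-style dynamic program. A generic 0/1 knapsack sketch over candidate cache entries, with sizes standing in for block sizes and values for estimated latency savings (the candidates and numbers are illustrative), could look like this:

```python
def knapsack(items, capacity):
    """items: list of (name, size, value). Returns best total value and chosen names."""
    best = [[0] * (capacity + 1) for _ in range(len(items) + 1)]
    for i, (_, size, value) in enumerate(items, start=1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if size <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - size] + value)
    # Backtrack to recover which entries were selected.
    chosen, c = [], capacity
    for i in range(len(items), 0, -1):
        if best[i][c] != best[i - 1][c]:
            name, size, _ = items[i - 1]
            chosen.append(name)
            c -= size
    return best[len(items)][capacity], chosen

# Hypothetical candidates: (object chunks, size in cache units, popularity x latency saved).
candidates = [("objA:2chunks", 2, 30), ("objB:full", 5, 70), ("objC:1chunk", 1, 12)]
print(knapsack(candidates, 6))
```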
{
"docid": "6921cd9c2174ca96ec0061ae2dd881eb",
"text": "Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games' popularity and dedicated fan base are testaments to the allure of novel social interactions granted to people by allowing them an alternative life as a new character and persona. In this paper we investigate the phenomenon of \"gender swapping,\" which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped. We also discuss the effect of gender role and self-image in virtual social situations and the potential of our study for improving MMORPG quality and detecting online identity frauds.",
"title": ""
},
{
"docid": "d7156d395b4bf8b3fc7b5a7472b30a66",
"text": "Multimodal affective computing, learning to recognize and interpret human affect and subjective information from multiple data sources, is still challenging because:(i) it is hard to extract informative features to represent human affects from heterogeneous inputs; (ii) current fusion strategies only fuse different modalities at abstract levels, ignoring time-dependent interactions between modalities. Addressing such issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion to classify utterance-level sentiment and emotion from text and audio data. Our introduced model outperforms state-of-the-art approaches on published datasets, and we demonstrate that our model's synchronized attention over modalities offers visual interpretability.",
"title": ""
},
{
"docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1",
"text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.",
"title": ""
},
{
"docid": "3bd55f1a745aae146bb29e63b51fa85a",
"text": "Employing mixed-method approach, this case study examined the in situ use of educational computer games in a summer math program to facilitate 4th and 5th graders’ cognitive math achievement, metacognitive awareness, and positive attitudes toward math learning. The results indicated that students developed more positive attitudes toward math learning through five-week computer math gaming, but there was no significant effect of computer gaming on students’ cognitive test performance or metacognitive awareness development. The in-field observation and students’ think-aloud protocol informed that not every computer math drill game would engage children in committed learning. The study findings have highlighted the value of situating learning activities within the game story, making games pleasantly challenging, scaffolding reflections, and designing suitable off-computer activities. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5cf396e42e8708d768235f95bc8f227f",
"text": "This thesis examines how artificial neural networks can benefit a large vocabulary, speaker independent, continuous speech recognition system. Currently, most speech recognition systems are based on hidden Markov models (HMMs), a statistical framework that supports both acoustic and temporal modeling. Despite their state-of-the-art performance, HMMs make a number of suboptimal modeling assumptions that limit their potential effectiveness. Neural networks avoid many of these assumptions, while they can also learn complex functions, generalize effectively, tolerate noise, and support parallelism. While neural networks can readily be applied to acoustic modeling, it is not yet clear how they can be used for temporal modeling. Therefore, we explore a class of systems called NN-HMM hybrids, in which neural networks perform acoustic modeling, and HMMs perform temporal modeling. We argue that a NN-HMM hybrid has several theoretical advantages over a pure HMM system, including better acoustic modeling accuracy, better context sensitivity, more natural discrimination, and a more economical use of parameters. These advantages are confirmed experimentally by a NN-HMM hybrid that we developed, based on context-independent phoneme models, that achieved 90.5% word accuracy on the Resource Management database, in contrast to only 86.0% accuracy achieved by a pure HMM under similar conditions. In the course of developing this system, we explored two different ways to use neural networks for acoustic modeling: prediction and classification. We found that predictive networks yield poor results because of a lack of discrimination, but classification networks gave excellent results. We verified that, in accordance with theory, the output activations of a classification network form highly accurate estimates of the posterior probabilities P(class|input), and we showed how these can easily be converted to likelihoods P(input|class) for standard HMM recognition algorithms. Finally, this thesis reports how we optimized the accuracy of our system with many natural techniques, such as expanding the input window size, normalizing the inputs, increasing the number of hidden units, converting the network’s output activations to log likelihoods, optimizing the learning rate schedule by automatic search, backpropagating error from word level outputs, and using gender dependent networks.",
"title": ""
},
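The hybrid described above converts network posteriors P(class|input) into scaled likelihoods for HMM decoding by dividing by the class priors (Bayes' rule with the constant P(input) dropped). A one-screen numpy illustration with invented numbers:

```python
import numpy as np

# Posteriors from a phoneme classifier for 4 frames x 3 classes (rows sum to 1).
posteriors = np.array([[0.7, 0.2, 0.1],
                       [0.6, 0.3, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.2, 0.2, 0.6]])
priors = np.array([0.5, 0.3, 0.2])        # class frequencies in the training data

# P(input|class) is proportional to P(class|input) / P(class); take logs for HMM decoding.
scaled_log_likelihoods = np.log(posteriors) - np.log(priors)
print(scaled_log_likelihoods)
```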
{
"docid": "e43cc845368e69ef1278e7109d4d8d6f",
"text": "Estimating six degrees of freedom poses of a planar object from images is an important problem with numerous applications ranging from robotics to augmented reality. While the state-of-the-art Perspective-n-Point algorithms perform well in pose estimation, the success hinges on whether feature points can be extracted and matched correctly on target objects with rich texture. In this work, we propose a two-step robust direct method for six-dimensional pose estimation that performs accurately on both textured and textureless planar target objects. First, the pose of a planar target object with respect to a calibrated camera is approximately estimated by posing it as a template matching problem. Second, each object pose is refined and disambiguated using a dense alignment scheme. Extensive experiments on both synthetic and real datasets demonstrate that the proposed direct pose estimation algorithm performs favorably against state-of-the-art feature-based approaches in terms of robustness and accuracy under varying conditions. Furthermore, we show that the proposed dense alignment scheme can also be used for accurate pose tracking in video sequences.",
"title": ""
},
{
"docid": "fd4bddf9a5ff3c3b8577c46249bec915",
"text": "In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory in effect creating a neural network pushdown automata (NNPDA). This paper discusses in detail this NNPDA its construction, how it can be trained and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automata (PDA). Simulations show that in learning deterministic context-free grammars the balanced parenthesis language, 1 n0n, and the deterministic Palindrome the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.",
"title": ""
},
{
"docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
},
{
"docid": "9818399b4c119b58723c59e76bbfc1bd",
"text": "Many vertex-centric graph algorithms can be expressed using asynchronous parallelism by relaxing certain read-after-write data dependences and allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, by converting synchronous algorithms into their asynchronous counterparts, algorithms can be made tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values causing an increase in the number of iterations required by the algorithms to converge. Although by using bounded staleness we can restrict the slowdown in the rate of convergence, this also restricts the ability to tolerate communication latency. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. This is achieved via a coordinated use of best effort refresh policy and bounded staleness. We demonstrate that for a range of asynchronous graph algorithms and PDE solvers, on an average, our approach outperforms algorithms based upon: prior relaxed memory models that allow stale values by at least 2.27x; and Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach frequently outperforms GraphLab, a popular distributed graph processing framework.",
"title": ""
}
] |
scidocsrr
|
39869e478878d271a5e967c62470053e
|
High-Performance OCR for Printed English and Fraktur Using LSTM Networks
|
[
{
"docid": "a4a809852b08a7f0a83fc97fcd9b0b9d",
"text": "This paper proposes the use of hybrid Hidden Markov Model (HMM)/Artificial Neural Network (ANN) models for recognizing unconstrained offline handwritten texts. The structural part of the optical models has been modeled with Markov chains, and a Multilayer Perceptron is used to estimate the emission probabilities. This paper also presents new techniques to remove slope and slant from handwritten text and to normalize the size of text images with supervised learning methods. Slope correction and size normalization are achieved by classifying local extrema of text contours with Multilayer Perceptrons. Slant is also removed in a nonuniform way by using Artificial Neural Networks. Experiments have been conducted on offline handwritten text lines from the IAM database, and the recognition rates achieved, in comparison to the ones reported in the literature, are among the best for the same task.",
"title": ""
},
{
"docid": "9be0134d63fbe6978d786280fb133793",
"text": "Yann Le Cun AT&T Bell Labs Holmdel NJ 07733 We introduce a new approach for on-line recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution \"annotated images\" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.",
"title": ""
}
] |
[
{
"docid": "104148028f4d0e2775274ef7d2e8b2ed",
"text": "Funneling and saltation are two major illusory feedback techniques for vibration-based tactile feedback. They are often put into practice e.g. to reduce the number of vibrators to be worn on the body and thereby build a less cumbersome feedback device. Recently, these techniques have been found to be applicable to eliciting \"out of the body\" experiences as well (e.g. through user-held external objects). This paper examines the possibility of applying this phenomenon to interacting with virtual objects. Two usability experiments were run to test the effects of funneling and saltation respectively for perceiving tactile sensation from a virtual object in an augmented reality setting. Experimental results have shown solid evidences for phantom sensations from virtual objects with funneling, but mixed results for saltation.",
"title": ""
},
{
"docid": "b9d12a2c121823a81902375f6be893bb",
"text": "Internet users are often victimized by malicious attackers. Some attackers infect and use innocent users’ machines to launch large-scale attacks without the users’ knowledge. One of such attacks is the click-fraud attack. Click-fraud happens in Pay-Per-Click (PPC) ad networks where the ad network charges advertisers for every click on their ads. Click-fraud has been proved to be a serious problem for the online advertisement industry. In a click-fraud attack, a user or an automated software clicks on an ad with a malicious intent and advertisers need to pay for those valueless clicks. Among many forms of click-fraud, botnets with the automated clickers are the most severe ones. In this paper, we present a method for detecting automated clickers from the user-side. The proposed method to Fight Click-Fraud, FCFraud, can be integrated into the desktop and smart device operating systems. Since most modern operating systems already provide some kind of anti-malware service, our proposed method can be implemented as a part of the service. We believe that an effective protection at the operating system level can save billions of dollars of the advertisers. Experiments show that FCFraud is 99.6% (98.2% in mobile ad library generated traffic) accurate in classifying ad requests from all user processes and it is 100% successful in detecting clickbots in both desktop and mobile devices. We implement a cloud backend for the FCFraud service to save battery power in mobile devices. The overhead of executing FCFraud is also analyzed and we show that it is reasonable for both the platforms. Copyright c © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "f1d0f218b789ac104448777c82a4093f",
"text": "This paper critically reviews the literature on managing diversity through human resource management (HRM). We discuss the major issues and objectives of managing diversity and examine the state of human resource diversity management practices in organizations. Our review shows that inequality and discrimination still widely exist and HRM has focused mainly on compliance with equal employment opportunity (EEO) and affirmative action (AA) legislation. Less attention has been paid to valuing, developing and making use of diversity. Our review reveals limited literature examining how diversity is managed in organizations through effective human resource management. We develop a framework that presents strategies for HR diversity management at the strategic, tactical and operational levels. Our review also discusses the implications for practice and further research.",
"title": ""
},
{
"docid": "0b82b60a19e1895ffa103ebb23ec9920",
"text": "Logistics and supply chain management is an area that evolved deeply in the past years, integrating developments of other areas of knowledge, both entrepreneurial and general. In this paper, a perspective of the evolution of logistics and supply chain management is somehow designed. Traditionally, one may find logistics and supply chain management in friction with marketing and claiming for its own space. Nowadays, it seems difficult to see internal (logistics) versus external (marketing) wars and different orientations between marketing and logistics because they are both service and relations oriented. Simple transactions have been substituted, long time ago, for sustainable relations in the area of logistics and supply chain management. Finally, a more service oriented logic has been the footprint of logistics and supply chain management in current days and not, as pretended for some current rows of investigation, a simple transaction approach under a goods dominant logic. Logistics and supply chain management is nowadays in parallel with an S-D logic (service dominant logic) because it is an area where relations matter, where sustainable links between networks of companies are crucial and where service is key in order to accommodate the contemporary thoughts and practices in the area. The main purpose of the paper is to stress the point that logistics and supply chain management is an area of service and value creation (or co-creation) and not a simple area of goods exchange and simple transactions.",
"title": ""
},
{
"docid": "8b6b970a179eb2b357dace2b6e55d5d6",
"text": "Unmanned aerial vehicles (UAVs) have been recently considered as means to provide enhanced coverage or relaying services to mobile users (MUs) in wireless systems with limited or no infrastructure. In this paper, a UAV-based mobile cloud computing system is studied in which a moving UAV is endowed with computing capabilities to offer computation offloading opportunities to MUs with limited local processing capabilities. The system aims at minimizing the total mobile energy consumption while satisfying quality of service requirements of the offloaded mobile application. Offloading is enabled by uplink and downlink communications between the mobile devices and the UAV, which take place by means of frequency division duplex via orthogonal or nonorthogonal multiple access schemes. The problem of jointly optimizing the bit allocation for uplink and downlink communications as well as for computing at the UAV, along with the cloudlet's trajectory under latency and UAV's energy budget constraints is formulated and addressed by leveraging successive convex approximation strategies. Numerical results demonstrate the significant energy savings that can be accrued by means of the proposed joint optimization of bit allocation and cloudlet's trajectory as compared to local mobile execution as well as to partial optimization approaches that design only the bit allocation or the cloudlet's trajectory.",
"title": ""
},
{
"docid": "991a8c7011548af52367e426ba9beed6",
"text": "Dihydrogen, methane, and carbon dioxide isotherm measurements were performed at 1-85 bar and 77-298 K on the evacuated forms of seven porous covalent organic frameworks (COFs). The uptake behavior and capacity of the COFs is best described by classifying them into three groups based on their structural dimensions and corresponding pore sizes. Group 1 consists of 2D structures with 1D small pores (9 A for each of COF-1 and COF-6), group 2 includes 2D structures with large 1D pores (27, 16, and 32 A for COF-5, COF-8, and COF-10, respectively), and group 3 is comprised of 3D structures with 3D medium-sized pores (12 A for each of COF-102 and COF-103). Group 3 COFs outperform group 1 and 2 COFs, and rival the best metal-organic frameworks and other porous materials in their uptake capacities. This is exemplified by the excess gas uptake of COF-102 at 35 bar (72 mg g(-1) at 77 K for hydrogen, 187 mg g(-1) at 298 K for methane, and 1180 mg g(-1) at 298 K for carbon dioxide), which is similar to the performance of COF-103 but higher than those observed for COF-1, COF-5, COF-6, COF-8, and COF-10 (hydrogen at 77 K, 15 mg g(-1) for COF-1, 36 mg g(-1) for COF-5, 23 mg g(-1) for COF-6, 35 mg g(-1) for COF-8, and 39 mg g(-1) for COF-10; methane at 298 K, 40 mg g(-1) for COF-1, 89 mg g(-1) for COF-5, 65 mg g(-1) for COF-6, 87 mg g(-1) for COF-8, and 80 mg g(-1) for COF-10; carbon dioxide at 298 K, 210 mg g(-1) for COF-1, 779 mg g(-1) for COF-5, 298 mg g(-1) for COF-6, 598 mg g(-1) for COF-8, and 759 mg g(-1) for COF-10). These findings place COFs among the most porous and the best adsorbents for hydrogen, methane, and carbon dioxide.",
"title": ""
},
{
"docid": "1adc476c1e322d7cc7a0c93e726a8e2c",
"text": "A wireless body area network is a radio-frequency- based wireless networking technology that interconnects tiny nodes with sensor or actuator capabilities in, on, or around a human body. In a civilian networking environment, WBANs provide ubiquitous networking functionalities for applications varying from healthcare to safeguarding of uniformed personnel. This article surveys pioneer WBAN research projects and enabling technologies. It explores application scenarios, sensor/actuator devices, radio systems, and interconnection of WBANs to provide perspective on the trade-offs between data rate, power consumption, and network coverage. Finally, a number of open research issues are discussed.",
"title": ""
},
{
"docid": "fb1f3f300bcd48d99f0a553a709fdc89",
"text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.",
"title": ""
},
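The incremental-conductance rule referenced above compares dI/dV with -I/V and nudges the operating voltage toward the maximum power point. Below is a bare-bones sketch of one IC update step; it is the generic textbook formulation, not the paper's D-sweep-modified version, and the sample values are invented.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_step=0.5):
    """One incremental-conductance MPPT update; returns the new voltage reference."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                      # operating voltage unchanged
        if di == 0:
            return v                 # at the MPP, hold
        return v + v_step if di > 0 else v - v_step
    # In practice the equality tests below would use a small tolerance.
    if di / dv == -i / v:            # dP/dV = 0  ->  at the MPP
        return v
    if di / dv > -i / v:             # dP/dV > 0  ->  MPP is at a higher voltage
        return v + v_step
    return v - v_step                # dP/dV < 0  ->  MPP is at a lower voltage

# Hypothetical PV panel samples taken over two consecutive control cycles.
print(inc_cond_step(v=17.0, i=5.2, v_prev=16.5, i_prev=5.3))
```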
{
"docid": "9ea0bcfd712f3d8ef3011848eb36515e",
"text": "Most devices deployed in the Internet of Things (IoT) are expected to suffer from resource constraints. Using specialized tools on such devices for monitoring IoT networks would take away precious resources that could otherwise be dedicated towards their primary task. In many IoT applications such as Advanced Metering Infrastructure (AMI) networks, higher order devices are expected to form the backbone infrastructure, to which the constrained nodes would connect. It would, as such, make sense to exploit the capabilities of these higher order devices to perform network monitoring tasks. We propose in this paper a distributed monitoring architecture that takes benefits from specificities of the IoT routing protocol RPL to passively monitor events and network flows without having impact upon the resource constrained nodes. We describe the underlying mechanisms of this architecture, quantify its performances through a set of experiments using the Cooja environment. We also evaluate its benefits and limits through a use case scenario dedicated to anomaly detection.",
"title": ""
},
{
"docid": "dc4635953c8212ba51bf6ad3e98494b4",
"text": "A new class of wideband multisection 180deg hybrid rings using the vertically installed planar (VIP) coupler is proposed. On the basis of the reconfigured ideal single-section 180deg hybrid ring (i.e., the 180deg hybrid ring with an ideal phase inverter), the multisection 180deg hybrid rings can be realized by properly cascading of single-section 180deg hybrid rings. Compared with the conventional hybrid ring, the two-section hybrid rings exhibit wide bandwidth, size reduction, and easily achievable high power-division ratios. Design equations based on the equal-ripple functions are derived. Design curves for the equal and unequal power-division ratio are also described. In addition, a cascade of single-section cascadable hybrid rings with a unit element at each I/O port can be used for bandwidth enhancement. Good agreement is obtained between the experimental and simulated results",
"title": ""
},
{
"docid": "429f6a87ceebf0bd2b852c1a1ab91eb2",
"text": "BACKGROUND\nIn some countries extracts of the plant Hypericum perforatum L. (popularly called St. John's wort) are widely used for treating patients with depressive symptoms.\n\n\nOBJECTIVES\nTo investigate whether extracts of hypericum are more effective than placebo and as effective as standard antidepressants in the treatment of major depression; and whether they have fewer adverse effects than standard antidepressant drugs.\n\n\nSEARCH STRATEGY\nTrials were searched in computerised databases, by checking bibliographies of relevant articles, and by contacting manufacturers and researchers.\n\n\nSELECTION CRITERIA\nTrials were included if they: (1) were randomised and double-blind; (2) included patients with major depression; (3) compared extracts of St. John's wort with placebo or standard antidepressants; (4) included clinical outcomes assessing depressive symptoms.\n\n\nDATA COLLECTION AND ANALYSIS\nAt least two independent reviewers extracted information from study reports. The main outcome measure for assessing effectiveness was the responder rate ratio (the relative risk of having a response to treatment). The main outcome measure for adverse effects was the number of patients dropping out due to adverse effects.\n\n\nMAIN RESULTS\nA total of 29 trials (5489 patients) including 18 comparisons with placebo and 17 comparisons with synthetic standard antidepressants met the inclusion criteria. Results of placebo-controlled trials showed marked heterogeneity. In nine larger trials the combined response rate ratio (RR) for hypericum extracts compared with placebo was 1.28 (95% confidence interval (CI), 1.10 to 1.49) and from nine smaller trials was 1.87 (95% CI, 1.22 to 2.87). Results of trials comparing hypericum extracts and standard antidepressants were statistically homogeneous. Compared with tri- or tetracyclic antidepressants and selective serotonin reuptake inhibitors (SSRIs), respectively, RRs were 1.02 (95% CI, 0.90 to 1.15; 5 trials) and 1.00 (95% CI, 0.90 to 1.11; 12 trials). Both in placebo-controlled trials and in comparisons with standard antidepressants, trials from German-speaking countries reported findings more favourable to hypericum. Patients given hypericum extracts dropped out of trials due to adverse effects less frequently than those given older antidepressants (odds ratio (OR) 0.24; 95% CI, 0.13 to 0.46) or SSRIs (OR 0.53, 95% CI, 0.34-0.83).\n\n\nAUTHORS' CONCLUSIONS\nThe available evidence suggests that the hypericum extracts tested in the included trials a) are superior to placebo in patients with major depression; b) are similarly effective as standard antidepressants; c) and have fewer side effects than standard antidepressants. The association of country of origin and precision with effects sizes complicates the interpretation.",
"title": ""
},
{
"docid": "47d673d7b917f3948274f1e32a847a35",
"text": "Real-time lane detection and tracking is one of the most reliable approaches to prevent road accidents by alarming the driver of the excessive lane changes. This paper addresses the problem of correct lane detection and tracking of the current lane of a vehicle in real-time. We propose a solution that is computationally efficient and performs better than previous approaches. The proposed algorithm is based on detecting straight lines from the captured road image, marking a region of interest, filtering road marks and detecting the current lane by using the information gathered. This information is obtained by analyzing the geometric shape of the lane boundaries and the convergence point of the lane markers. To provide a feasible solution, the only sensing modality on which the algorithm depends on is the camera of an off-the-shelf mobile device. The proposed algorithm has a higher average accuracy of 96.87% when tested on the Caltech Lanes Dataset as opposed to the state-of-the-art technology for lane detection. The algorithm operates on three frames per second on a 2.26 GHz quad-core processor of a mobile device with an image resolution of 640×480 pixels. It is tested and verified under various visibility and road conditions.",
"title": ""
},
{
"docid": "2a8c3676233cf1ae61fe91a7af3873d9",
"text": "Rumination has attracted increasing theoretical and empirical interest in the past 15 years. Previous research has demonstrated significant relationships between rumination, depression, and metacognition. Two studies were conducted to further investigate these relationships and test the fit of a clinical metacognitive model of rumination and depression in samples of both depressed and nondepressed participants. In these studies, we collected cross-sectional data of rumination, depression, and metacognition. The relationships among variables were examined by testing the fit of structural equation models. In the study on depressed participants, a good model fit was obtained consistent with predictions. There were similarities and differences between the depressed and nondepressed samples in terms of relationships among metacognition, rumination, and depression. In each case, theoretically consistent paths between positive metacognitive beliefs, rumination, negative metacognitive beliefs, and depression were evident. The conceptual and clinical implications of these data are discussed.",
"title": ""
},
{
"docid": "b55181a8fa2b0a3ffe0ed02ef44f3b63",
"text": "This article introduces the study contents and some research findings regarding digital preservation methods for Chinese Kunqu opera libretto historical literature, including historical literature electronic libretto transformation, libretto musical score image segmentation, musical information recognition, musical score information representation, musical score information storage, and libretto reconstruction on the Web. It proposes a novel editable text method to represent the multidimensional tree-like information structure of the Kunqu libretto literature and a musical semantic annotation method based on numbered musical notation to accommodate the musical features of Kunqu librettos. To maintain the characteristics of the original Kunqu musical notation, it proposes a method to reconstruct Kunqu libretto on the Web based on scalable vector graphics. Some Kunqu librettos were randomly selected for experiments, and the results demonstrated that the editable text method and the musical semantic annotation method were able to fully represent the effective information of the Kunqu libretto literature and that the method to reconstruct librettos on the Web was able to reflect the writing characteristics of the musical notation in the original librettos. Finally, it discusses the primary future research directions related to digital Kunqu, including Kunqu libretto metadata research, corpus construction for the librettos and Qupai (the unique ancient Chinese tune mode), libretto music information disambiguation research, libretto image segmentation and pattern recognition, digital Kunqu roles, digital Kunqu stages, digital Kunqu costume suitcases, virtual Kunqu, digitization and restoration of Kunqu cultural relics, and Kunqu's application prospects in conventional media such as animation, anime, and movies.",
"title": ""
},
{
"docid": "7c1fd4f8978e012ed00249271ed8c0cf",
"text": "Graph clustering aims to discovercommunity structures in networks, the task being fundamentally challenging mainly because the topology structure and the content of the graphs are difficult to represent for clustering analysis. Recently, graph clustering has moved from traditional shallow methods to deep learning approaches, thanks to the unique feature representation learning capability of deep learning. However, existing deep approaches for graph clustering can only exploit the structure information, while ignoring the content information associated with the nodes in a graph. In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering. The key innovation of MGAE is that it advances the autoencoder to the graph domain, so graph representation learning can be carried out not only in a purely unsupervised setting by leveraging structure and content information, it can also be stacked in a deep fashion to learn effective representation. From a technical viewpoint, we propose a marginalized graph convolutional network to corrupt network node content, allowing node content to interact with network features, and marginalizes the corrupted features in a graph autoencoder context to learn graph feature representations. The learned features are fed into the spectral clustering algorithm for graph clustering. Experimental results on benchmark datasets demonstrate the superior performance of MGAE, compared to numerous baselines.",
"title": ""
},
{
"docid": "76e7f63fa41d6d457e6e4386ad7b9896",
"text": "A growing body of work has highlighted the challenges of identifying the stance that a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts from the debate website ConvinceMe.net, for 14 topics ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for classifying stance on a per topic basis that range from 60% to 75%, as compared to unigram baselines that vary between 47% and 66%. Our results suggest that features and methods that take into account the dialogic context of such posts improve accuracy.",
"title": ""
},
{
"docid": "210e9bc5f2312ca49438e6209ecac62e",
"text": "Image classification has become one of the main tasks in the field of computer vision technologies. In this context, a recent algorithm called CapsNet that implements an approach based on activity vectors and dynamic routing between capsules may overcome some of the limitations of the current state of the art artificial neural networks (ANN) classifiers, such as convolutional neural networks (CNN). In this paper, we evaluated the performance of the CapsNet algorithm in comparison with three well-known classifiers (Fisherfaces, LeNet, and ResNet). We tested the classification accuracy on four datasets with a different number of instances and classes, including images of faces, traffic signs, and everyday objects. The evaluation results show that even for simple architectures, training the CapsNet algorithm requires significant computational resources and its classification performance falls below the average accuracy values of the other three classifiers. However, we argue that CapsNet seems to be a promising new technique for image classification, and further experiments using more robust computation resources and refined CapsNet architectures may produce better outcomes.",
"title": ""
},
{
"docid": "046f6c5cc6065c1cb219095fb0dfc06f",
"text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.",
"title": ""
},
{
"docid": "2b952c455c9f8daa7f6c0c024620aef8",
"text": "Broadband use is booming around the globe as the infrastructure is built to provide high speed Internet and Internet Protocol television (IPTV) services. Driven by fierce competition and the search for increasing average revenue per user (ARPU), operators are evolving so they can deliver services within the home that involve a wide range of technologies, terminals, and appliances, as well as software that is increasingly rich and complex. “It should all work” is the key theme on the end user's mind, yet call centers are confronted with a multitude of consumer problems. The demarcation point between provider network and home network is blurring, in fact, if not yet in the consumer's mind. In this context, operators need to significantly rethink service lifecycle management. This paper explains how home and access support systems cover the most critical part of the network in service delivery. They build upon the inherent operation support features of access multiplexers, network termination devices, and home devices to allow the planning, fulfillment, operation, and assurance of new services.",
"title": ""
},
{
"docid": "8c1e70cf4173f9fc48f36c3e94216f15",
"text": "Deep learning methods often require large annotated data sets to estimate their high numbers of parameters, which is not practical for many robotic domains. One way to migitate this issue is to transfer features learned on large datasets to related tasks. In this work, we describe the perception system developed for the entry of team NimbRo Picking into the Amazon Picking Challenge 2016. Object detection and semantic Segmentation methods are adapted to the domain, including incorporation of depth measurements. To avoid the need for large training datasets, we make use of pretrained models whenever possible, e.g. CNNs pretrained on ImageNet, and the whole DenseCap captioning pipeline pretrained on the Visual Genome Dataset. Our system performed well at the APC 2016 and reached second and third places for the stow and pick tasks, respectively.",
"title": ""
}
] |
scidocsrr
|
5d85716d40d4b1d5f191dd594f9b470b
|
A Dynamic Processor Allocation Policy for Multiprogrammed Shared-memory Multiprocessors
|
[
{
"docid": "829b910e2c73ee15866fc59de4884200",
"text": "Shared-memory multiprocessors are frequently used as compute servers with multiple parallel applications executing at the same time. In such environments, the efficiency of a parallel application can be significantly affected by the operating system scheduling policy. In this paper, we use detailed simulation studies to evaluate the performance of several different scheduling strategies, These include regular priority scheduling, coscheduling or gang scheduling, process control with processor partitioning, handoff scheduling, and affinity-based scheduling. We also explore tradeoffs between the use of busy-waiting and blocking synchronization primitives and their interactions with the scheduling strategies. Since effective use of caches is essential to achieving high performance, a key focus is on the impact of the scheduling strategies on the caching behavior of the applications.Our results show that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization. In such situations, use of blocking synchronization primitives can significantly improve performance. Process control and gang scheduling strategies are shown to offer the highest performance, and their performance is relatively independent of the synchronization method used. However, for applications that have sizable working sets that fit into the cache, process control performs better than gang scheduling. For the applications considered, the performance gains due to handoff scheduling and processor affinity are shown to be small.",
"title": ""
}
] |
[
{
"docid": "235e6e4537e9f336bf80e6d648fdc8fb",
"text": "Communication between the deaf and non-deaf has always been a very cumbersome task. This paper aims to cover the various prevailing methods of deaf-mute communication interpreter system. The two broad classification of the communication methodologies used by the deaf –mute people are Wearable Communication Device and Online Learning System. Under Wearable communication method, there are Glove based system, Keypad method and Handicom Touchscreen. All the above mentioned three sub-divided methods make use of various sensors, accelerometer, a suitable microcontroller, a text to speech conversion module, a keypad and a touch-screen. The need for an external device to interpret the message between a deaf –mute and non-deaf-mute people can be overcome by the second method i.e online learning system. The Online Learning System has different methods under it, five of which are explained in this paper. The five sub-divided methods areSLIM module, TESSA, Wi-See Technology, SWI_PELE System and Web-Sign Technology. The working of the individual components used and the operation of the whole system for the communication purpose has been explained in detail in this paper.",
"title": ""
},
{
"docid": "b3bc34cfbe6729f7ce540a792c32bf4c",
"text": "The employment of MIMO OFDM technique constitutes a cost effective approach to high throughput wireless communications. The system performance is sensitive to frequency offset which increases with the doppler spread and causes Intercarrier interference (ICI). ICI is a major concern in the design as it can potentially cause a severe deterioration of quality of service (QoS) which necessitates the need for a high speed data detection and decoding with ICI cancellation along with the intersymbol interference (ISI) cancellation in MIMO OFDM communication systems. Iterative parallel interference canceller (PIC) with joint detection and decoding is a promising approach which is used in this work. The receiver consists of a two stage interference canceller. The co channel interference cancellation is performed based on Zero Forcing (ZF) Detection method used to suppress the effect of ISI in the first stage. The latter stage consists of a simplified PIC scheme. High bit error rates of wireless communication system require employing forward error correction (FEC) methods on the data transferred in order to avoid burst errors that occur in physical channel. To achieve high capacity with minimum error rate Low Density Parity Check (LDPC) codes which have recently drawn much attention because of their error correction performance is used in this system. The system performance is analyzed for two different values of normalized doppler shift for varying speeds. The bit error rate (BER) is shown to improve in every iteration due to the ICI cancellation. The interference analysis with the use of ICI cancellation is examined for a range of normalized doppler shift which corresponds to mobile speeds varying from 5Km/hr to 250Km/hr.",
"title": ""
},
{
"docid": "80d1237fff963ebf4bcc5fab67c68f4e",
"text": "Researchers have studied whether some youth are \"addicted\" to video games, but previous studies have been based on regional convenience samples. Using a national sample, this study gathered information about video-gaming habits and parental involvement in gaming, to determine the percentage of youth who meet clinical-style criteria for pathological gaming. A Harris poll surveyed a randomly selected sample of 1,178 American youth ages 8 to 18. About 8% of video-game players in this sample exhibited pathological patterns of play. Several indicators documented convergent and divergent validity of the results: Pathological gamers spent twice as much time playing as nonpathological gamers and received poorer grades in school; pathological gaming also showed comorbidity with attention problems. Pathological status significantly predicted poorer school performance even after controlling for sex, age, and weekly amount of video-game play. These results confirm that pathological gaming can be measured reliably, that the construct demonstrates validity, and that it is not simply isomorphic with a high amount of play.",
"title": ""
},
{
"docid": "2f48b326aaa7b41a7ee347cedce344ed",
"text": "In this paper a new kind of quasi-quartic trigonometric polynomial base functions with two shape parameters λ and μ over the space Ω = span {1, sin t, cos t, sin2t, cos2t, sin3t, cos3t} is presented and the corresponding quasi-quartic trigonometric Bézier curves and surfaces are defined by the introduced base functions. Each curve segment is generated by five consecutive control points. The shape of the curve can be adjusted by altering the values of shape parameters while the control polygon is kept unchanged. These curves inherit most properties of the usual quartic Bézier curves in the polynomial space and they can be used as an efficient new model for geometric design in the fields of CAGD.",
"title": ""
},
{
"docid": "53343bc045189bf7578619e7d60a36ba",
"text": "Financial technology (FinTech) is the new business model and technology which aims to compete with traditional financial services and blockchain is one of most famous technology use of FinTech. Blockchain is a type of distributed, electronic database (ledger) which can hold any information (e.g. records, events, transactions) and can set rules on how this information is updated. The most well-known application of blockchain is bitcoin, which is a kind of cryptocurrencies. But it can also be used in many other financial and commercial applications. A prominent example is smart contracts, for instance as offered in Ethereum. A contract can execute a transfer when certain events happen, such as payment of a security deposit, while the correct execution is enforced by the consensus protocol. The purpose of this paper is to explore the research and application landscape of blockchain technology acceptance by following a more comprehensive approach to address blockchain technology adoption. This research is to propose a unified model integrating Innovation Diffusion Theory (IDT) model and Technology Acceptance Model (TAM) to investigate continuance intention to adopt blockchain technology.",
"title": ""
},
{
"docid": "e7a13f146c77d52b72a691ebb6671240",
"text": "The recent diversification of telephony infrastructure allows users to communicate through landlines, mobile phones and VoIP phones. However, call metadata such as Caller-ID is either not transferred or transferred without verification across these networks, allowing attackers to maliciously alter it. In this paper, we develop PinDr0p, a mechanism to assist users in determining call provenance - the source and the path taken by a call. Our techniques detect and measure single-ended audio features to identify all of the applied voice codecs, calculate packet loss and noise profiles, while remaining agnostic to characteristics of the speaker's voice (as this may legitimately change when interacting with a large organization). In the absence of verifiable call metadata, these features in combination with machine learning allow us to determine the traversal of a call through as many as three different providers (e.g., cellular, then VoIP, then PSTN and all combinations and subsets thereof) with 91.6% accuracy. Moreover, we show that once we identify and characterize the networks traversed, we can create detailed fingerprints for a call source. Using these fingerprints we show that we are able to distinguish between calls made using specific PSTN, cellular, Vonage, Skype and other hard and soft phones from locations across the world with over 90% accuracy. In so doing, we provide a first step in accurately determining the provenance of a call.",
"title": ""
},
{
"docid": "9f8314b5cc0c480d6c596efcc1875d3b",
"text": "Machine learning and computer vision have driven many of the greatest advances in the modeling of Deep Convolutional Neural Networks (DCNNs). Nowadays, most of the research has been focused on improving recognition accuracy with better DCNN models and learning approaches. The recurrent convolutional approach is not applied very much, other than in a few DCNN architectures. On the other hand, Inception-v4 and Residual networks have promptly become popular among computer the vision community. In this paper, we introduce a new DCNN model called the Inception Recurrent Residual Convolutional Neural Network (IRRCNN), which utilizes the power of the Recurrent Convolutional Neural Network (RCNN), the Inception network, and the Residual network. This approach improves the recognition accuracy of the Inception-residual network with same number of network parameters. In addition, this proposed architecture generalizes the Inception network, the RCNN, and the Residual network with significantly improved training accuracy. We have empirically evaluated the performance of the IRRCNN model on different benchmarks including CIFAR-10, CIFAR-100, TinyImageNet-200, and CU3D-100. The experimental results show higher recognition accuracy against most of the popular DCNN models including the RCNN. We have also investigated the performance of the IRRCNN approach against the Equivalent Inception Network (EIN) and the Equivalent Inception Residual Network (EIRN) counterpart on the CIFAR-100 dataset. We report around 4.53, 4.49 and 3.56% improvement in classification accuracy compared with the RCNN, EIN, and EIRN on the CIFAR-100 dataset respectively. Furthermore, the experiment has been conducted on the TinyImageNet-200 and CU3D-100 datasets where the IRRCNN provides better testing accuracy compared to the Inception Recurrent CNN, the EIN, the EIRN, Inception-v3, and Wide Residual Networks.",
"title": ""
},
{
"docid": "36357f48cbc3ed4679c679dcb77bdd81",
"text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.",
"title": ""
},
{
"docid": "a2faba3e69563acf9e874bf4c4327b5d",
"text": "We analyze a mobile wireless link comprising M transmitter andN receiver antennas operating in a Rayleigh flat-fading environment. The propagation coef fici nts between every pair of transmitter and receiver antennas are statistically independent and un known; they remain constant for a coherence interval ofT symbol periods, after which they change to new independent v alues which they maintain for anotherT symbol periods, and so on. Computing the link capacity, associated with channel codin g over multiple fading intervals, requires an optimization over the joint density of T M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater t han the length of the coherence interval: the capacity forM > T is equal to the capacity for M = T . Capacity is achieved when the T M transmitted signal matrix is equal to the product of two stat i ically independent matrices: a T T isotropically distributed unitary matrix times a certain T M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity f or many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence i nterval increases, the capacity approaches the capacity obtained as if the receiver knew the propagatio n coefficients. Index Terms —Multi-element antenna arrays, wireless communications, space-time modulation",
"title": ""
},
{
"docid": "74ef9ec31d4799845765c7752f95720d",
"text": "With the rapid growth of social networks and microblogging websites, communication between people from different cultural and psychological backgrounds has become more direct, resulting in more and more “cyber” conflicts between these people. Consequently, hate speech is used more and more, to the point where it has become a serious problem invading these open spaces. Hate speech refers to the use of aggressive, violent or offensive language, targeting a specific group of people sharing a common property, whether this property is their gender (i.e., sexism), their ethnic group or race (i.e., racism) or their believes and religion. While most of the online social networks and microblogging websites forbid the use of hate speech, the size of these networks and websites makes it almost impossible to control all of their content. Therefore, arises the necessity to detect such speech automatically and filter any content that presents hateful language or language inciting to hatred. In this paper, we propose an approach to detect hate expressions on Twitter. Our approach is based on unigrams and patterns that are automatically collected from the training set. These patterns and unigrams are later used, among others, as features to train a machine learning algorithm. Our experiments on a test set composed of 2010 tweets show that our approach reaches an accuracy equal to 87.4% on detecting whether a tweet is offensive or not (binary classification), and an accuracy equal to 78.4% on detecting whether a tweet is hateful, offensive, or clean (ternary classification).",
"title": ""
},
{
"docid": "a66765e24b6cfdab2cc0b30de8afd12e",
"text": "A broadband transition structure from rectangular waveguide (RWG) to microstrip line (MSL) is presented for the realization of the low-loss packaging module using Low-temperature co-fired ceramic (LTCC) technology at W-band. In this transition, a cavity structure is buried in LTCC layers, which provides the wide bandwidth, and a laminated waveguide (LWG) transition is designed, which provides the low-loss performance, as it reduces the radiation loss of conventional direct transition between RWG and MSL. The design procedure is also given. The measured results show that the insertion loss of better than 0.7 dB from 86 to 97 GHz can be achieved.",
"title": ""
},
{
"docid": "836a5f20cc1765e664e0d4386735efdb",
"text": "Although a software application always executes within a particular environment, current testing methods have largely ignored these environmental factors. Many applications execute in an environment that contains a database. In this paper, we propose a family of test adequacy criteria that can be used to assess the quality of test suites for database-driven applications. Our test adequacy criteria use dataflow information that is associated with the entities in a relational database. Furthermore, we develop a unique representation of a database-driven application that facilitates the enumeration of database interaction associations. These associations can reflect an application's definition and use of database entities at multiple levels of granularity. The usage of a tool to calculate intraprocedural database interaction associations for two case study applications indicates that our adequacy criteria can be computed with an acceptable time and space overhead.",
"title": ""
},
{
"docid": "1d195fb4df8375772674d0852a046548",
"text": "All existing image enhancement methods, such as HDR tone mapping, cannot recover A/D quantization losses due to insufficient or excessive lighting, (underflow and overflow problems). The loss of image details due to A/D quantization is complete and it cannot be recovered by traditional image processing methods, but the modern data-driven machine learning approach offers a much needed cure to the problem. In this work we propose a novel approach to restore and enhance images acquired in low and uneven lighting. First, the ill illumination is algorithmically compensated by emulating the effects of artificial supplementary lighting. Then a DCNN trained using only synthetic data recovers the missing detail caused by quantization.",
"title": ""
},
{
"docid": "e6ca2014fa8b6717c1159baa39cd8b8e",
"text": "The ability to walk contributes considerably to physical health and overall well-being, particularly in children with motor disability, and is therefore prioritized as a rehabilitation goal. However, half of ambulatory children with cerebral palsy (CP), the most prevalent childhood movement disorder, cease to walk in adulthood. Robotic gait trainers have shown positive outcomes in initial studies, but these clinic-based systems are limited to short-term programs of insufficient length to maintain improved function in a lifelong disability such as CP. Sophisticated wearable exoskeletons are now available, but their utility in treating childhood movement disorders remains unknown. We evaluated an exoskeleton for the treatment of crouch (or flexed-knee) gait, one of the most debilitating pathologies in CP. We show that the exoskeleton reduced crouch in a cohort of ambulatory children with CP during overground walking. The exoskeleton was safe and well tolerated, and all children were able to walk independently with the device. Rather than guiding the lower limbs, the exoskeleton dynamically changed the posture by introducing bursts of knee extension assistance during discrete portions of the walking cycle, a perturbation that resulted in maintained or increased knee extensor muscle activity during exoskeleton use. Six of seven participants exhibited postural improvements equivalent to outcomes reported from invasive orthopedic surgery. We also demonstrate that improvements in crouch increased over the course of our multiweek exploratory trial. Together, these results provide evidence supporting the use of wearable exoskeletons as a treatment strategy to improve walking in children with CP.",
"title": ""
},
{
"docid": "d5b20e250e28cae54a7f3c868f342fc5",
"text": "Context: Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning. Software clones may lead to bug propagation and serious maintenance problems. Objective: This study reports an extensive systematic literature review of software clones in general and software clone detection in particular. Method: We used the standard systematic literature review method based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and",
"title": ""
},
{
"docid": "9e8650a5375a679452948c47504881a8",
"text": "Given a graph G we define the betweenness centrality of a node v in V as the fraction of shortest paths between all node pairs in V that contain v. For this setting we describe Brandes++, a divide-and-conquer algorithm that can efficiently compute the exact values of betweenness scores. Brandes++ uses Brandes– the most widelyused algorithm for betweenness computation – as its subroutine. It achieves the notable faster running times by applying Brandes on significantly smaller networks than the input graph, and many of its computations can be done in parallel. The degree of speedup achieved by Brandes++ depends on the community structure of the input network. Our experiments with real-life networks reveal Brandes++ achieves an average of 10-fold speedup over Brandes, while there are networks where this speedup is 75-fold. We have made our code public to benefit the research community.",
"title": ""
},
{
"docid": "cc8b0cd938bc6315864925a7a057e211",
"text": "Despite the continuous growth in the number of smartphones around the globe, Short Message Service (SMS) still remains as one of the most popular, cheap and accessible ways of exchanging text messages using mobile phones. Nevertheless, the lack of security in SMS prevents its wide usage in sensitive contexts such as banking and health-related applications. Aiming to tackle this issue, this paper presents SMSCrypto, a framework for securing SMS-based communications in mobile phones. SMSCrypto encloses a tailored selection of lightweight cryptographic algorithms and protocols, providing encryption, authentication and signature services. The proposed framework is implemented both in Java (target at JVM-enabled platforms) and in C (for constrained SIM Card processors) languages, thus being suitable",
"title": ""
},
{
"docid": "d5a1901a046763c7d6cf5a09b8838caf",
"text": "Distributional similarity is a classic technique for entity set expansion, where the system is given a set of seed entities of a particular class, and is asked to expand the set using a corpus to obtain more entities of the same class as represented by the seeds. This paper shows that a machine learning model called positive and unlabeled learning (PU learning) can model the set expansion problem better. Based on the test results of 10 corpora, we show that a PU learning technique outperformed distributional similarity significantly.",
"title": ""
},
{
"docid": "101c03b85e3cc8518a158d89cc9b3b39",
"text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.",
"title": ""
},
{
"docid": "7a1a9ed8e9a6206c3eaf20da0c156c14",
"text": "Formal modeling rules can be used to ensure that an enterprise architecture is correct. Despite their apparent utility and despite mature tool support, formal modelling rules are rarely, if ever, used in practice in enterprise architecture in industry. In this paper we propose a rule authoring method that we believe aligns with actual modelling practice, at least as witnessed in enterprise architecture projects at the Swedish Defence Materiel Administration. The proposed method follows the business rules approach: the rules are specified in a (controlled) natural language which makes them accessible to all stakeholders and easy to modify as the meta-model matures and evolves over time. The method was put to test during 2014 in two large scale enterprise architecture projects, and we report on the experiences from that. To the best of our knowledge, this is the first time extensive formal modelling rules for enterprise architecture has been tested in industry and reported in the",
"title": ""
}
] |
scidocsrr
|
b8d6292b10b684f88c40f1d142d71b08
|
On cognitive small cells in two-tier heterogeneous networks
|
[
{
"docid": "804139352206af823bc8bae12789c416",
"text": "In a two-tier heterogeneous network (HetNet) where femto access points (FAPs) with lower transmission power coexist with macro base stations (BSs) with higher transmission power, the FAPs may suffer significant performance degradation due to inter-tier interference. Introducing cognition into the FAPs through the spectrum sensing (or carrier sensing) capability helps them avoiding severe interference from the macro BSs and enhance their performance. In this paper, we use stochastic geometry to model and analyze performance of HetNets composed of macro BSs and cognitive FAPs in a multichannel environment. The proposed model explicitly accounts for the spatial distribution of the macro BSs, FAPs, and users in a Rayleigh fading environment. We quantify the performance gain in outage probability obtained by introducing cognition into the femto-tier, provide design guidelines, and show the existence of an optimal spectrum sensing threshold for the cognitive FAPs, which depends on the HetNet parameters. We also show that looking into the overall performance of the HetNets is quite misleading in the scenarios where the majority of users are served by the macro BSs. Therefore, the performance of femto-tier needs to be explicitly accounted for and optimized.",
"title": ""
}
] |
[
{
"docid": "bd19395492dfbecd58f5cfd56b0d00a7",
"text": "The ubiquity of the various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit the individuals and communities greatly, the collection and analysis of the participators' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participators' location privacy, and few are done on participators' trajectory privacy. The effective analysis on trajectories that contain spatial-temporal history information will reveal participators' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model with considering the time factor from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participators' trajectories privacy effectively with lower information loss and costs than what is afforded by the other proposals.",
"title": ""
},
{
"docid": "c071d5a7ff1dbfd775e9ffdee1b07662",
"text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.",
"title": ""
},
{
"docid": "70ef01e33f48a52455141c3fa9130b01",
"text": "The Physical Appearance Comparison Scale (PACS; Thompson, Heinberg, & Tantleff, 1991) was revised to assess appearance comparisons relevant to women and men in a wide variety of contexts. The revised scale (Physical Appearance Comparison Scale-Revised, PACS-R) was administered to 1176 college females. In Study 1, exploratory factor analysis and parallel analysis using one half of the sample suggested a single factor structure for the PACS-R. Study 2 utilized the remaining half of the sample to conduct confirmatory factor analysis, item analysis, and to examine the convergent validity of the scale. These analyses resulted in an 11-item measure that demonstrated excellent internal consistency and convergent validity with measures of body satisfaction, eating pathology, sociocultural influences on appearance, and self-esteem. Regression analyses demonstrated the utility of the PACS-R in predicting body satisfaction and eating pathology. Overall, results indicate that the PACS-R is a reliable and valid tool for assessing appearance comparison tendencies in women.",
"title": ""
},
{
"docid": "ba203abd0bd55fc9d06fe979a604d741",
"text": "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on largescale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.",
"title": ""
},
{
"docid": "2edababb2f442f6ae93604170ef0a44b",
"text": "The aim of the research, is to examine the relationship between adolescents' five-factor personality features by use of Social Media. As for sample, there are 548 girl and 441 boy students and they are between the ages of 11-18. Adolescents’ data participating in the study, are determined by Big Five Factor personality traits Scale. Prepared data on the use of social media called \"Personal Information Form\" has been obtained by researcher. In the analysis of data, understanding of social media use times whether it differs according to big five personality traits, According to the social media using time, there was no significant difference between the agreeableness and openness subscales. On the other hand, there is a significant differences between conscientiousness, extraversion and neuroticism. In association with five personality traits of social media purpose, it was found that there are significant differences with different personality traits for each purpose.",
"title": ""
},
{
"docid": "357ff730c3d0f8faabe1fa14d4b04463",
"text": "In this paper, we propose a novel two-stage video captioning framework composed of 1) a multi-channel video encoder and 2) a sentence-generating language decoder. Both of the encoder and decoder are based on recurrent neural networks with long-short-term-memory cells. Our system can take videos of arbitrary lengths as input. Compared with the previous sequence-to-sequence video captioning frameworks, the proposed model is able to handle multiple channels of video representations and jointly learn how to combine them. The proposed model is evaluated on two large-scale movie datasets (MPII Corpus and Montreal Video Description) and one YouTube dataset (Microsoft Video Description Corpus) and achieves the state-of-the-art performances. Furthermore, we extend the proposed model towards automatic American Sign Language recognition. To evaluate the performance of our model on this novel application, a new dataset for ASL video description is collected based on YouTube videos. Results on this dataset indicate that the proposed framework on ASL recognition is promising and will significantly benefit the independent communication between ASL users and",
"title": ""
},
{
"docid": "bc4fa6a77bf0ea02456947696dc6dca3",
"text": "We propose a constraint programming approach for the optimization of inventory routing in the liquefied natural gas industry. We present two constraint programming models that rely on a disjunctive scheduling representation of the problem. We also propose an iterative search heuristic to generate good feasible solutions for these models. Computational results on a set of largescale test instances demonstrate that our approach can find better solutions than existing approaches based on mixed integer programming, while being 4 to 10 times faster on average.",
"title": ""
},
{
"docid": "ce3f7214e8ad4a29efa8c04fc8fa3a4b",
"text": "Recognition of social signals, from human facial expressions or prosody of speech, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community that investigates user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users’ impressions of the robot after a conversation. We find that happiness in the user’s recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as number of human turns or number of sentences per robot utterance) correlate with perceiving a robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, while linguistic and non-linguistic features more often predict perceived robot intelligence and interpretability. As such, these characteristics may in future be used as an online reward signal for in-situ Reinforcement Learningbased adaptive human-robot dialogue systems. Figure 1: Left: a live view of experimental setup showing a participant interacting with Pepper. Right: a diagram of experimental setup showing the participant (green) and the robot (white) positioned face to face. The scene was recorded by cameras (triangles C) from the robot’s perspective focusing on the face of the participant and from the side, showing the whole scene. The experimenter (red) was seated behind a divider.",
"title": ""
},
{
"docid": "b23db18b30963ae3b7000e75306d4c69",
"text": "State-of-the-art semantic segmentation approaches increase the receptive field of their models by using either a downsampling path composed of poolings/strided convolutions or successive dilated convolutions. However, it is not clear which operation leads to best results. In this paper, we systematically study the differences introduced by distinct receptive field enlargement methods and their impact on the performance of a novel architecture, called Fully Convolutional DenseResNet (FC-DRN). FC-DRN has a densely connected backbone composed of residual networks. Following standard image segmentation architectures, receptive field enlargement operations that change the representation level are interleaved among residual networks. This allows the model to exploit the benefits of both residual and dense connectivity patterns, namely: gradient flow, iterative refinement of representations, multi-scale feature combination and deep supervision. In order to highlight the potential of our model, we test it on the challenging CamVid urban scene understanding benchmark and make the following observations: 1) downsampling operations outperform dilations when the model is trained from scratch, 2) dilations are useful during the finetuning step of the model, 3) coarser representations require less refinement steps, and 4) ResNets (by model construction) are good regularizers, since they can reduce the model capacity when needed. Finally, we compare our architecture to alternative methods and report state-of-the-art result on the Camvid dataset, with at least twice fewer parameters.",
"title": ""
},
{
"docid": "3e06d3b5ca50bf4fcd9d354a149dd40c",
"text": "In this paper, the classification via sprepresentation and multitask learning is presented for target recognition in SAR image. To capture the characteristics of SAR image, a multidimensional generalization of the analytic signal, namely the monogenic signal, is employed. The original signal can be then orthogonally decomposed into three components: 1) local amplitude; 2) local phase; and 3) local orientation. Since the components represent the different kinds of information, it is beneficial by jointly considering them in a unifying framework. However, these components are infeasible to be directly utilized due to the high dimension and redundancy. To solve the problem, an intuitive idea is to define an augmented feature vector by concatenating the components. This strategy usually produces some information loss. To cover the shortage, this paper considers three components into different learning tasks, in which some common information can be shared. Specifically, the component-specific feature descriptor for each monogenic component is produced first. Inspired by the recent success of multitask learning, the resulting features are then fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks. The inference is reached in terms of the total reconstruction error accumulated from all tasks. The novelty of this paper includes 1) the development of three component-specific feature descriptors; 2) the introduction of multitask learning into sparse representation model; 3) the numerical implementation of proposed method; and 4) extensive comparative experimental studies on MSTAR SAR dataset, including target recognition under standard operating conditions, as well as extended operating conditions, and the capability of outliers rejection.",
"title": ""
},
{
"docid": "12f8414a2cadd222c31805de8bb3ed87",
"text": "In this paper we explore functions of bounded variation. We discuss properties of functions of bounded variation and consider three related topics. The related topics are absolute continuity, arc length, and the Riemann-Stieltjes integral.",
"title": ""
},
{
"docid": "f043acf163d787c4a53924515b509aba",
"text": "A two-wheeled self-balancing robot is a special type of wheeled mobile robot, its balance problem is a hot research topic due to its unstable state for controlling. In this paper, human transporter model has been established. Kinematic and dynamic models are constructed and two control methods: Proportional-integral-derivative (PID) and Linear-quadratic regulator (LQR) are implemented to test the system model in which controls of two subsystems: self-balance (preventing system from falling down when it moves forward or backward) and yaw rotation (steering angle regulation when it turns left or right) are considered. PID is used to control both two subsystems, LQR is used to control self-balancing subsystem only. By using simulation in Matlab, two methods are compared and discussed. The theoretical investigations for controlling the dynamic behavior are meaningful for design and fabrication. Finally, the result shows that LQR has a better performance than PID for self-balancing subsystem control.",
"title": ""
},
{
"docid": "ec4dcce4f53e38909be438beeb62b1df",
"text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.",
"title": ""
},
{
"docid": "05cf044dcb3621a0190403a7961ecb00",
"text": "This paper describes a real-time beat tracking system that recognizes a hierarchical beat structure comprising the quarter-note, half-note, and measure levels in real-world audio signals sampled from popular-music compact discs. Most previous beat-tracking systems dealt with MIDI signals and had difficulty in processing, in real time, audio signals containing sounds of various instruments and in tracking beats above the quarter-note level. The system described here can process music with drums and music without drums and can recognize the hierarchical beat structure by using three kinds of musical knowledge: of onset times, of chord changes, and of drum patterns. This paper also describes several applications of beat tracking, such as beat-driven real-time computer graphics and lighting control.",
"title": ""
},
{
"docid": "572867885a16afc0af6a8ed92632a2a7",
"text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.",
"title": ""
},
{
"docid": "dd86d2530dfa9a44b84d85b9db18e200",
"text": "In order to extract entities of a fine-grained category from semi-structured data in web pages, existing information extraction systems rely on seed examples or redundancy across multiple web pages. In this paper, we consider a new zero-shot learning task of extracting entities specified by a natural language query (in place of seeds) given only a single web page. Our approach defines a log-linear model over latent extraction predicates, which select lists of entities from the web page. The main challenge is to define features on widely varying candidate entity lists. We tackle this by abstracting list elements and using aggregate statistics to define features. Finally, we created a new dataset of diverse queries and web pages, and show that our system achieves significantly better accuracy than a natural baseline.",
"title": ""
},
{
"docid": "b5fd22854e75a29507cde380999705a2",
"text": "This study presents a high-efficiency-isolated single-input multiple-output bidirectional (HISMB) converter for a power storage system. According to the power management, the proposed HISMB converter can operate at a step-up state (energy release) and a step-down state (energy storage). At the step-up state, it can boost the voltage of a low-voltage input power source to a high-voltage-side dc bus and middle-voltage terminals. When the high-voltage-side dc bus has excess energy, one can reversely transmit the energy. The high-voltage dc bus can take as the main power, and middle-voltage output terminals can supply powers for individual middle-voltage dc loads or to charge auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based HISMB converter accomplishes the bidirectional power control with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. As a result, the energy of the leakage inductor of the coupled inductor can be recycled and released to the high-voltage-side dc bus and auxiliary power sources, and the voltage stresses on power switches can be greatly reduced. Moreover, the switching losses can be significantly decreased because of all power switches with zero-voltage-switching features. Therefore, the objectives of high-efficiency power conversion, electric isolation, bidirectional energy transmission, and various output voltage with different levels can be obtained. The effectiveness of the proposed HISMB converter is verified by experimental results of a kW-level prototype in practical applications.",
"title": ""
},
{
"docid": "c9380c87222af7c9f4116cc02a68060c",
"text": "Biatriospora (Ascomycota: Pleosporales, Biatriosporaceae) is a genus with unexplored diversity and poorly known ecology. This work expands the Biatriospora taxonomic and ecological concept by describing four new species found as endophytes of woody plants in temperate forests of the Czech Republic and in tropical regions, including Amazonia. Ribosomal DNA sequences, together with protein-coding genes (RPB2, EF1α), growth rates and morphology, were used for species delimitation and description. Ecological data gathered by this and previous studies and the inclusion of sequences deposited in public databases show that Biatriospora contains species that are endophytes of angiosperms in temperate and tropical regions as well as species that live in marine or estuarine environments. These findings show that this genus is more diverse and has more host associations than has been described previously. The possible adaptations enabling the broad ecological range of these fungi are discussed. Due to the importance that Biatriospora species have in bioprospecting natural products, we suggest that the species introduced here warrant further investigation.",
"title": ""
},
{
"docid": "7d7ea6239106f614f892701e527122e2",
"text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.",
"title": ""
}
] |
scidocsrr
|
44a0e2c56291d034dbeadad0624ca402
|
Image Transformer
|
[
{
"docid": "e26dcac5bd568b70f41d17925593e7ef",
"text": "Autoregressive generative models achieve the best results in density estimation tasks involving high dimensional data, such as images or audio. They pose density estimation as a sequence modeling task, where a recurrent neural network (RNN) models the conditional distribution over the next element conditioned on all previous elements. In this paradigm, the bottleneck is the extent to which the RNN can model long-range dependencies, and the most successful approaches rely on causal convolutions. Taking inspiration from recent work in meta reinforcement learning, where dealing with long-range dependencies is also essential, we introduce a new generative model architecture that combines causal convolutions with self attention. In this paper, we describe the resulting model and present state-of-the-art log-likelihood results on heavily benchmarked datasets: CIFAR-10 (2.85 bits per dim), 32× 32 ImageNet (3.80 bits per dim) and 64 × 64 ImageNet (3.52 bits per dim). Our implementation will be made available at anonymized.",
"title": ""
},
{
"docid": "15ce175cc7aa263ded19c0ef344d9a61",
"text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"title": ""
},
{
"docid": "e10b5a0363897f6e7cbb128a4d2f7cd7",
"text": "We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator’s objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.",
"title": ""
}
] |
[
{
"docid": "c3c58760970768b9a839184f9e0c5b29",
"text": "The anatomic structures in the female that prevent incontinence and genital organ prolapse on increases in abdominal pressure during daily activities include sphincteric and supportive systems. In the urethra, the action of the vesical neck and urethral sphincteric mechanisms maintains urethral closure pressure above bladder pressure. Decreases in the number of striated muscle fibers of the sphincter occur with age and parity. A supportive hammock under the urethra and vesical neck provides a firm backstop against which the urethra is compressed during increases in abdominal pressure to maintain urethral closure pressures above the rapidly increasing bladder pressure. This supporting layer consists of the anterior vaginal wall and the connective tissue that attaches it to the pelvic bones through the pubovaginal portion of the levator ani muscle, and the uterosacral and cardinal ligaments comprising the tendinous arch of the pelvic fascia. At rest the levator ani maintains closure of the urogenital hiatus. They are additionally recruited to maintain hiatal closure in the face of inertial loads related to visceral accelerations as well as abdominal pressurization in daily activities involving recruitment of the abdominal wall musculature and diaphragm. Vaginal birth is associated with an increased risk of levator ani defects, as well as genital organ prolapse and urinary incontinence. Computer models indicate that vaginal birth places the levator ani under tissue stretch ratios of up to 3.3 and the pudendal nerve under strains of up to 33%, respectively. Research is needed to better identify the pathomechanics of these conditions.",
"title": ""
},
{
"docid": "bacb761bc173a07bf13558e2e5419c2b",
"text": "Rejection sensitivity is the disposition to anxiously expect, readily perceive, and intensely react to rejection. In response to perceived social exclusion, highly rejection sensitive people react with increased hostile feelings toward others and are more likely to show reactive aggression than less rejection sensitive people in the same situation. This paper summarizes work on rejection sensitivity that has provided evidence for the link between anxious expectations of rejection and hostility after rejection. We review evidence that rejection sensitivity functions as a defensive motivational system. Thus, we link rejection sensitivity to attentional and perceptual processes that underlie the processing of social information. A range of experimental and diary studies shows that perceiving rejection triggers hostility and aggressive behavior in rejection sensitive people. We review studies that show that this hostility and reactive aggression can perpetuate a vicious cycle by eliciting rejection from those who rejection sensitive people value most. Finally, we summarize recent work suggesting that this cycle can be interrupted with generalized self-regulatory skills and the experience of positive, supportive relationships.",
"title": ""
},
{
"docid": "b6ef6733f10fd282fb5aefc1f676b51c",
"text": "An electronic business model is an important baseline for the development of e-commerce system applications. Essentially, it provides the design rationale for e-commerce systems from the business point of view. However, how an e-business model must be defined and specified is a largely open issue. Business decision makers tend to use the notion in a highly informal way, and usually there is a big gap between the business view and that of IT developers. Nevertheless, we show that conceptual modelling techniques from IT provide very useful tools for precisely pinning down what e-business models actually are, as well as for their structured specification. We therefore present a (lightweight) ontology of what should be in an e-business model. The key idea we propose and develop is that an e-business model ontology centers around the core concept of value, and expresses how value is created, interpreted and exchanged within a multi-party stakeholder network. Our e-business model ontology is part of a wider methodology for e-business modelling, called e3-valueTM , that is currently under development. It is based on a variety of industrial applications we are involved in, and it is illustrated by discussing a free Internet access service as an example.",
"title": ""
},
{
"docid": "79d5cb45b36a707727ecfcae0a091498",
"text": "We use 810 versions of the Linux kernel, released over a perio d of 14 years, to characterize the system’s evolution, using Lehman’s laws of software evolut i n as a basis. We investigate different possible interpretations of these laws, as reflected by diff erent metrics that can be used to quantify them. For example, system growth has traditionally been qua tified using lines of code or number of functions, but functional growth of an operating system l ike Linux can also be quantified using the number of system calls. In addition we use the availabili ty of the source code to track metrics, such as McCabe’s cyclomatic complexity, that have not been tr acked across so many versions previously. We find that the data supports several of Lehman’ s l ws, mainly those concerned with growth and with the stability of the process. We also make som e novel observations, e.g. that the average complexity of functions is decreasing with time, bu t this is mainly due to the addition of many small functions.",
"title": ""
},
{
"docid": "d527daf7ae59c7bcf0989cad3183efbe",
"text": "In today’s Web, Web services are created and updated on the fly. It’s already beyond the human ability to analysis them and generate the composition plan manually. A number of approaches have been proposed to tackle that problem. Most of them are inspired by the researches in cross-enterprise workflow and AI planning. This paper gives an overview of recent research efforts of automatic Web service composition both from the workflow and AI planning research community.",
"title": ""
},
{
"docid": "ca7efaff6d1ec3fa91e6812600b15121",
"text": "Existing approximate nearest neighbor search systems suffer from two fundamental problems that are of practical importance but have not received sufficient attention from the research community. First, although existing systems perform well for the whole database, it is difficult to run a search over a subset of the database. Second, there has been no discussion concerning the performance decrement after many items have been newly added to a system. We develop a reconfigurable inverted index (Rii) to resolve these two issues. Based on the standard IVFADC system, we design a data layout such that items are stored linearly. This enables us to efficiently run a subset search by switching the search method to a linear PQ scan if the size of a subset is small. Owing to the linear layout, the data structure can be dynamically adjusted after new items are added, maintaining the fast speed of the system. Extensive comparisons show that Rii achieves a comparable performance with state-of-the art systems such as Faiss.",
"title": ""
},
{
"docid": "1caf2d15e1f9c6fcacfcb46d8fdfc5b3",
"text": "Content Delivery Networks (CDNs) [79, 97] have received considerable research attention in the recent past. A few studies have investigated CDNs to categorize and analyze them, and to explore the uniqueness, weaknesses, opportunities, and future directions in this field. Peng presents an overview of CDNs [75]. His work describes the critical issues involved in designing and implementing an effective CDN, and surveys the approaches proposed in literature to address these problems. Vakali et al. [95] present a survey of CDN architecture and popular CDN service providers. The survey is focused on understanding the CDN framework and its usefulness. They identify the characteristics and current practices in the content networking domain, and present an evolutionary pathway for CDNs, in order to exploit the current content networking trends. Dilley et al. [29] provide an insight into the overall system architecture of the leading CDN, Akamai [1]. They provide an overview of the existing content delivery approaches and describe Akamai’s network infrastructure and its operations in detail. They also point out the technical challenges that are to be faced while constructing a global CDN like Akamai. Saroiu et al. [84] examine content delivery from the point of view of four content delivery systems: Hypertext Transfer Protocol (HTTP) Web traffic, the Akamai CDN, Gnutella [8, 25], and KaZaa [62, 66] peer-to-peer file sharing systems. They also present significant implications for large organizations, service providers, network infrastructure providers, and general content delivery providers. Kung et al. [60] describe a taxonomy for content networks and introduce a new class of content networks that perform “semantic aggregation and content-sensitive placement” of content. They classify content networks based on their attributes in two dimensions: content aggregation and content placement. Sivasubramanian et al. [89] identify the issues",
"title": ""
},
{
"docid": "552baf04d696492b0951be2bb84f5900",
"text": "We examined whether reduced perceptual specialization underlies atypical perception in autism spectrum disorder (ASD) testing classifications of stimuli that differ either along integral dimensions (prototypical integral dimensions of value and chroma), or along separable dimensions (prototypical separable dimensions of value and size). Current models of the perception of individuals with an ASD would suggest that on these tasks, individuals with ASD would be as, or more, likely to process dimensions as separable, regardless of whether they represented separable or integrated dimensions. In contrast, reduced specialization would propose that individuals with ASD would respond in a more integral manner to stimuli that differ along separable dimensions, and at the same time, respond in a more separable manner to stimuli that differ along integral dimensions. A group of nineteen adults diagnosed with high functioning ASD and seventeen typically developing participants of similar age and IQ, were tested on speeded and restricted classifications tasks. Consistent with the reduced specialization account, results show that individuals with ASD do not always respond more analytically than typically developed (TD) observers: Dimensions identified as integral for TD individuals evoke less integral responding in individuals with ASD, while those identified as separable evoke less analytic responding. These results suggest that perceptual representations are more broadly tuned and more flexibly represented in ASD. Autism Res 2017, 10: 1510-1522. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "5f39990b87532cd3189c7d4adb2cd144",
"text": "The abundance of data in the context of smart cities yields huge potential for data-driven businesses but raises unprecedented challenges on data privacy and security. Some of these challenges can be addressed merely through appropriate technical measures, while other issues can only be solved through strategic organizational decisions. In this paper, we present few cases from a real smart city project. We outline some exemplary data analytics scenarios and describe the measures that we adopt for a secure handling of data. Finally, we show how the chosen solutions impact the awareness of the public and acceptability of the project.",
"title": ""
},
{
"docid": "d647410661f83652e2a1be51c7ec878b",
"text": "The objective of this study was to assess the effect of the probiotic Lactobacillus murinus native strain (LbP2) on general clinical parameters of dogs with distemper-associated diarrhea. Two groups of dogs over 60 d of age with distemper and diarrhea were used in the study, which was done at the Animal Hospital of the Veterinary Faculty of the University of Uruguay, Montevideo, Uruguay. The dogs were treated orally each day for 5 d with the probiotic or with a placebo (vehicle without bacteria). Clinical parameters were assessed and scored according to a system specially designed for this study. Blood parameters were also measured. Administration of the probiotic significantly improved the clinical score of the patients, whereas administration of the placebo did not. Stool output, fecal consistency, mental status, and appetite all improved in the probiotic-treated dogs. These results support previous findings of beneficial effects with the probiotic L. murinus LbP2 in dogs. Thus, combined with other therapeutic measures, probiotic treatment appears to be promising for the management of canine distemper-associated diarrhea.",
"title": ""
},
{
"docid": "35b286999957396e1f5cab6e2370ed88",
"text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.",
"title": ""
},
{
"docid": "ab36fe1484f2ad3c9ffc6514bf1c56c5",
"text": "The design of array antenna is vital study for today’s Wireless communication system to achieve higher gain, highly directional beam and also to counteract the effect of fading while signal propagates through various corrupted environments. In this paper, the design and analysis of a 2x4 microstrip patch antenna array is introduced and a rat-race coupler is incorporated. The antenna array is designed to function in the C-band and is used to receive signals from the telemetry link of an Unmanned Air Vehicle. The transmitter in the aircraft radiates two other directional beams adjacent to the main lobe, called the left lobe (L) and the right lobe (R). The rat race coupler generates the sum and difference patterns by adding and subtracting the left lobe signals with the right lobe signals respectively to generate L+R and L-R signals. The array of square patch antenna provides frequency close to the designed operating frequency with an acceptable Directivity and Gain. The proposed antenna array is a high gain, low-cost, low weight Ground Control Station (GCS) antenna. This paper, aims at a VSWR less than 2 and bandwidth greater than 50 MHz and a high antenna gain. The simulation has been done by using Advanced Design System (A.D.S) software. Keywords— 2x4 microstrip patch antenna, Rat-race coupler, Inset feed, Square patch antenna",
"title": ""
},
{
"docid": "1d9bbafaf43b9b8f5fb3b90bb73782c0",
"text": "User experience (UX), as an immature research area, is still haunted by the challenges of defining the scope of UX in general and operationalising experiential qualities in particular. To explore the basic question whether UX constructs are measurable, we conducted semi-structured interviews with eleven UX professionals where a set of questions in relation to UX measurement were explored. The interviewees expressed scepticism as well as ambivalence towards UX measures and shared anecdotes related to such measures in different contexts. Besides, the data suggested that design-oriented UX professionals tended to be sceptical about UX measurement. To examine whether such an attitude prevailed in the HCI community, we conducted a survey with essentially the same set of questions used in the interviews. Altogether 367 responses were received; 170 of them were valid and analysed. The survey provided empirical evidence on this issue as a baseline for progress in UX measurement. Overall, results indicated that attitude was favourable and there were nuanced views on details of UX measurement, implying good prospects for its acceptance, given further progress in research and education in UX measurement where UX modelling grounded in theories can play a crucial role. Mutual recognition of the value of objective measures and subjective accounts of user experience can enhance the maturity of this area.",
"title": ""
},
{
"docid": "0b7ee38a5779c35249e2c9acf2a985ff",
"text": "The joint optimization of representation learning and clustering in the embedding space has experienced a breakthrough in recent years. In spite of the advance, clustering with representation learning has been limited to flat-level categories, which often involves cohesive clustering with a focus on instance relations. To overcome the limitations of flat clustering, we introduce hierarchically-clustered representation learning (HCRL), which simultaneously optimizes representation learning and hierarchical clustering in the embedding space. Compared with a few prior works, HCRL firstly attempts to consider a generation of deep embeddings from every component of the hierarchy, not just leaf components. In addition to obtaining hierarchically clustered embeddings, we can reconstruct data by the various abstraction levels, infer the intrinsic hierarchical structure, and learn the level-proportion features. We conducted evaluations with image and text domains, and our quantitative analyses showed competent likelihoods and the best accuracies compared with the baselines.",
"title": ""
},
{
"docid": "2855bbb6cfb91c11e6b7d7ab669ca912",
"text": "Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottleneck, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinguished and require new computational and statistical paradigm. This article gives overviews on the salient features of Big Data and how these features impact on paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on the Big Data analysis and computation. In particular, we emphasize on the viability of the sparsest solution in high-confidence set and point out that exogeneous assumptions in most statistical methods for Big Data can not be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.",
"title": ""
},
{
"docid": "1c0eaeea7e1bfc777bb6e391eb190b59",
"text": "We review machine learning (ML)-based optical performance monitoring (OPM) techniques in optical communications. Recent applications of ML-assisted OPM in different aspects of fiber-optic networking including cognitive fault detection and management, network equipment failure prediction, and dynamic planning and optimization of software-defined networks are also discussed.",
"title": ""
},
{
"docid": "ee631c4cff3ff6ae99e1afa1ba4788d3",
"text": "Teleoperation can be improved if humans and robots work as partners, exchanging information and assisting one another to achieve common goals. In this paper, we discuss the importance of collaboration and dialogue in human-robot systems. We then present collaborative control, a system model in which human and robot collaborate, and describe its use in vehicle teleoperation.",
"title": ""
},
{
"docid": "313fd10dd4976448a99a40c0d75b4015",
"text": "This paper introduces distributional semantic similarity methods for automatically measuring the coherence of a set of words generated by a topic model. We construct a semantic space to represent each topic word by making use of Wikipedia as a reference corpus to identify context features and collect frequencies. Relatedness between topic words and context features is measured using variants of Pointwise Mutual Information (PMI). Topic coherence is determined by measuring the distance between these vectors computed using a variety of metrics. Evaluation on three data sets shows that the distributional-based measures outperform the state-of-the-art approach for this task.",
"title": ""
},
{
"docid": "20daad42c2587043562f3864f9e888c2",
"text": "In recent years, deep neural network approaches have naturally extended to the video domain, in their simplest case by aggregating per-frame classifications as a baseline for action recognition. A majority of the work in this area extends from the imaging domain, leading to visual-feature heavy approaches on temporal data. To address this issue we introduce “Let’s Dance”, a 1000 video dataset (and growing) comprised of 10 visually overlapping dance categories that require motion for their classification. We stress the important of human motion as a key distinguisher in our work given that, as we show in this work, visual information is not sufficient to classify motion-heavy categories. We compare our datasets’ performance using imaging techniques with UCF-101 and demonstrate this inherent difficulty. We present a comparison of numerous state-of-theart techniques on our dataset using three different representations (video, optical flow and multi-person pose data) in order to analyze these approaches. We discuss the motion parameterization of each of them and their value in learning to categorize online dance videos. Lastly, we release this dataset (and its three representations) for the research community to use.",
"title": ""
},
{
"docid": "1d507afcd430b70944bd7f460ee90277",
"text": "Moringa oleifera, or the horseradish tree, is a pan-tropical species that is known by such regional names as benzolive, drumstick tree, kelor, marango, mlonge, mulangay, nébéday, saijhan, and sajna. Over the past two decades, many reports have appeared in mainstream scientific journals describing its nutritional and medicinal properties. Its utility as a non-food product has also been extensively described, but will not be discussed herein, (e.g. lumber, charcoal, fencing, water clarification, lubricating oil). As with many reports of the nutritional or medicinal value of a natural product, there are an alarming number of purveyors of “healthful” food who are now promoting M. oleifera as a panacea. While much of this recent enthusiasm indeed appears to be justified, it is critical to separate rigorous scientific evidence from anecdote. Those who charge a premium for products containing Moringa spp. must be held to a high standard. Those who promote the cultivation and use of Moringa spp. in regions where hope is in short supply must be provided with the best available evidence, so as not to raise false hopes and to encourage the most fruitful use of scarce research capital. It is the purpose of this series of brief reviews to: (a) critically evaluate the published scientific evidence on M. oleifera, (b) highlight claims from the traditional and tribal medicinal lore and from non-peer reviewed sources that would benefit from further, rigorous scientific evaluation, and (c) suggest directions for future clinical research that could be carried out by local investigators in developing regions. This is the first of four planned papers on the nutritional, therapeutic, and prophylactic properties of Moringa oleifera. In this introductory paper, the scientific evidence for health effects are summarized in tabular format, and the strength of evidence is discussed in very general terms. A second paper will address a select few uses of Moringa in greater detail than they can be dealt with in the context of this paper. A third paper will probe the phytochemical components of Moringa in more depth. A fourth paper will lay out a number of suggested research projects that can be initiated at a very small scale and with very limited resources, in geographic regions which are suitable for Moringa cultivation and utilization. In advance of this fourth paper in the series, the author solicits suggestions and will gladly acknowledge contributions that are incorporated into the final manuscript. It is the intent and hope of the journal’s editors that such a network of small-scale, locally executed investigations might be successfully woven into a greater fabric which will have enhanced scientific power over similar small studies conducted and reported in isolation. Such an approach will have the added benefit that statistically sound planning, peer review, and multi-center coordination brings to a scientific investigation. Copyright: ©2005 Jed W. Fahey This is an Open Access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Contact: Jed W. Fahey Email: jfahey@jhmi.edu Received: September 15, 2005 Accepted: November 20, 2005 Published: December 1, 2005 The electronic version of this article is the complete one and can be found online at: http://www.TFLJournal.org/article.php/200512011",
"title": ""
}
] |
scidocsrr
|
a5675eb4ef1ca7f6ed0a268c40bc889a
|
Associative Compression Networks for Representation Learning
|
[
{
"docid": "146402a4b52f16b583e224cbf9a84119",
"text": "Many different methods to train deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call \"Variational Mixture of Posteriors\" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) with components given by variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two layer hierarchical model and show that this architecture with a coupled prior and posterior, learns significantly better models. The model also avoids the usual local optima issues related to useless latent dimensions that plague VAEs. We provide empirical studies on six datasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation invariant setting and the best results or comparable to SOTA methods for the approach with convolutional networks.",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
},
{
"docid": "a144b5969c30808f0314218248c48ed6",
"text": "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.",
"title": ""
}
] |
[
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
{
"docid": "cf643602fc07aacbbbd21f249c85b857",
"text": "We propose an architecture that uses NAND flash memory to reduce main memory power in web server platforms. Our architecture uses a two level file buffer cache composed of a relatively small DRAM, which includes a primary file buffer cache, and a flash memory secondary file buffer cache. Compared to a conventional DRAM-only architecture, our architecture consumes orders of magnitude less idle power while remaining cost effective. This is a result of using flash memory, which consumes orders of magnitude less idle power than DRAM and is twice as dense. The client request behavior in web servers, allows us to show that the primary drawbacks of flash memory?endurance and long write latencies?can easily be overcome. In fact the wear-level aware management techniques that we propose are not heavily used.",
"title": ""
},
{
"docid": "f7a15a02c1f5d92d54d408b270ad86f1",
"text": "Davis (2001) developed a cognitive-behavioral model of pathological Internet use (PIU) in which the availability, and awareness of the Internet, psychopathologies such as depression, social anxiety or substance abuse, and situational cues providing reinforcement of Internet usage behaviors, interact to produce maladaptive cognitions(Charlton & Danforth, 2007). This model posits that user's cognition is responsible for PIU, and ineffective and/or behavioral symptoms in turn. To date, there has been no comprehensive study to test Davis' model. Lee, Choi et al. (2007) took the idea of cognitivebehavioral perspective (Davis, 2001) as an approach to developing tests of behavioral symptoms and negative outcomes of PIU respectively. Since their work was focused on developing the two tests, building or testing models of explicating PIU was not the main task. Thus, we attempted to reanalyze their data and to empirically explore and test a model of PIU.",
"title": ""
},
{
"docid": "f9de372e5f46ea3f3a7739c36ad999b2",
"text": "The business model literature is both rich and rapidly-growing. Authors identify special-purpose business and eBusiness models – and, increasingly, develop taxonomies of business models types. But, in searching for a comparatively simple way to understand the components of a “typical” internet business model, as part of our work for the EC research project SimWeb, we found that these taxonomies had little overlap and offered only a modest assistance to smaller companies trying to identify their own business identity. In this paper, therefore, we present the preliminary results of a three-year research project into appropriate business models for the online news and music industries. Having identified the problems, we describe the general taxonomies and components of Internet business models found in the literature, and explain our own core + component framework for developing an internet business model – using the online news industry as our example. We show how a combination of core and complementary components can be combined by any news-providing organisation for its Internet business model on the basis to its specific needs, resources and changing circumstances – and illustrate the usefulness of our this framework by means of “mini-case” examples of regional online newspapers in Germany. Cornelia C. Krüger, Paula M.C. Swatman, Kornelia van der Beek",
"title": ""
},
{
"docid": "18a86d2660d01974530549081b796482",
"text": "The strive for efficient and cost-effective photovoltaic (PV) systems motivated the power electronic design developed here. The work resulted in a dc–dc converter for module integration and distributed maximum power point tracking (MPPT) with a novel adaptive control scheme. The latter is essential for the combined features of high energy efficiency and high power quality over a wide range of operating conditions. The switching frequency is optimally modulated as a function of solar irradiance for power conversion efficiency maximization. With the rise of irradiance, the frequency is reduced to reach the conversion efficiency target. A search algorithm is developed to determine the optimal switching frequency step. Reducing the switching frequency may, however, compromise MPPT efficiency. Furthermore, it leads to increased ripple content. Therefore, to achieve a uniform high power quality under all conditions, interleaved converter cells are adaptively activated. The overall cost is kept low by selecting components that allow for implementing the functions at low cost. Simulation results show the high value of the module integrated converter for dc standalone and microgrid applications. A 400-W prototype was implemented at 0.14 Euro/W. Testing showed efficiencies above 95 %, taking into account all losses from power conversion, MPPT, and measurement and control circuitry.",
"title": ""
},
{
"docid": "1bfe17bba2d4a846f5745283594c1464",
"text": "Software engineers need to be able to create, modify, and analyze knowledge stored in software artifacts. A significant amount of these artifacts contain natural language, like version control commit messages, source code comments, or bug reports. Integrated software development environments (IDEs) are widely used, but they are only concerned with structured software artifacts – they do not offer support for analyzing unstructured natural language and relating this knowledge with the source code. We present an integration of natural language processing capabilities into the Eclipse framework, a widely used software IDE. It allows to execute NLP analysis pipelines through the Semantic Assistants framework, a service-oriented architecture for brokering NLP services based on GATE. We demonstrate a number of semantic analysis services helpful in software engineering tasks, and evaluate one task in detail, the quality analysis of source code comments.",
"title": ""
},
{
"docid": "f8763404f21e3bea6744a3fb51838569",
"text": "Search engine advertising in the present day is a pronounced component of the Web. Choosing the appropriate and relevant ad for a particular query and positioning of the ad critically impacts the probability of being noticed and clicked. It also strategically impacts the revenue, the search engine shall generate from a particular Ad. Needless to say, showing the user an Ad that is relevant to his/her need greatly improves users satisfaction. For all the aforesaid reasons, its of utmost importance to correctly determine the click-through rate (CTR) of ads in a system. For frequently appearing ads, CTR is empirically measurable, but for the new ads, other means have to be devised. In this paper we propose and establish a model to predict the CTRs of advertisements adopting Logistic Regression as the effective framework for representing and constructing conditions and vulnerabilities among variables. Logistic Regression is a type of probabilistic statistical classification model that predicts a binary response from a binary predictor, based on one or more predictor variables. Advertisements that have the most elevated to be clicked are chosen using supervised machine learning calculation. We tested Logistic Regression algorithm on a one week advertisement data of size around 25 GB by considering position and impression as predictor variables. Using this prescribed model we were able to achieve around 90% accuracy for CTR estimation.",
"title": ""
},
{
"docid": "c19edad92404453ce7429f0169d7be9a",
"text": "In this paper, we introduce the idea of automatically illustrating complex sentences as multimodal summaries that combine pictures, structure and simplified compressed text. By including text and structure in addition to pictures, multimodal summaries provide additional clues of what happened, who did it, to whom and how, to people who may have difficulty reading or who are looking to skim quickly. We present ROC-MMS, a system for automatically creating multimodal summaries (MMS) of complex sentences by generating pictures, textual summaries and structure. We show that pictures alone are insufficient to help people understand most sentences, especially for readers who are unfamiliar with the domain. An evaluation of ROC-MMS in the Wikipedia domain illustrates both the promise and challenge of automatically creating multimodal summaries.",
"title": ""
},
{
"docid": "33bb646417d0ebbe01747b97323df5d0",
"text": "Semantic search or text-to-video search in video is a novel and challenging problem in information and multimedia retrieval. Existing solutions are mainly limited to text-to-text matching, in which the query words are matched against the user-generated metadata. This kind of text-to-text search, though simple, is of limited functionality as it provides no understanding about the video content. This paper presents a state-of-the-art system for event search without any user-generated metadata or example videos, known as text-to-video search. The system relies on substantial video content understanding and allows for searching complex events over a large collection of videos. The proposed text-to-video search can be used to augment the existing text-to-text search for video. The novelty and practicality are demonstrated by the evaluation in NIST TRECVID 2014, where the proposed system achieves the best performance. We share our observations and lessons in building such a state-of-the-art system, which may be instrumental in guiding the design of the future system for video search and analysis.",
"title": ""
},
{
"docid": "a4061dd189b3fc7f01211d3db46dfc27",
"text": "As the Web has been growing exponentially, it has become increasingly difficult to search for desired information. In recent years, many domain-specific (vertical) search tools have been developed to serve the information needs of specific fields. This paper describes two approaches to building a domain-specific search tool. We report our experience in building two different tools in the nanotechnology domain -- (1) a server-side search engine, and (2) a client-side search agent. The designs of the two search systems are presented and discussed, and their strengths and weaknesses are compared. Some future research directions are also discussed.",
"title": ""
},
{
"docid": "2f7e58974d4fff932edb82b57ca9464d",
"text": "Binary neural networks (BNN) have been studied extensively since they run dramatically faster at lower memory and power consumption than floating-point networks, thanks to the efficiency of bit operations. However, contemporary BNNs whose weights and activations are both single bits suffer from severe accuracy degradation. To understand why, we investigate the representation ability, speed and bias/variance of BNNs through extensive experiments. We conclude that the error of BNNs are predominantly caused by the intrinsic instability (training time) and non-robustness (train & test time). Inspired by this investigation, we propose the Binary Ensemble Neural Network (BENN) which leverages ensemble methods to improve the performance of BNNs with limited efficiency cost. While ensemble techniques have been broadly believed to be only marginally helpful for strong classifiers such as deep neural networks, our analysis and experiments show that they are naturally a perfect fit to boost BNNs. We find that our BENN, which is faster and more robust than state-of-the-art binary networks, can even surpass the accuracy of the full-precision floating number network with the same architecture.",
"title": ""
},
{
"docid": "d4cd6414a9edbd6f07b4a0b5f298e2ba",
"text": "Measuring Semantic Textual Similarity (STS), between words/ terms, sentences, paragraph and document plays an important role in computer science and computational linguistic. It also has many applications over several fields such as Biomedical Informatics and Geoinformation. In this paper, we present a survey on different methods of textual similarity and we also reported about the availability of different software and tools those are useful for STS. In natural language processing (NLP), STS is a important component for many tasks such as document summarization, word sense disambiguation, short answer grading, information retrieval and extraction. We split out the measures for semantic similarity into three broad categories such as (i) Topological/Knowledge-based (ii) Statistical/ Corpus Based (iii) String based. More emphasis is given to the methods related to the WordNet taxonomy. Because topological methods, plays an important role to understand intended meaning of an ambiguous word, which is very difficult to process computationally. We also propose a new method for measuring semantic similarity between sentences. This proposed method, uses the advantages of taxonomy methods and merge these information to a language model. It considers the WordNet synsets for lexical relationships between nodes/words and a uni-gram language model is implemented over a large corpus to assign the information content value between the two nodes of different classes.",
"title": ""
},
{
"docid": "efb6564dfeaba75f11c5c006562c8006",
"text": "We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.",
"title": ""
},
{
"docid": "3b26a62ec701f34c9876bd93c494412d",
"text": "Emotions affect many aspects of our daily lives including decision making, reasoning and physical wellbeing. Researchers have therefore addressed the detection of emotion from individuals' heart rate, skin conductance, pupil dilation, tone of voice, facial expression and electroencephalogram (EEG). This paper presents an algorithm for classifying positive and negative emotions from EEG. Unlike other algorithms that extract fuzzy rules from the data, the fuzzy rules used in this paper are obtained from emotion classification research reported in the literature and the classification output indicates both the type of emotion and its strength. The results show that the algorithm is more than 90 times faster than the widely used LIBSVM and the obtained average accuracy of 63.52 % is higher than previously reported using the same EEG dataset. This makes this algorithm attractive for real time emotion classification. In addition, the paper introduces a new oscillation feature computed from local minima and local maxima of the signal.",
"title": ""
},
{
"docid": "07e713880604e82559ccfeece0149228",
"text": "The modern research has found a variety of applications and systems with vastly varying requirements and characteristics in Wireless Sensor Networks (WSNs). The research has led to materialization of many application specific routing protocols which must be energy-efficient. As a consequence, it is becoming increasingly difficult to discuss the design issues requirements regarding hardware and software support. Implementation of efficient system in a multidisciplinary research such as WSNs is becoming very difficult. In this paper we discuss the design issues in routing protocols for WSNs by considering its various dimensions and metrics such as QoS requirement, path redundancy etc. The paper concludes by presenting",
"title": ""
},
{
"docid": "b7c9e2900423a0cd7cc21c3aa95ca028",
"text": "In this article, the state of the art of research on emotion work (emotional labor) is summarized with an emphasis on its effects on well-being. It starts with a definition of what emotional labor or emotion work is. Aspects of emotion work, such as automatic emotion regulation, surface acting, and deep acting, are discussed from an action theory point of view. Empirical studies so far show that emotion work has both positive and negative effects on health. Negative effects were found for emotional dissonance. Concepts related to the frequency of emotion expression and the requirement to be sensitive to the emotions of others had both positive and negative effects. Control and social support moderate relations between emotion work variables and burnout and job satisfaction. Moreover, there is empirical evidence that the cooccurrence of emotion work and organizational problems leads to high levels of burnout. D 2002 Published by Elsevier Science Inc.",
"title": ""
},
{
"docid": "9001def80e94598f1165a867f3f6a09b",
"text": "Microbial polyhydroxyalkanoates (PHA) have been developed as biodegradable plastics for the past many years. However, PHA still have only a very limited market. Because of the availability of large amount of shale gas, petroleum will not raise dramatically in price, this situation makes PHA less competitive compared with low cost petroleum based plastics. Therefore, two strategies have been adopted to meet this challenge: first, the development of a super PHA production strain combined with advanced fermentation processes to produce PHA at a low cost; second, the construction of functional PHA production strains with technology to control the precise structures of PHA molecules, this will allow the resulting PHA with high value added applications. The recent systems and synthetic biology approaches allow the above two strategies to be implemented. In the not so distant future, the new technology will allow PHA to be produced with a competitive price compared with petroleum-based plastics.",
"title": ""
}
] |
scidocsrr
|
f081ebb14f9defe925639e5c24a7554a
|
College Student Technology Use and Academic Performance
|
[
{
"docid": "3cfa45816c57cbbe1d86f7cce7f52967",
"text": "Video games have become one of the favorite activities of American children. A growing body of research is linking violent video game play to aggressive cognitions, attitudes, and behaviors. The first goal of this study was to document the video games habits of adolescents and the level of parental monitoring of adolescent video game use. The second goal was to examine associations among violent video game exposure, hostility, arguments with teachers, school grades, and physical fights. In addition, path analyses were conducted to test mediational pathways from video game habits to outcomes. Six hundred and seven 8th- and 9th-grade students from four schools participated. Adolescents who expose themselves to greater amounts of video game violence were more hostile, reported getting into arguments with teachers more frequently, were more likely to be involved in physical fights, and performed more poorly in school. Mediational pathways were found such that hostility mediated the relationship between violent video game exposure and outcomes. Results are interpreted within and support the framework of the General Aggression Model.",
"title": ""
}
] |
[
{
"docid": "88c5bcaa173584042939f9b879aa5b3d",
"text": "We present the old-but–new problem of data quality from a statistical perspective, in part with the goal of attracting more statisticians, especially academics, to become engaged in research on a rich set of exciting challenges. The data quality landscape is described, and its research foundations in computer science, total quality management and statistics are reviewed. Two case studies based on an EDA approach to data quality are used to motivate a set of research challenges for statistics that span theory, methodology and software tools.",
"title": ""
},
{
"docid": "4a6d09af4ced790146aea8625b063c21",
"text": "The success of Monte Carlo tree search (MCTS) in many games, where αβ-based search has failed, naturally raises the question whether Monte Carlo simulations will eventually also outperform traditional game-tree search in game domains where αβ -based search is now successful. The forte of αβ-based search are highly tactical deterministic game domains with a small to moderate branching factor, where efficient yet knowledge-rich evaluation functions can be applied effectively. In this paper, we describe an MCTS-based program for playing the game Lines of Action (LOA), which is a highly tactical slow-progression game exhibiting many of the properties difficult for MCTS. The program uses an improved MCTS variant that allows it to both prove the game-theoretical value of nodes in a search tree and to focus its simulations better using domain knowledge. This results in simulations superior in both handling tactics and ensuring game progression. Using the improved MCTS variant, our program is able to outperform even the world's strongest αβ-based LOA program. This is an important milestone for MCTS because the traditional game-tree search approach has been considered to be the better suited for playing LOA.",
"title": ""
},
{
"docid": "6ffb1d72b5a21bd5184d7eaeb7dbfadc",
"text": "Sentence compression holds promise for many applications ranging from summarisation to subtitle generation. The task is typically performed on isolated sentences without taking the surrounding context into account, even though most applications would operate over entire documents. In this paper we present a discourse informed model which is capable of producing document compressions that are coherent and informative. Our model is inspired by theories of local coherence and formulated within the framework of Integer Linear Programming. Experimental results show significant improvements over a state-of-the-art discourse agnostic approach.",
"title": ""
},
{
"docid": "391e5f6168e331a26b0b0133f9648603",
"text": "In this paper the development of a 1W DC/DC converter is presented. It is based on a flyback converter for simple applications. The transformer is designed in a form as a spiral coil on a PCB. The special feature of the converter is that the flyback converter is laid out without a core. The area (diameter) of a spiral coil should be bigger as a coil with a core because the air is not a good energy store. The advantage of a coreless transformer is the PCB-Design. The paper describes the theory of the flyback converter and shows a way of the implementation of the coreless planar transformer. An analysis shows some results by different geometry and frequency of the planar transformer.",
"title": ""
},
{
"docid": "4e7888845f5c139f109caea7b604cb91",
"text": "Elderly or disabled people usually need augmented nursing attention both in home and clinical environments, especially to perform bathing activities. The development of an assistive robotic bath system, which constitutes a central motivation of this letter, would increase the independence and safety of this procedure, ameliorating in this way the everyday life for this group of people. In general terms, the main goal of this letter is to enable natural, physical human–robot interaction, involving human-friendly and user-adaptive online robot motion planning and interaction control. For this purpose, we employ imitation learning using a leader–follower framework called coordinate change dynamic movement primitives (CC-DMP), in order to incorporate the expertise of professional carers for bathing sequences. In this letter, we propose a vision-based washing system, combining CC-DMP framework with a perception-based controller, to adapt the motion of robot's end effector on moving and deformable surfaces, such as a human body part. The controller guarantees globally uniformly asymptotic convergence to the leader movement primitive while ensuring avoidance of restricted areas, such as sensitive skin body areas. We experimentally tested our approach on a setup including the humanoid robot ARMAR-III and a Kinect v2 camera. The robot executes motions learned from the publicly available KIT whole-body human motion database, achieving good tracking performance in challenging interactive task scenarios.",
"title": ""
},
{
"docid": "9cf48e5fa2cee6350ac31f236696f717",
"text": "Komatiites are rare ultramafic lavas that were produced most commonly during the Archean and Early Proterozoic and less frequently in the Phanerozoic. These magmas provide a record of the thermal and chemical characteristics of the upper mantle through time. The most widely cited interpretation is that komatiites were produced in a plume environment and record high mantle temperatures and deep melting pressures. The decline in their abundance from the Archean to the Phanerozoic has been interpreted as primary evidence for secular cooling (up to 500‡C) of the mantle. In the last decade new evidence from petrology, geochemistry and field investigations has reopened the question of the conditions of mantle melting preserved by komatiites. An alternative proposal has been rekindled: that komatiites are produced by hydrous melting at shallow mantle depths in a subduction environment. This alternative interpretation predicts that the Archean mantle was only slightly (V100‡C) hotter than at present and implicates subduction as a process that operated in the Archean. Many thermal evolution and chemical differentiation models of the young Earth use the plume origin of komatiites as a central theme in their model. Therefore, this controversy over the mechanism of komatiite generation has the potential to modify widely accepted views of the Archean Earth and its subsequent evolution. This paper briefly reviews some of the pros and cons of the plume and subduction zone models and recounts other hypotheses that have been proposed for komatiites. We suggest critical tests that will improve our understanding of komatiites and allow us to better integrate the story recorded in komatiites into our view of early Earth evolution. 6 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dfc383a057aa4124dfc4237e607c321a",
"text": "Obfuscation is applied to large quantities of benign and malicious JavaScript throughout the web. In situations where JavaScript source code is being submitted for widespread use, such as in a gallery of browser extensions (e.g., Firefox), it is valuable to require that the code submitted is not obfuscated and to check for that property. In this paper, we describe NOFUS, a static, automatic classifier that distinguishes obfuscated and non-obfuscated JavaScript with high precision. Using a collection of examples of both obfuscated and non-obfuscated JavaScript, we train NOFUS to distinguish between the two and show that the classifier has both a low false positive rate (about 1%) and low false negative rate (about 5%). Applying NOFUS to collections of deployed JavaScript, we show it correctly identifies obfuscated JavaScript files from Alexa top 50 websites. While prior work conflates obfuscation with maliciousness (assuming that detecting obfuscation implies maliciousness), we show that the correlation is weak. Yes, much malware is hidden using obfuscation, but so is benign JavaScript. Further, applying NOFUS to known JavaScript malware, we show our classifier finds 15% of the files are unobfuscated, showing that not all malware is obfuscated.",
"title": ""
},
{
"docid": "1e4a502bfd4ae5ceffd922e48f8e364a",
"text": "A soft wearable robot, which is an emerging type of wearable robot, can take advantage of tendon-driven mechanisms with a Bowden cable. These tendon-driven mechanisms benefits soft wearable robots because the actuator can be remotely placed and the transmission is very compact. However, it is difficult to compensate the friction along the Bowden cable which makes it hard to control. This study proposes the use of a position-based impedance controller, which is robust to the nonlinear dynamics of the system and provides compliant interaction between robot, human, and environment. Additionally, to eliminate disturbances from unexpected tension of the antagonistic wire arising from friction, this study proposes a new type of slack enabling tendon actuator. It can eliminate friction force along the antagonistic wire by actively pushing the wire while preventing derailment of the wire from the spool.",
"title": ""
},
{
"docid": "f1f281bce1a71c3bce99077e76197560",
"text": "Probabilistic timed automata (PTA) combine discrete probabilistic choice, real time and nondeterminism. This paper presents a fully automatic tool for model checking PTA with respect to probabilistic and expected reachability properties. PTA are specified in Modest, a high-level compositional modelling language that includes features such as exception handling, dynamic parallelism and recursion, and thus enables model specification in a convenient fashion. For model checking, we use an integral semantics of time, representing clocks with bounded integer variables. This makes it possible to use the probabilistic model checker PRISM as analysis backend. We describe details of the approach and its implementation, and report results obtained for three different case studies.",
"title": ""
},
{
"docid": "30a17bdce5eb936aad1ddf56c285e808",
"text": "Currently, 4G mobile communication systems are supported by the 3GPP standard. In view of the significant increase in mobile data traffic, it is necessary to characterize it to improve the performance of current wireless networks. Indeed, video transmission and video streaming are fundamental assets for the upcoming smart cities and urban environments. Due to the high costs of deploying a real LTE system, emulation systems that consider real operating conditions emerge as a successful alternative. On the other hand, many studies with LTE simulations and emulations do not present information of basic adjustment parameters like the propagation model, nor of validation of the results with real conditions. This paper shows the validation with an ANOVA statistical analysis of an LTE emulation system developed in NS-3 for the live video streaming service. For the validation, different QoS parameters and real conditions have been used. Also, two protocols, namely RTMP and RTSP, have been tested. It is demonstrated that the emulation scenario is appropriate to characterize the traffic that will later allow to carry out a proper performance analysis of the service and technology under study.",
"title": ""
},
{
"docid": "5350ffea7a4187f0df11fd71562aba43",
"text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.",
"title": ""
},
{
"docid": "73aa31411894fbeeef44ccb1a5e9950e",
"text": "This paper synthesises the research literature on teachers‟ use of Information and Communication Technology (ICT) in primary and secondary schools in sub-Saharan Africa, with a particular emphasis on improving the quality of subject teaching and learning. We focus on the internal factors of influence on teachers‟ use, or lack of use, of technology in the classroom. Our discussion attends to perceptions and beliefs about ICT and their motivating effects, technological literacy and confidence levels, pedagogical expertise related to technology use, and the role of teacher education. These factors are discussed in light of significant infrastructure and other external issues. We conclude by drawing out a number of pedagogical implications for initial teacher education and professional development to bring schooling within developing contexts into the 21 st century.",
"title": ""
},
{
"docid": "a3aa869de6c0e008e1d354197d0760cd",
"text": "BACKGROUND\nWhile the cognitive theory of obsessive-compulsive disorder (OCD) is one of the most widely accepted accounts of the maintenance of the disorder in adults, no study to date has systematically evaluated the theory across children, adolescence and adults with OCD.\n\n\nMETHOD\nThis paper investigated developmental differences in the cognitive processing of threat in a sample of children, adolescents and adults with OCD. Using an idiographic assessment approach, as well as self-report questionnaires, this study evaluated cognitive appraisals of responsibility, probability, severity, thought-action fusion (TAF), thought-suppression, self-doubt and cognitive control. It was hypothesised that there would be age related differences in reported responsibility for harm, probability of harm, severity of harm, thought suppression, TAF, self-doubt and cognitive control.\n\n\nRESULTS\nResults of this study demonstrated that children with OCD reported experiencing fewer intrusive thoughts, which were less distressing and less uncontrollable than those experienced by adolescents and adults with OCD. Furthermore, responsibility attitudes, probability biases and thought suppression strategies were higher in adolescents and adults with OCD. Cognitive processes of TAF, perceived severity of harm, self-doubt and cognitive control were found to be comparable across age groups.\n\n\nCONCLUSIONS\nThese results suggest that the current cognitive theory of OCD needs to address developmental differences in the cognitive processing of threat. Furthermore, for a developmentally sensitive theory of OCD, further investigation is warranted into other possible age related maintenance factors. Implications of this investigation and directions for future research are discussed.",
"title": ""
},
{
"docid": "9ec2c66e67dd969e902b8db93f68dc61",
"text": "The target in a tracking sequence can be considered as a set of spatiotemporal data with various locations in different frames, and the problem how to extract spatiotemporal information of the target effectively has drawn increasing interest recently. In this paper, we exploit spatiotemporal information by different-scale-context aggregation through the proposed pyramid multi-directional recurrent network (PRNet) together with the FlowNet. The PRNet is proposed to memorize the multi-scale spatiotemporal information of self-structure of the target. The FlowNet is employed to capture motion information for discriminating targets from the background. And the two networks form the FPRNet, being trained jointly to learn more useful spatiotemporal representations for visual tracking. The proposed tracker is evaluated on OTB50, OTB100 and TC128 benchmarks, and the experimental results show that the proposed FPRNet can effectively address different challenging cases and achieve better performance than the state-of-theart trackers.",
"title": ""
},
{
"docid": "b9ed2e2f3367dcd2d1dee3d313016609",
"text": "OBJECTIVES\nTo describe the clinical characteristics and the role of chemotherapy in endodermal sinus tumor of the vagina.\n\n\nMETHOD\nTwo patients with endodermal sinus tumor of the vagina were presented focusing on the clinical manifestations and outcome of the chemotherapy.\n\n\nRESULTS\nPatient's age was quite young, 2 and 3 years old respectively. Vaginal bleeding and a polypoid and fragile tumor of the vagina were main clinical features. Elevated serum alpha-FP was found before chemotherapy and dropped dramatically to normal if the tumor was sensitive to chemotherapy. Diagnosis was made by pathology and alpha-FP immunohistochemical staining. Both two patient was well responded to cisplatin vincristine bleomycin (PVB) and cisplatin etoposide bleomycin (PEB) chemotherapy. Clinical and pathological complete remission was obtained after 2-3 courses of chemotherapy without radical surgery and radiotherapy.\n\n\nCONCLUSIONS\nEndodermal sinus tumor of vagina in infant was very sensitive to the chemotherapy. Serum alpha-FP was very useful in diagnosis and monitoring of the disease.",
"title": ""
},
{
"docid": "52bce24f8ec738f9b9dfd472acd6b101",
"text": "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6% and 40.7% respectively, which are the best reported results to date.",
"title": ""
},
{
"docid": "c5cc7fc9651ff11d27e08e1910a3bd20",
"text": "An omnidirectional circularly polarized (OCP) antenna operating at 28 GHz is reported and has been found to be a promising candidate for device-to-device (D2D) communications in the next generation (5G) wireless systems. The OCP radiation is realized by systematically integrating electric and magnetic dipole elements into a compact disc-shaped configuration (9.23 mm $^{3} =0.008~\\lambda _{0}^{3}$ at 28 GHz) in such a manner that they are oriented in parallel and radiate with the proper phase difference. The entire antenna structure was printed on a single piece of dielectric substrate using standard PCB manufacturing technologies and, hence, is amenable to mass production. A prototype OCP antenna was fabricated on Rogers 5880 substrate and was tested. The measured results are in good agreement with their simulated values and confirm the reported design concepts. Good OCP radiation patterns were produced with a measured peak realized RHCP gain of 2.2 dBic. The measured OCP overlapped impedance and axial ratio bandwidth was 2.2 GHz, from 26.5 to 28.7 GHz, an 8 % fractional bandwidth, which completely covers the 27.5 to 28.35 GHz band proposed for 5G cellular systems.",
"title": ""
},
{
"docid": "0b86a006b1f8e3a5e940daef25fe7d58",
"text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.",
"title": ""
},
{
"docid": "9b3db8c2632ad79dc8e20435a81ef2a1",
"text": "Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.",
"title": ""
},
{
"docid": "9b575699e010919b334ac3c6bc429264",
"text": "Over the last decade, keyword search over relational data has attracted considerable attention. A possible approach to face this issue is to transform keyword queries into one or more SQL queries to be executed by the relational DBMS. Finding these queries is a challenging task since the information they represent may be modeled across different elements where the data of interest is stored, but also to find out how these elements are interconnected. All the approaches that have been proposed so far provide a monolithic solution. In this work, we, instead, divide the problem into three steps: the first one, driven by the user's point of view, takes into account what the user has in mind when formulating keyword queries, the second one, driven by the database perspective, considers how the data is represented in the database schema. Finally, the third step combines these two processes. We present the theory behind our approach, and its implementation into a system called QUEST (QUEry generator for STructured sources), which has been deeply tested to show the efficiency and effectiveness of our approach. Furthermore, we report on the outcomes of a number of experimental results that we",
"title": ""
}
] |
scidocsrr
|
5533a5677bee74d1ce2551ae72f46b24
|
Efficient Hierarchical Embedding for Learning Coherent Visual Styles
|
[
{
"docid": "a8c1224f291df5aeb655a2883b16bcfb",
"text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"title": ""
},
{
"docid": "87199b3e7def1db3159dc6b5989638aa",
"text": "We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"title": ""
},
{
"docid": "fa82b75a3244ef2407c2d14c8a3a5918",
"text": "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.",
"title": ""
}
] |
[
{
"docid": "d9980c59c79374c5b1ee107d6a5c978f",
"text": "A software module named flash translation layer (FTL) running in the controller of a flash SSD exposes the linear flash memory to the system as a block storage device. The effectiveness of an FTL significantly impacts the performance and durability of a flash SSD. In this research, we propose a new FTL called PCFTL (Plane-Centric FTL), which fully exploits plane-level parallelism supported by modern flash SSDs. Its basic idea is to allocate updates onto the same plane where their associated original data resides on so that the write distribution among planes is balanced. Furthermore, it utilizes fast intra-plane copy-back operations to transfer valid pages of a victim block when a garbage collection occurs. We largely extend a validated simulation environment called SSDsim to implement PCFTL. Comprehensive experiments using realistic enterprise-scale workloads are performed to evaluate its performance with respect to mean response time and durability in terms of standard deviation of writes per plane. Experimental results demonstrate that compared with the well-known DFTL, PCFTL improves performance and durability by up to 47 and 80 percent, respectively. Compared with its earlier version (called DLOOP), PCFTL enhances durability by up to 74 percent while delivering a similar I/O performance.",
"title": ""
},
{
"docid": "b156acf3a04c8edd6e58c859009374d6",
"text": "Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph substructures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.",
"title": ""
},
{
"docid": "7d1348ad0dbd8f33373e556009d4f83a",
"text": "Laryngeal neoplasms represent 2% of all human cancers. They befall mainly the male sex, especially between 50 and 70 years of age, but exceptionally may occur in infancy or extreme old age. Their occurrence has increased considerably inclusively due to progressive population again. The present work aims at establishing a relation between this infirmity and its prognosis in patients submitted to the treatment recommended by Departament of Otolaryngology and Head Neck Surgery of the School of Medicine of São José do Rio Preto. To this effect, by means of karyometric optical microscopy, cell nuclei in the glottic region of 20 individuals, divided into groups according to their tumor stage and time of survival, were evaluated. Following comparation with a control group and statistical analsis, it became possible to verify that the lesser diameter of nuclei is of prognostic value for initial tumors in this region.",
"title": ""
},
{
"docid": "98ead4f3cee84b4db8be568ec125c786",
"text": "This paper assesses the potential impact of FinTech on the finance industry, focusing on financial stability and access to services. I document first that financial services remain surprisingly expensive, which explains the emergence of new entrants. I then argue that the current regulatory approach is subject to significant political economy and coordination costs, and therefore unlikely to deliver much structural change. FinTech, on the other hand, can bring deep changes but is likely to create significant regulatory challenges.",
"title": ""
},
{
"docid": "546d682835f486d9297a2d7550abebfe",
"text": "Patient generated data or personal clinical data in general is considered an important aspect in improving patient outcomes. However, personal clinical data is difficult to collect and manage due to their distributed nature, i.e., located over multiple places such as doctor's office, radiology center, hospitals, or some clinics, and heterogeneous data types such as text, image, chart, or paper based documents. In case of emergency, this situation makes necessary personal clinical data retrieval almost impossible. In addition, since the amount and types of personal clinical data continue to grow, finding relevant clinical data when needed is getting more difficult if no actions are taken. In response to such scenarios, we propose an approach that manages personal health data by utilizing meta-data for organization and easy retrieval of clinical data and cloud storage for easy access and sharing with caregivers to implement the continuity of care and evidence-based treatment. In case of emergency, we make critical medical information such as current medication and allergies available to relevant caregivers with valid license numbers only.",
"title": ""
},
{
"docid": "7fd396ca8870c3a2fe99e63f24aaf9f7",
"text": "This paper presents a one-point calibration gaze tracking method based on eyeball kinematics using stereo cameras. By using two cameras and two light sources, the optic axis of the eye can be estimated. One-point calibration is required to estimate the angle of the visual axis from the optic axis. The eyeball rotates with optic and visual axes based on the eyeball kinematics (Listing's law). Therefore, we introduced eyeball kinematics to the one-point calibration process in order to properly estimate the visual axis. The prototype system was developed and it was found that the accuracy was under 1° around the center and bottom of the display.",
"title": ""
},
{
"docid": "a49058990cd1a68a4d7ac79dbf43e475",
"text": "In this paper we introduce a concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in the manner of what elements are considered neighbors. In case of sn-grams, the neighbors are taken by following syntactic relations in syntactic trees, and not by taking the words as they appear in the text. Dependency trees fit directly into this idea, while in case of constituency trees some simple additional steps should be made. Sn-grams can be applied in any NLP task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. SVM classifier for several profile sizes was used. We used as baseline traditional n-grams of words, POS tags and characters. Obtained results are better when applying sn-grams.",
"title": ""
},
{
"docid": "413fc3fba281b4a9f56db7b2c9708acd",
"text": "This paper addresses the important problem of discerning hateful content in social media. We propose a detection scheme that is an ensemble of Recurrent Neural Network (RNN) classifiers, and it incorporates various features associated with userrelated information, such as the users’ tendency towards racism or sexism. These data are fed as input to the above classifiers along with the word frequency vectors derived from the textual content. Our approach has been evaluated on a publicly available corpus of 16k tweets, and the results demonstrate its effectiveness in comparison to existing state of the art solutions. More specifically, our scheme can successfully distinguish racism and sexism messages from normal text, and achieve higher classification quality than current state-of-the-art algorithms.",
"title": ""
},
{
"docid": "1692c2af87c66826e4f3c7d5da108579",
"text": "Inspired by biological swarms, robotic swarms are envisioned to solve real-world problems that are difficult for individual agents. Biological swarms can achieve collective intelligence based on local interactions and simple rules; however, designing effective distributed policies for large-scale robotic swarms to achieve a global objective can be challenging. Although it is often possible to design an optimal centralized strategy for smaller numbers of agents, those methods can fail as the number of agents increases. Motivated by the growing success of machine learning, we develop a deep learning approach that learns distributed coordination policies from centralized policies. In contrast to traditional distributed control approaches, which are usually based on human-designed policies for relatively simple tasks, this learning-based approach can be adapted to more difficult tasks. We demonstrate the efficacy of our proposed approach on two different tasks, the well-known rendezvous problem and a more difficult particle assignment problem. For the latter, no known distributed policy exists. From extensive simulations, it is shown that the performance of the learned coordination policies is comparable to the centralized policies, surpassing state-of-the-art distributed policies. Thereby, our proposed approach provides a promising alternative for real-world coordination problems that would be otherwise computationally expensive to solve or intangible to explore.",
"title": ""
},
{
"docid": "3b0b6075cf6cdb13d592b54b85cdf4af",
"text": "We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-totext rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task.",
"title": ""
},
{
"docid": "f0933abbb9df13a12522d87171dae151",
"text": "Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a \"reference\" or \"perfect\" image in some perceptual space. Such \"full-reference\" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for \"human consumption\". Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].",
"title": ""
},
{
"docid": "785a6d08ef585302d692864d09b026fe",
"text": "Linear Discriminant Analysis (LDA) is a well-known method for dimensionality reduction and classification. LDA in the binaryclass case has been shown to be equivalent to linear regression with the class label as the output. This implies that LDA for binary-class classifications can be formulated as a least squares problem. Previous studies have shown certain relationship between multivariate linear regression and LDA for the multi-class case. Many of these studies show that multivariate linear regression with a specific class indicator matrix as the output can be applied as a preprocessing step for LDA. However, directly casting LDA as a least squares problem is challenging for the multi-class case. In this paper, a novel formulation for multivariate linear regression is proposed. The equivalence relationship between the proposed least squares formulation and LDA for multi-class classifications is rigorously established under a mild condition, which is shown empirically to hold in many applications involving high-dimensional data. Several LDA extensions based on the equivalence relationship are discussed.",
"title": ""
},
{
"docid": "902e6d047605a426ae9bebc3f9ddf139",
"text": "Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "13b2677b61042769669e403db652fc9e",
"text": "The design of social media influences the way people interact with each other online -- and shapes our society. This course will look at three key areas: how social identity is portrayed online; our changing social networks and the technologies that support them; and the role of pseudonymity in supporting privacy.",
"title": ""
},
{
"docid": "b94c18b8d3915709d03b94cffe979363",
"text": "We apply the weight of evidence reformulation of AdaBoosted naive Bayes scoring due to Ridgeway et al. (1998) to the problem of diagnosing insurance claim fraud. The method effectively combines the advantages of boosting and the explanatory power of the weight of evidence scoring framework. We present the results of an experimental evaluation with an emphasis on discriminatory power, ranking ability, and calibration of probability estimates. The data to which we apply the method consists of closed personal injury protection (PIP) automobile insurance claims from accidents that occurred in Massachusetts (USA) during 1993 and were previously investigated for suspicion of fraud by domain experts. The data mimic the most commonly occurring data configuration, that is, claim records consisting of information pertaining to several binary fraud indicators. The findings of the study reveal the method to be a valuable contribution to the design of intelligible, accountable, and efficient fraud detection support.",
"title": ""
},
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "62d39d41523bca97939fa6a2cf736b55",
"text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.",
"title": ""
},
{
"docid": "eb4b29fe1b388349c6b020a381fdce63",
"text": "Hybrid AC/DC microgrids have been planned for the better interconnection of different distributed generation systems (DG) to the power grid, and exploiting the prominent features of both ac and dc microgrids. Connecting these microgrids requires an interlinking AC/DC converter (IC) with a proper power management and control strategy. During the islanding operation of the hybrid AC/DC microgrid, the IC is intended to take the role of supplier to one microgrid and at the same time acts as a load to the other microgrid and the power management system should be able to share the power demand between the existing AC and dc sources in both microgrids. This paper considers the power flow control and management issues amongst multiple sources dispersed throughout both ac and dc microgrids. The paper proposes a decentralized power sharing method in order to eliminate the need for any communication between DGs or microgrids. This hybrid microgrid architecture allows different ac or dc loads and sources to be flexibly located in order to decrease the required power conversions stages and hence the system cost and efficiency. The performance of the proposed power control strategy is validated for different operating conditions, using simulation studies in the PSCAD/EMTDC software environment.",
"title": ""
},
{
"docid": "fc164dc2d55cec2867a99436d37962a1",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
},
{
"docid": "4aebb6566c8b27c7528cc108bacc2a60",
"text": "OBJECT\nSuperior cluneal nerve (SCN) entrapment neuropathy is a poorly understood clinical entity that can produce low-back pain. The authors report a less-invasive surgical treatment for SCN entrapment neuropathy that can be performed with local anesthesia.\n\n\nMETHODS\nFrom November 2010 through November 2011, the authors performed surgery in 34 patients (age range 18-83 years; mean 64 years) with SCN entrapment neuropathy. The entrapment was unilateral in 13 patients and bilateral in 21. The mean postoperative follow-up period was 10 months (range 6-18 months). After the site was blocked with local anesthesia, the thoracolumbar fascia of the orifice was dissected with microscissors in a distal-to-rostral direction along the SCN to release the entrapped nerve.\n\n\nRESULTS\nwere evaluated according to Japanese Orthopaedic Association (JOA) and Roland-Morris Disability Questionnaire (RMDQ) scores. Results In all 34 patients, the SCN penetrated the orifice of the thoracolumbar fascia and could be released by dissection of the fascia. There were no intraoperative surgery-related complications. For all patients, surgery was effective; JOA and RMDQ scores indicated significant improvement (p < 0.05).\n\n\nCONCLUSIONS\nFor patients with low-back pain, SCN entrapment neuropathy must be considered as a causative factor. Treatment by less-invasive surgery, with local anesthesia, yielded excellent clinical outcomes.",
"title": ""
}
] |
scidocsrr
|
3b96830e04e374b3bdefdc8f28ffc178
|
Enhanced Characterness for Text Detection in the Wild
|
[
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
}
] |
[
{
"docid": "58037d23c73d41cccdf7376d25207cfc",
"text": "The word preprocessing is a crucial stage of OCR systems. Here we present two algorithms appropriate for this stage. The first one corrects the skewing of words and the second removes the slant from handwritten words. Both algorithms make use of the Wigner-Vill e Distribution and the projection profile technique. The algorithms have been tested on words taken from more than 200 writers and the results obtained can be considered very satisfactory, since the overall accuracy of our OCR system is notably improved.",
"title": ""
},
{
"docid": "351beace260a731aaf8dcf6e6870ad99",
"text": "The field of Explainable Artificial Intelligence has taken steps towards increasing transparency in the decision-making process of machine learning models for classification tasks. Understanding the reasons behind the predictions of models increases our trust in them and lowers the risks of using them. In an effort to extend this to other tasks apart from classification, this thesis explores the interpretability aspect for sequence tagging models for the task of Named Entity Recognition (NER). This work proposes two approaches for adapting LIME, an interpretation method for classification, to sequence tagging and NER. The first approach is a direct adaptation of LIME to the task, while the second includes adaptations following the idea that entities are conceived as a group of words and we would like one explanation for the whole entity. Given the challenges in the evaluation of the interpretation method, this work proposes an extensive evaluation from different angles. It includes a quantitative analysis using the AOPC metric; a qualitative analysis that studies the explanations at instance and dataset levels as well as the semantic structure of the embeddings and the explanations; and a human evaluation to validate the model's behaviour. The evaluation has discovered patterns and characteristics to take into account when explaining NER models.",
"title": ""
},
{
"docid": "d29ca3ca682433a9ea6172622d12316c",
"text": "The phenomenon of a phantom limb is a common experience after a limb has been amputated or its sensory roots have been destroyed. A complete break of the spinal cord also often leads to a phantom body below the level of the break. Furthermore, a phantom of the breast, the penis, or of other innervated body parts is reported after surgical removal of the structure. A substantial number of children who are born without a limb feel a phantom of the missing part, suggesting that the neural network, or 'neuromatrix', that subserves body sensation has a genetically determined substrate that is modified by sensory experience.",
"title": ""
},
{
"docid": "b9d78f22647d00aab0a79aa0c5dacdcf",
"text": "Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input z to a sample x that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input y′ to a sample x. Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperforms the state-of-the-arts.",
"title": ""
},
{
"docid": "4acfb49be406de472af9080d3cdc6fa4",
"text": "Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpectedly adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.",
"title": ""
},
{
"docid": "7af4db4bea89236b9d21a55bf9c32f4f",
"text": "Random-effects regression models have become increasingly popular for analysis of longitudinal data. A key advantage of the random-effects approach is that it can be applied when subjects are not measured at the same number of timepoints. In this article we describe use of random-effects pattern-mixture models to further handle and describe the influence of missing data in longitudinal studies. For this approach, subjects are first divided into groups depending on their missing-data pattern and then variables based on these groups are used as model covariates. In this way, researchers are able to examine the effect of missing-data patterns on the outcome (or outcomes) of interest. Furthermore, overall estimates can be obtained by averaging over the missing-data patterns. A psychiatric clinical trials data set is used to illustrate the random-effects pattern-mixture approach to longitudinal data analysis with missing data.",
"title": ""
},
{
"docid": "17dce24f26d7cc196e56a889255f92a8",
"text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.",
"title": ""
},
{
"docid": "b2d8c0397151ca043ffb5cef8046d2af",
"text": "This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in the ICE 2006 intentionally represent a broader range of quality than the ICE 2006 sensor would normally acquire. This includes images that did not pass the quality control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and the single-iris images. On the FRVT 2006 and the ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and the iris images. In an experiment comparing human and algorithms on matching face identity across changes in illumination on frontal face images, the best performing algorithms were more accurate than humans on unfamiliar faces.",
"title": ""
},
{
"docid": "9a0afab5034bdae8235e834d5f0f5c79",
"text": "Millimeter-waves offer promising opportunities and interesting challenges to silicon integrated circuit and system designers. These challenges go beyond standard circuit design questions and span a broader range of topics including wave propagation, antenna design, and communication channel capacity limits. It is only meaningful to evaluate the benefits and shortcoming of silicon-based mm-wave integrated circuits in this broader context. This paper reviews some of these issues and presents several solutions to them.",
"title": ""
},
{
"docid": "4147fee030667122923f420ab55e38f7",
"text": "In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.",
"title": ""
},
{
"docid": "9db779a5a77ac483bb1991060dca7c28",
"text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.",
"title": ""
},
{
"docid": "edd78912d764ab33e0e1a8124bc7d709",
"text": "Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance. Conventional approaches aggregate separate models of natural language understanding (NLU) and system action prediction (SAP) as a pipeline that is sensitive to noisy outputs of error-prone NLU. To address the issues, we propose an end-to-end deep recurrent neural network with limited contextual dialogue memory by jointly training NLU and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our proposed model significantly outperforms the state-of-the-art pipeline models for both NLU and SAP, which indicates that our joint model is capable of mitigating the affects of noisy NLU outputs, and NLU model can be refined by error flows backpropagating from the extra supervised signals of system actions.",
"title": ""
},
{
"docid": "bcfc8566cf73ec7c002dcca671e3a0bd",
"text": "of the thoracic spine revealed a 1.1 cm intradural extramedullary mass at the level of the T2 vertebral body (Figure 1a). Spinal neurosurgery was planned due to exacerbation of her chronic back pain and progressive weakness of the lower limbs at 28 weeks ’ gestation. Emergent spinal decompression surgery was performed with gross total excision of the tumour. Doppler fl ow of the umbilical artery was used preoperatively and postoperatively to monitor fetal wellbeing. Th e histological examination revealed HPC, World Health Organization (WHO) grade 2 (Figure 1b). Complete recovery was seen within 1 week of surgery. Follow-up MRI demonstrated complete removal of the tumour. We recommended adjuvant external radiotherapy to the patient in the 3rd trimester of pregnancy due to HPC ’ s high risk of recurrence. However, the patient declined radiotherapy. Routine weekly obstetric assessments were performed following surgery. At the 37th gestational week, a 2,850 g, Apgar score 7 – 8, healthy infant was delivered by caesarean section, without need of admission to the neonatal intensive care unit. Adjuvant radiotherapy was administered to the patient in the postpartum period.",
"title": ""
},
{
"docid": "1b79cbc8735ce74fccceca04ca78dc37",
"text": "We derive a mean-field algorithm for binary classification with gaussian processes that is based on the TAP approach originally proposed in statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed with no extra computational cost. We show that from the TAP approach, it is possible to derive both a simpler naive mean-field theory and support vector machines (SVMs) as limiting cases. For both mean-field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show that one may get state-of-the-art performance by using the leave-one-out estimator for model selection and the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The second result is taken as strong support for the internal consistency of the mean-field approach.",
"title": ""
},
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
},
{
"docid": "c7c462e6c0575bef245d1d52ce456cfd",
"text": "It is often difficult to visualize large networks effectively. In BioReact, we filter large systems biology network data by querying to select partial network as the input for visualization. Each query is parameterized by a node name, the direction of graph search, and the scope of the search. We present two layouts of the same network to clearly show network topology: a force-directed layout expands neighbouring nodes to maiximize spatial separation between nodes and links, and a downward edge layout to preserve a sense of unidirectional flow. Navigation of the network such as locating a particular node/link and linked highlighting between multiple views optimize user experience.",
"title": ""
},
{
"docid": "14bcbfcb6e7165e67247453944f37ac0",
"text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.",
"title": ""
},
{
"docid": "d11a113fdb0a30e2b62466c641e49d6d",
"text": "Apache Spark has emerged as the de facto framework for big data analytics with its advanced in-memory programming model and upper-level libraries for scalable machine learning, graph analysis, streaming and structured data processing. It is a general-purpose cluster computing framework with language-integrated APIs in Scala, Java, Python and R. As a rapidly evolving open source project, with an increasing number of contributors from both academia and industry, it is difficult for researchers to comprehend the full body of development and research behind Apache Spark, especially those who are beginners in this area. In this paper, we present a technical review on big data analytics using Apache Spark. This review focuses on the key components, abstractions and features of Apache Spark. More specifically, it shows what Apache Spark has for designing and implementing big data algorithms and pipelines for machine learning, graph analysis and stream processing. In addition, we highlight some research and development directions on Apache Spark for big data analytics.",
"title": ""
},
{
"docid": "3f4fcbc355d7f221eb6c9bc4a26b0448",
"text": "BACKGROUND\nMost of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined.\n\n\nMETHODS\nTwo independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: \"fMRI AND happy faces,\" \"fMRI AND sad faces,\" \"fMRI AND fearful faces,\" \"fMRI AND angry faces,\" \"fMRI AND disgusted faces\" and \"fMRI AND neutral faces.\" We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses.\n\n\nRESULTS\nOf the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions.\n\n\nLIMITATIONS\nAlthough the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes.\n\n\nCONCLUSION\nOur study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions.",
"title": ""
},
{
"docid": "9fdf625f46c227c819cec1e4c00160b1",
"text": "Employment of ground-based positioning systems has been consistently growing over the past decades due to the growing number of applications that require location information where the conventional satellite-based systems have limitations. Such systems have been successfully adopted in the context of wireless emergency services, tactical military operations, and various other applications offering location-based services. In current and previous generation of cellular systems, i.e., 3G, 4G, and LTE, the base stations, which have known locations, have been assumed to be stationary and fixed. However, with the possibility of having mobile relays in 5G networks, there is a demand for novel algorithms that address the challenges that did not exist in the previous generations of localization systems. This paper includes a review of various fundamental techniques, current trends, and state-of-the-art systems and algorithms employed in wireless position estimation using moving receivers. Subsequently, performance criteria comparisons are given for the aforementioned techniques and systems. Moreover, a discussion addressing potential research directions when dealing with moving receivers, e.g., receiver's movement pattern for efficient and accurate localization, non-line-of-sight problem, sensor fusion, and cooperative localization, is briefly given.",
"title": ""
}
] |
scidocsrr
|
c1ccf0ab2cc8b0ab4b6b4e749da4841e
|
Learning and Evaluating Musical Features with Deep Autoencoders
|
[
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
}
] |
[
{
"docid": "1d14030535d03f5ce7a593920e4af352",
"text": "We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowdsourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizens’ science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility.",
"title": ""
},
{
"docid": "be66c05a023ea123a6f32614d2a8af93",
"text": "During the past three decades, the issue of processing spectral phase has been largely neglected in speech applications. There is no doubt that the interest of speech processing community towards the use of phase information in a big spectrum of speech technologies, from automatic speech and speaker recognition to speech synthesis, from speech enhancement and source separation to speech coding, is constantly increasing. In this paper, we elaborate on why phase was believed to be unimportant in each application. We provide an overview of advancements in phase-aware signal processing with applications to speech, showing that considering phase-aware speech processing can be beneficial in many cases, while it can complement the possible solutions that magnitude-only methods suggest. Our goal is to show that phase-aware signal processing is an important emerging field with high potential in the current speech communication applications. The paper provides an extended and up-to-date bibliography on the topic of phase aware speech processing aiming at providing the necessary background to the interested readers for following the recent advancements in the area. Our review expands the step initiated by our organized special session and exemplifies the usefulness of spectral phase information in a wide range of speech processing applications. Finally, the overview will provide some future work directions.",
"title": ""
},
{
"docid": "db53ffe2196586d570ad636decbf67de",
"text": "We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.",
"title": ""
},
{
"docid": "e882a33ff28c37b379c22d73e16147b3",
"text": "Combining ant colony optimization (ACO) and multiobjective evolutionary algorithm based on decomposition (MOEA/D), this paper proposes a multiobjective evolutionary algorithm, MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single objective optimization problems. Each ant (i.e. agent) is responsible for solving one subproblem. All the ants are divided into a few groups and each ant has several neighboring ants. An ant group maintains a pheromone matrix and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group’s pheromone matrix, its own heuristic information matrix and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two set of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms MOEA/D-GA on all the nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to the good performance of MOEA/D-ACO for the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than BicriterionAnt on all the 12 test instances. We also evaluate the effects of grouping, neighborhood and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that reactive search optimization scheme, i.e., the “learning while optimizing” principle, is effective in improving multiobjective optimization algorithms.",
"title": ""
},
{
"docid": "a3b919ee9780c92668c0963f23983f82",
"text": "A terrified woman called police because her ex-boyfriend was breaking into her home. Upon arrival, police heard screams coming from the basement. They stopped halfway down the stairs and found the ex-boyfriend pointing a rifle at the floor. Officers observed a strange look on the subject’s face as he slowly raised the rifle in their direction. Both officers fired their weapons, killing the suspect. The rifle was not loaded.",
"title": ""
},
{
"docid": "4120db07953e7577ba6be77eef6ebca9",
"text": "Previous works indicated that pairwise methods are stateofthe-art approaches to fit users’ taste from implicit feedback. In this paper, we argue that constructing item pairwise samples for a fixed user is insufficient, because taste differences between two users with respect to a same item can not be explicitly distinguished. Moreover, the rank position of positive items are not used as a metric to measure the learning magnitude in the next step. Therefore, we firstly define a confidence function to dynamically control the learning step-size for updating model parameters. Sequently, we introduce a generic way to construct mutual pairwise loss from both users’ and items’ perspective. Instead of useroriented pairwise sampling strategy alone, we incorporate item pairwise samples into a popular pairwise learning framework, bayesian personalized ranking (BPR), and propose mutual bayesian personalized ranking (MBPR) method. In addition, a rank-aware adaptively sampling strategy is proposed to come up with the final approach, called RankMBPR. Empirical studies are carried out on four real-world datasets, and experimental results in several metrics demonstrate the efficiency and effectiveness of our proposed method, comparing with other baseline algorithms.",
"title": ""
},
{
"docid": "5ac0e1b30f3aeeb4e1f7ddae656f7dd5",
"text": "The present paper describes an implementation of fast running motions involving a humanoid robot. Two important technologies are described: a motion generation and a balance control. The motion generation is a unified way to design both walking and running and can generate the trajectory with the vertical conditions of the Center Of Mass (COM) in short calculation time. The balance control enables a robot to maintain balance by changing the positions of the contact foot dynamically when the robot is disturbed. This control consists of 1) compliance control without force sensors, in which the joints are made compliant by feed-forward torques and adjustment of gains of position control, and 2) feedback control, which uses the measured orientation of the robot's torso used in the motion generation as an initial condition to decide the foot positions. Finally, a human-sized humanoid robot that can run forward at 7.0 [km/h] is presented.",
"title": ""
},
{
"docid": "32c17e821ba1311be2b18d0303b2d1a3",
"text": "We consider the problem of improving the efficiency of random ized Fourier feature maps to accelerate training and testing speed of kernel methods on large dat asets. These approximate feature maps arise as Monte Carlo approximations to integral representations of shift-invariant kernel functions (e.g., Gaussian kernel). In this paper, we propose to use Quasi-Monte Carlo(QMC) approximations instead, where the relevant integrands are evaluated on a low-discrepancy sequence of points as opposed to random point sets as in the Monte Carlo approach. We derive a new disc repancy measure called box discrepancy based on theoretical characterizations of the integration error with respect to a given sequence. We then propose to learn QMC sequences adapted to our setting based o n explicit box discrepancy minimization. Our theoretical analyses are complemented with empirical r esults that demonstrate the effectiveness of classical and adaptive QMC techniques for this problem.",
"title": ""
},
{
"docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3",
"text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.",
"title": ""
},
{
"docid": "6acc820f32c74ff30730faca2eff9f8f",
"text": "The conventional Vivaldi antenna is known for its ultrawideband characteristic, but low directivity. In order to improve the directivity, a double-slot structure is proposed to design a new Vivaldi antenna. The two slots are excited in uniform amplitude and phase by using a T-junction power divider. The double-slot structure can generate plane-like waves in the E-plane of the antenna. As a result, directivity of the double-slot Vivaldi antenna is significantly improved by comparison to a conventional Vivaldi antenna of the same size. The measured results show that impedance bandwidth of the double-slot Vivaldi antenna is from 2.5 to 15 GHz. Gain and directivity of the proposed antenna is considerably improved at the frequencies above 6 GHz. Furthermore, the main beam splitting at high frequencies of the conventional Vivaldi antenna on thick dielectric substrates is eliminated by the double-slot structure.",
"title": ""
},
{
"docid": "50c78e339e472f1b1814687f7d0ec8c6",
"text": "Frontonasal dysplasia (FND) refers to a class of midline facial malformations caused by abnormal development of the facial primordia. The term encompasses a spectrum of severities but characteristic features include combinations of ocular hypertelorism, malformations of the nose and forehead and clefting of the facial midline. Several recent studies have drawn attention to the importance of Alx homeobox transcription factors during craniofacial development. Most notably, loss of Alx1 has devastating consequences resulting in severe orofacial clefting and extreme microphthalmia. In contrast, mutations of Alx3 or Alx4 cause milder forms of FND. Whilst Alx1, Alx3 and Alx4 are all known to be expressed in the facial mesenchyme of vertebrate embryos, little is known about the function of these proteins during development. Here, we report the establishment of a zebrafish model of Alx-related FND. Morpholino knock-down of zebrafish alx1 expression causes a profound craniofacial phenotype including loss of the facial cartilages and defective ocular development. We demonstrate for the first time that Alx1 plays a crucial role in regulating the migration of cranial neural crest (CNC) cells into the frontonasal primordia. Abnormal neural crest migration is coincident with aberrant expression of foxd3 and sox10, two genes previously suggested to play key roles during neural crest development, including migration, differentiation and the maintenance of progenitor cells. This novel function is specific to Alx1, and likely explains the marked clinical severity of Alx1 mutation within the spectrum of Alx-related FND.",
"title": ""
},
{
"docid": "f90eebfcf87285efe711968c85f04d1b",
"text": "Fouling is generally defined as the accumulation and formation of unwanted materials on the surfaces of processing equipment, which can seriously deteriorate the capacity of the surface to transfer heat under the temperature difference conditions for which it was designed. Fouling of heat transfer surfaces is one of the most important problems in heat transfer equipment. Fouling is an extremely complex phenomenon. Fundamentally, fouling may be characterized as a combined, unsteady state, momentum, mass and heat transfer problem with chemical, solubility, corrosion and biological processes may also taking place. It has been described as the major unresolved problem in heat transfer1. According to many [1-3], fouling can occur on any fluid-solid surface and have other adverse effects besides reduction of heat transfer. It has been recognized as a nearly universal problem in design and operation, and it affects the operation of equipment in two ways: Firstly, the fouling layer has a low thermal conductivity. This increases the resistance to heat transfer and reduces the effectiveness of heat exchangers. Secondly, as deposition occurs, the cross sectional area is reduced, which causes an increase in pressure drop across the apparatus. In industry, fouling of heat transfer surfaces has always been a recognized phenomenon, although poorly understood. Fouling of heat transfer surfaces occurs in most chemical and process industries, including oil refineries, pulp and paper manufacturing, polymer and fiber production, desalination, food processing, dairy industries, power generation and energy recovery. By many, fouling is considered the single most unknown factor in the design of heat exchangers. This situation exists despite the wealth of operating experience accumulated over the years and accumulation of the fouling literature. This lake of understanding almost reflects the complex nature of the phenomena by which fouling occurs in industrial equipment. The wide range of the process streams and operating conditions present in industry tends to make most fouling situations unique, thus rendering a general analysis of the problem difficult. In general, the ability to transfer heat efficiently remains a central feature of many industrial processes. As a consequence much attention has been paid to improving the understanding of heat transfer mechanisms and the development of suitable correlations and techniques that may be applied to the design of heat exchangers. On the other hand relatively little consideration has been given to the problem of surface fouling in heat exchangers. The",
"title": ""
},
{
"docid": "70d8345da0193a048d3dff702834c075",
"text": "Recurrent neural networks with various types of hidden units have been used to solve a diverse range of problems involving sequence data. Two of the most recent proposals, gated recurrent units (GRU) and minimal gated units (MGU), have shown comparable promising results on example public datasets. In this paper, we introduce three model variants of the minimal gated unit which further simplify that design by reducing the number of parameters in the forget-gate dynamic equation. These three model variants, referred to simply as MGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset and the real sequences from the Reuters Newswire Topics (RNT) dataset. Here, we report on the RNT results. The new models have shown similar accuracy to the MGU model while using fewer parameters and thus lower training expense. One model variant, namely MGU2, performed better than MGU on the datasets considered, and thus may be used as an alternate to MGU or GRU in recurrent neural networks.",
"title": ""
},
{
"docid": "2126c47fe320af2d908ec01a426419ce",
"text": "Stretching has long been used in many physical activities to increase range of motion (ROM) around a joint. Stretching also has other acute effects on the neuromuscular system. For instance, significant reductions in maximal voluntary strength, muscle power or evoked contractile properties have been recorded immediately after a single bout of static stretching, raising interest in other stretching modalities. Thus, the effects of dynamic stretching on subsequent muscular performance have been questioned. This review aimed to investigate performance and physiological alterations following dynamic stretching. There is a substantial amount of evidence pointing out the positive effects on ROM and subsequent performance (force, power, sprint and jump). The larger ROM would be mainly attributable to reduced stiffness of the muscle-tendon unit, while the improved muscular performance to temperature and potentiation-related mechanisms caused by the voluntary contraction associated with dynamic stretching. Therefore, if the goal of a warm-up is to increase joint ROM and to enhance muscle force and/or power, dynamic stretching seems to be a suitable alternative to static stretching. Nevertheless, numerous studies reporting no alteration or even performance impairment have highlighted possible mitigating factors (such as stretch duration, amplitude or velocity). Accordingly, ballistic stretching, a form of dynamic stretching with greater velocities, would be less beneficial than controlled dynamic stretching. Notwithstanding, the literature shows that inconsistent description of stretch procedures has been an important deterrent to reaching a clear consensus. In this review, we highlight the need for future studies reporting homogeneous, clearly described stretching protocols, and propose a clarified stretching terminology and methodology.",
"title": ""
},
{
"docid": "3fcbff9e9dea1300edc5de7a764d7ae9",
"text": "Optimization Involving Expensive Black-Box Objective and Constraint Functions Rommel G. Regis Mathematics Department, Saint Joseph’s University, Philadelphia, PA 19131, USA, rregis@sju.edu August 23, 2010 Abstract. This paper presents a new algorithm for derivative-free optimization of expensive black-box objective functions subject to expensive black-box inequality constraints. The proposed algorithm, called ConstrLMSRBF, uses radial basis function (RBF) surrogate models and is an extension of the Local Metric Stochastic RBF (LMSRBF) algorithm by Regis and Shoemaker (2007a) that can handle black-box inequality constraints. Previous algorithms for the optimization of expensive functions using surrogate models have mostly dealt with bound constrained problems where only the objective function is expensive, and so, the surrogate models are used to approximate the objective function only. In contrast, ConstrLMSRBF builds RBF surrogate models for the objective function and also for all the constraint functions in each iteration, and uses these RBF models to guide the selection of the next point where the objective and constraint functions will be evaluated. Computational results indicate that ConstrLMSRBF is better than alternative methods on 9 out of 14 test problems and on the MOPTA08 problem from the automotive industry (Jones 2008). The MOPTA08 problem has 124 decision variables and 68 inequality constraints and is considered a large-scale problem in the area of expensive black-box optimization. The alternative methods include a Mesh Adaptive Direct Search (MADS) algorithm (Abramson and Audet 2006, Audet and Dennis 2006) that uses a kriging-based surrogate model, the Multistart LMSRBF algorithm by Regis and Shoemaker (2007a) modified to handle black-box constraints via a penalty approach, a genetic algorithm, a pattern search algorithm, a sequential quadratic programming algorithm, and COBYLA (Powell 1994), which is a derivative-free trust-region algorithm. Based on the results of this study, the results in Jones (2008) and other approaches presented at the ISMP 2009 conference, ConstrLMSRBF appears to be among the best, if not the best, known algorithm for the MOPTA08 problem in the sense of providing the most improvement from an initial feasible solution within a very limited number of objective and constraint function evaluations.",
"title": ""
},
{
"docid": "b999fe9bd7147ef9c555131d106ea43e",
"text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "8fdfebc612ff46103281fcdd7c9d28c8",
"text": "We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented. Wir entwickeln einen Algorithmus mit kürzesten alternierenden Wegen für das lineare Zuordnungsproblem. Er enthält neue Routinen für die Anfangswerte und eine spezielle Implementierung der Kürzesten-Wege-Methode von Dijkstra. Sowohl für dichte als auch für dünne Probleme zeigen Testläufe, daß unser Algorithmus gleichmäßig schneller als die besten Algorithmen aus der Literatur ist. Eine Implementierung in Pascal wird angegeben.",
"title": ""
},
{
"docid": "acd4de9f6324cc9d3fd9560094c71542",
"text": "Similarity search is one of the fundamental problems for large scale multimedia applications. Hashing techniques, as one popular strategy, have been intensively investigated owing to the speed and memory efficiency. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised methods learn hashing function by treating each training example equally while ignoring the different semantic degree related to the label, i.e. semantic confidence, of different examples. In this paper, we propose a novel semi-supervised hashing framework by leveraging semantic confidence. Specifically, a confidence factor is first assigned to each example by neighbor voting and click count in the scenarios with label and click-through data, respectively. Then, the factor is incorporated into the pairwise and triplet relationship learning for hashing. Furthermore, the two learnt relationships are seamlessly encoded into semi-supervised hashing methods with pairwise and listwise supervision respectively, which are formulated as minimizing empirical error on the labeled data while maximizing the variance of hash bits or minimizing quantization loss over both the labeled and unlabeled data. In addition, the kernelized variant of semi-supervised hashing is also presented. We have conducted experiments on both CIFAR-10 (with label) and Clickture (with click data) image benchmarks (up to one million image examples), demonstrating that our approaches outperform the state-of-the-art hashing techniques.",
"title": ""
},
{
"docid": "674da28b87322e7dfc7aad135d44ae55",
"text": "As the technology migrates into the deep submicron manufacturing(DSM) era, the critical dimension of the circuits is getting smaller than the lithographic wavelength. The unavoidable light diffraction phenomena in the sub-wavelength technologies have become one of the major factors in the yield rate. Optical proximity correction (OPC) is one of the methods adopted to compensate for the light diffraction effect as a post layout process.However, the process is time-consuming and the results are still limited by the original layout quality. In this paper, we propose a maze routing method that considers the optical effect in the routing algorithm. By utilizing the symmetrical property of the optical system, the light diffraction is efficiently calculated and stored in tables. The costs that guide the router to minimize the optical interferences are obtained from these look-up tables. The problem is first formulated as a constrained maze routing problem, then it is shown to be a multiple constrained shortest path problem. Based on the Lagrangian relaxation method, an effective algorithm is designed to solve the problem.",
"title": ""
},
{
"docid": "d80d52806cbbdd6148e3db094eabeed7",
"text": "We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illumination from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting `scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. We obtained surprisingly good estimates of he ambient illumination lighting from the network even when applied to scenes in our lab that were completely unrelated to the training data.",
"title": ""
}
] |
scidocsrr
|
6881dbb4cb0a85e70b8af77b7e59cdd0
|
Prototyping nfv-based multi-access edge computing in 5G ready networks with open baton
|
[
{
"docid": "29d02d7219cb4911ab59681e0c70a903",
"text": "As the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review on the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well.",
"title": ""
},
{
"docid": "fc9babe40365e5dc943fccf088f7a44f",
"text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.",
"title": ""
},
{
"docid": "802de1032f66e3e10a712fadb07ef432",
"text": "In this article, we provided a tutorial on MEC technology and an overview of the MEC framework and architecture recently defined by the ETSI MEC ISG standardization group. We described some examples of MEC deployment, with special reference to IoT uses since the IoT is recognized as a main driver for 5G. After having also discussed benefits and challenges for MEC toward 5G, we can say that MEC has definitely a window of opportunity to contribute to the creation of a common layer of integration for the IoT world. One of the main questions still open is: How will this technology coexist with LTE advanced pro and the future 5G network? For this aspect, we foresee the need for very strong cooperation between 3GPP and ETSI (e.g., NFV and possibly other SDOs) to avoid unnecessary duplication in the standard. In this sense, MEC could pave the way and be natively integrated in the network of tomorrow.",
"title": ""
}
] |
[
{
"docid": "464e2798a866449532f2d8e72575ac9d",
"text": "Fake news has become a hotly debated topic in journalism. In this paper, we present our entry to the 2017 Fake News Challenge which models the detection of fake news as a stance classification task that finished in 11th place on the leader board. Our entry is an ensemble system of classifiers developed by students in the context of their coursework. We show how we used the stacking ensemble method for this purpose and obtained improvements in classification accuracy exceeding each of the individual models’ performance on the development data. Finally, we discuss aspects of the experimental setup of the challenge.",
"title": ""
},
{
"docid": "696f4ba578134d699658b6c303adb4f6",
"text": "This paper is concerned with the event-triggered finite-time control scheme for unicycle robots. First, Lagrange method is used to model the unicycle robot at the roll and pitch axis. Second, on the basis of the established model, an event-triggered finite-time control scheme is proposed to balance the unicycle robot in finite time and to determine whether or not control input should be updated. The control input should be only updated when the triggering condition is violated. As a result, the switching energy of actor can be saved. Third, a stability criterion on unicycle robots with the proposed event-trigged finite-time control scheme is derived by using a Lyapunov method. Finally, the effectiveness of the event-triggered finite-time control scheme is illustrated for unicycle robots.",
"title": ""
},
{
"docid": "3258be27b22be228d2eae17c91a20664",
"text": "In any non-deterministic environment, unexpected events can indicate true changes in the world (and require behavioural adaptation) or reflect chance occurrence (and must be discounted). Adaptive behaviour requires distinguishing these possibilities. We investigated how humans achieve this by integrating high-level information from instruction and experience. In a series of EEG experiments, instructions modulated the perceived informativeness of feedback: Participants performed a novel probabilistic reinforcement learning task, receiving instructions about reliability of feedback or volatility of the environment. Importantly, our designs de-confound informativeness from surprise, which typically co-vary. Behavioural results indicate that participants used instructions to adapt their behaviour faster to changes in the environment when instructions indicated that negative feedback was more informative, even if it was simultaneously less surprising. This study is the first to show that neural markers of feedback anticipation (stimulus-preceding negativity) and of feedback processing (feedback-related negativity; FRN) reflect informativeness of unexpected feedback. Meanwhile, changes in P3 amplitude indicated imminent adjustments in behaviour. Collectively, our findings provide new evidence that high-level information interacts with experience-driven learning in a flexible manner, enabling human learners to make informed decisions about whether to persevere or explore new options, a pivotal ability in our complex environment.",
"title": ""
},
{
"docid": "8c0c7d6554f21b4cb5e155cf1e33a165",
"text": "Despite progress, early childhood development (ECD) remains a neglected issue, particularly in resource-poor countries. We analyse the challenges and opportunities that ECD proponents face in advancing global priority for the issue. We triangulated among several data sources, including 19 semi-structured interviews with individuals involved in global ECD leadership, practice, and advocacy, as well as peer-reviewed research, organisation reports, and grey literature. We undertook a thematic analysis of the collected data, drawing on social science scholarship on collective action and a policy framework that elucidates why some global initiatives are more successful in generating political priority than others. The analysis indicates that the ECD community faces two primary challenges in advancing global political priority. The first pertains to framing: generation of internal consensus on the definition of the problem and solutions, agreement that could facilitate the discovery of a public positioning of the issue that could generate political support. The second concerns governance: building of effective institutions to achieve collective goals. However, there are multiple opportunities to advance political priority for ECD, including an increasingly favourable political environment, advances in ECD metrics, and the existence of compelling arguments for investment in ECD. To advance global priority for ECD, proponents will need to surmount the framing and governance challenges and leverage these opportunities.",
"title": ""
},
{
"docid": "c76b9790d8015dd330d927f0e5ee45e5",
"text": "The fast-moving evolution of wireless networks, which started less than three decades ago, has resulted in worldwide connectivity and influenced the development of a global market in all related areas. However, in recent years, the growing user traffic demands have led to the saturation of licensed and unlicensed frequency bands regarding capacity and load-over-time. On the physical layer the used spectrum efficiency is already close to Shannon’s limit; however the traffic demand continues to grow, forcing mobile network operators and equipment manufacturers to evaluate more effective strategies of the wireless medium access. One of these strategies, called cell densification, implies there are a growing number of serving entities, with the appropriate reduction of the per-cell coverage area. However, if implemented blindly, this approach will lead to a significant growth in the average interference level and overhead control signaling, which are both required to allow sufficient user mobility. Furthermore, the interference is also affected by the increasing variety of radio access technologies (RATs) and applications, often deployed without the necessary level of cooperation with technologies that are already in place. To overcome these problems today’s telecommunication standardization groups are trying to collaborate. That is why the recent agenda of the fifth generation wireless networks (5G) includes not only the development schedules for the particular technologies but also implies there should be an expansion of the appropriate interconnection techniques. In this thesis, we describe and evaluate the concept of heterogeneous networks (HetNets), which involve the cooperation between several RATs. In the introductory part, we discuss the set of the problems, related to HetNets, and review the HetNet development process. Moreover, we show the evolution of existing and potential segments of the multi-RAT 5G network, together with the most promising applications, which could be used in future HetNets. Further, in the thesis, we describe the set of key representative scenarios, including three-tier WiFi-LTE multi-RAT deployment, MTC-enabled LTE, and the mmWave-based network. For each of these scenarios, we define a set of unsolved issues and appropriate solutions. For the WiFi-LTE multi-RAT scenario, we develop the framework, enabling intelligent and flexible resource allocation between the involved RATs. For MTC-enabled LTE, we study the effect of massive MTC deployments on the performance of LTE random access procedure and propose some basic methods to improve its efficiency. Finally, for the mmWave scenario, we study the effects of connectivity strategies, human body blockage and antenna array configuration on the overall network performance. Next, we develop a set of validated analytical and simulation-based techniques which allow us to evaluate the performance of proposed solutions. At the end of the introductory part a set of HetNet-related demo activities is demonstrated.",
"title": ""
},
{
"docid": "3cc6f2f69faa765b194c0e16049a0318",
"text": "Convolutional neural networks rely on image texture and structure to serve as discriminative features to classify the image content. Image enhancement techniques can be used as preprocessing steps to help improve the overall image quality and in turn improve the overall effectiveness of a CNN. Existing image enhancement methods, however, are designed to improve the perceptual quality of an image for a human observer. In this paper, we are interested in learning CNNs that can emulate image enhancement and restoration, but with the overall goal to improve image classification and not necessarily human perception. To this end, we present a unified CNN architecture that uses a range of enhancement filters that can enhance image-specific details via end-to-end dynamic filter learning. We demonstrate the effectiveness of this strategy on four challenging benchmark datasets for fine-grained, object, scene, and texture classification: CUB-200-2011, PASCAL-VOC2007, MIT-Indoor, and DTD. Experiments using our proposed enhancement show promising results on all the datasets. In addition, our approach is capable of improving the performance of all generic CNN architectures.",
"title": ""
},
{
"docid": "fe517545fc4dcc7bde881b7c96e66ecc",
"text": "Smoothness is characteristic of coordinated human movements, and stroke patients' movements seem to grow more smooth with recovery. We used a robotic therapy device to analyze five different measures of movement smoothness in the hemiparetic arm of 31 patients recovering from stroke. Four of the five metrics showed general increases in smoothness for the entire patient population. However, according to the fifth metric, the movements of patients with recent stroke grew less smooth over the course of therapy. This pattern was reproduced in a computer simulation of recovery based on submovement blending, suggesting that progressive blending of submovements underlies stroke recovery.",
"title": ""
},
{
"docid": "49108ff6bdebfef7295d4dc3681897e8",
"text": "Recognition of materials has proven to be a challenging problem due to the wide variation in appearance within and between categories. Global image context, such as where the material is or what object it makes up, can be crucial to recognizing the material. Existing methods, however, operate on an implicit fusion of materials and context by using large receptive fields as input (i.e., large image patches). Many recent material recognition methods treat materials as yet another set of labels like objects. Materials are, however, fundamentally different from objects as they have no inherent shape or defined spatial extent. Approaches that ignore this can only take advantage of limited implicit context as it appears during training. We instead show that recognizing materials purely from their local appearance and integrating separately recognized global contextual cues including objects and places leads to superior dense, per-pixel, material recognition. We achieve this by training a fully-convolutional material recognition network end-toend with only material category supervision. We integrate object and place estimates to this network from independent CNNs. This approach avoids the necessity of preparing an impractically-large amount of training data to cover the product space of materials, objects, and scenes, while fully leveraging contextual cues for dense material recognition. Furthermore, we perform a detailed analysis of the effects of context granularity, spatial resolution, and the network level at which we introduce context. On a recently introduced comprehensive and diverse material database [14], we confirm that our method achieves state-of-the-art accuracy with significantly less training data compared to past methods.",
"title": ""
},
{
"docid": "47a1db2dd3367a7ed2c7318911eb833a",
"text": "Scale of data and scale of computation infrastructures together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus on employing the system approach to speed up large-scale training. Taking both the algorithmic and system aspects into consideration, we develop a procedure for setting mini-batch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines help effectively speed up large-scale deep learning training.",
"title": ""
},
{
"docid": "d1e8107752dffbf8c47a45fd4ba5a403",
"text": "Emotional intelligence is an individual’s ability to perceive accurately, evaluate and express emotions. One of the instruments to measure emotional intelligence is the Wong and Law Emotional Intelligence Scale (WLEIS) which consist of four dimensions namely self-emotional appraisal, others’ emotional appraisal, regulation of emotion and use of emotion. The main aim of this research was to evaluate the psychometric properties of the Wong and Law Emotional Intelligence Scale (WLEIS). This was a survey research using a set of questionnaires. A total of 150 newly appointed administrative officers who were undergoing a compulsory course participated in this study. The instruments used were the Wong and Law Emotional Intelligence Scale (WLEIS), Organisational Commitment Questionnaire and the Satisfaction with Life Scale (SWLS). In evaluating the reliability of WLEIS, alpha Cronbach and split half methods were used. In addition, criterion and construct validity methods were used to test its validity. Results obtained showed that the Bahasa Malaysia version of the WLEIS was valid and using principal component analysis with varimax rotation method, four components were extracted with 75.1% variance. The WLEIS also showed good criterion validity from the significant correlations with the criteria of organizational commitment and satisfaction with life. Furthermore, the results of reliability were satisfactory with alpha Cronbach ranging from 0.83 to 0.92 for all the dimensions. Results of split half reliability also showed the instrument was reliable with the coefficient ranging from 0.81 to 0.95.",
"title": ""
},
{
"docid": "e05f857b063275500cf54d4596c646d4",
"text": "This paper is a contribution to the electric modeling of electrochemical cells. Specifically, cells for a new copper electrowinning process, which uses bipolar electrodes, are studied. Electrowinning is used together with solvent extraction and has gained great importance, due to its significant cost and environmental advantages, as compared to other copper reduction methods. Current electrowinning cells use unipolar electrodes connected electrically in parallel. Instead, bipolar electrodes, are connected in series. They are also called floating, because they are not wire-connected, but just immersed in the electrolyte. The main advantage of this technology is that, for the same copper production, a cell requires a much lower DC current, as compared with the unipolar case. This allows the cell to be supplied from a modular and compact PWM rectifier instead of a bulk high current thyristor rectifier, having a significant economic impact. In order to study the quality of the copper, finite difference algorithms in two dimensions are derived to obtain the distribution of the potential and the electric field inside the cell. Different geometrical configurations of cell and floating electrodes are analyzed. The proposed method is a useful tool for analysis and design of electrowinning cells, reducing the time-consuming laboratory implementations.",
"title": ""
},
{
"docid": "9b2cd501685570f1d27394372cce0103",
"text": "We present a transceiver chipset consisting of a four channel receiver (Rx) and a single-channel transmitter (Tx) designed in a 200-GHz SiGe BiCMOS technology. Each Rx channel has a conversion gain of 19 dB with a typical single sideband noise figure of 10 dB at 1-MHz offset. The Tx includes two exclusively-enabled voltage-controlled oscillators on the same die to switch between two bands at 76-77 and 77-81 GHz. The phase noise is -97 dBc/Hz at 1-MHz offset. On-wafer, the output power is 2 × 13 dBm. At 3.3-V supply, the Rx chip draws 240 mA, while the Tx draws 530 mA. The power dissipation for the complete chipset is 2.5 W. The two chips are used as vehicles for a 77-GHz package test. The chips are packaged using the redistribution chip package technology. We compare on-wafer measurements with on-board results. The loss at the RF port due to the transition in the package results to be less than 1 dB at 77 GHz. The results demonstrate an excellent potential of the presented millimeter-wave package concept for millimeter-wave applications.",
"title": ""
},
{
"docid": "cd6355ca627777997190c6a7a1d18762",
"text": "The main contribution of this paper is the description, design and experimental testing of a new brushless synchronous generator with no permanent magnets. Such an alternator is particularly attractive for automotive applications where brushes create liabilities such as reliability, limited lifetime and low field current through brushes. Due to the increasing power demand, there has been a growing interest in new alternator technologies that can replace the Lundell (or claw-pole) alternator. The alternator presented here has the potential to overcome the efficiency and power rating limits associated with the Lundell alternator without added manufacturing cost. This paper presents the design, control and electrical performance of such an alternator including analysis and experimental results.",
"title": ""
},
{
"docid": "7cc362ec57b9b4a8f0e5d9beaf0ed02f",
"text": "Conclusions Trading Framework Deep Learning has become a robust machine learning tool in recent years, and models based on deep learning has been applied to various fields. However, applications of deep learning in the field of computational finance are still limited[1]. In our project, Long Short Term Memory (LSTM) Networks, a time series version of Deep Neural Networks model, is trained on the stock data in order to forecast the next day‘s stock price of Intel Corporation (NASDAQ: INTC): our model predicts next day’s adjusted closing price based on information/features available until the present day. Based on the predicted price, we trade the Intel stock according to the strategy that we developed, which is described below. Locally Weighted Regression has also been performed in lieu of the unsupervised learning model for comparison.",
"title": ""
},
{
"docid": "0f10bb2afc1797fad603d8c571058ecb",
"text": "This paper presents findings from the All Wales Hate Crime Project. Most hate crime research has focused on discrete victim types in isolation. For the first time, internationally, this paper examines the psychological and physical impacts of hate crime across seven victim types drawing on quantitative and qualitative data. It contributes to the hate crime debate in two significant ways: (1) it provides the first look at the problem in Wales and (2) it provides the first multi-victim-type analysis of hate crime, showing that impacts are not homogenous across victim groups. The paper provides empirical credibility to the impacts felt by hate crime victims on the margins who have routinely struggled to gain support.",
"title": ""
},
{
"docid": "ecad03ca039000bdefe2ef70d5b65ec1",
"text": "BACKGROUND\nThe effectiveness of complex interventions, as well as their success in reaching relevant populations, is critically influenced by their implementation in a given context. Current conceptual frameworks often fail to address context and implementation in an integrated way and, where addressed, they tend to focus on organisational context and are mostly concerned with specific health fields. Our objective was to develop a framework to facilitate the structured and comprehensive conceptualisation and assessment of context and implementation of complex interventions.\n\n\nMETHODS\nThe Context and Implementation of Complex Interventions (CICI) framework was developed in an iterative manner and underwent extensive application. An initial framework based on a scoping review was tested in rapid assessments, revealing inconsistencies with respect to the underlying concepts. Thus, pragmatic utility concept analysis was undertaken to advance the concepts of context and implementation. Based on these findings, the framework was revised and applied in several systematic reviews, one health technology assessment (HTA) and one applicability assessment of very different complex interventions. Lessons learnt from these applications and from peer review were incorporated, resulting in the CICI framework.\n\n\nRESULTS\nThe CICI framework comprises three dimensions-context, implementation and setting-which interact with one another and with the intervention dimension. Context comprises seven domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, ethical, legal, political); implementation consists of five domains (i.e., implementation theory, process, strategies, agents and outcomes); setting refers to the specific physical location, in which the intervention is put into practise. The intervention and the way it is implemented in a given setting and context can occur on a micro, meso and macro level. Tools to operationalise the framework comprise a checklist, data extraction tools for qualitative and quantitative reviews and a consultation guide for applicability assessments.\n\n\nCONCLUSIONS\nThe CICI framework addresses and graphically presents context, implementation and setting in an integrated way. It aims at simplifying and structuring complexity in order to advance our understanding of whether and how interventions work. The framework can be applied in systematic reviews and HTA as well as primary research and facilitate communication among teams of researchers and with various stakeholders.",
"title": ""
},
{
"docid": "68810ad35e71ea7d080e7433e227e40e",
"text": "Mobile devices, ubiquitous in modern lifestyle, embody and provide convenient access to our digital lives. Being small and mobile, they are easily lost or stole, therefore require strong authentication to mitigate the risk of unauthorized access. Common knowledge-based mechanism like PIN or pattern, however, fail to scale with the high frequency but short duration of device interactions and ever increasing number of mobile devices carried simultaneously. To overcome these limitations, we present CORMORANT, an extensible framework for risk-aware multi-modal biometric authentication across multiple mobile devices that offers increased security and requires less user interaction.",
"title": ""
},
{
"docid": "306d5ba9eb3c9391eff7fac4e4c814ff",
"text": "Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people's activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discussed a series of considerations and future trends with regard to the construction of \"smart clothing\" system.",
"title": ""
},
{
"docid": "2559eeb2a4f2f58f82d215de134f32be",
"text": "We propose FCLT – a fully-correlational long-term tracker. The two main components of FCLT are a shortterm tracker which localizes the target in each frame and a detector which re-detects the target when it is lost. Both the short-term tracker and the detector are based on correlation filters. The detector exploits properties of the recent constrained filter learning and is able to re-detect the target in the whole image efficiently. A failure detection mechanism based on correlation response quality is proposed. The FCLT is tested on recent short-term and long-term benchmarks. It achieves state-of-the-art results on the short-term benchmarks and it outperforms the current best-performing tracker on the long-term benchmark by over 18%.",
"title": ""
},
{
"docid": "3cb2bfb076e9c21526ec82c43188def5",
"text": "Voice is projected to be the next input interface for portable devices. The increased use of audio interfaces can be mainly attributed to the success of speech and speaker recognition technologies. With these advances comes the risk of criminal threats where attackers are reportedly trying to access sensitive information using diverse voice spoofing techniques. Among them, replay attacks pose a real challenge to voice biometrics. This paper addresses the problem by proposing a deep learning architecture in tandem with low-level cepstral features. We investigate the use of a deep neural network (DNN) to discriminate between the different channel conditions available in the ASVSpoof 2017 dataset, namely recording, playback and session conditions. The high-level feature vectors derived from this network are used to discriminate between genuine and spoofed audio. Two kinds of low-level features are utilized: state-ofthe-art constant-Q cepstral coefficients (CQCC), and our proposed high-frequency cepstral coefficients (HFCC) that derive from the high-frequency spectrum of the audio. The fusion of both features proved to be effective in generalizing well across diverse replay attacks seen in the evaluation of the ASVSpoof 2017 challenge, with an equal error rate of 11.5%, that is 53% better than the baseline Gaussian Mixture Model (GMM) applied on CQCC.",
"title": ""
}
] |
scidocsrr
|
24d3cd7173712d836ffeebb8d32e8c99
|
Product Barcode and Expiry Date Detection for the Visually Impaired Using a Smartphone
|
[
{
"docid": "e8f33b4e500d8299aa803e72298d52ab",
"text": "While there are many barcode readers available for identifying products in a supermarket or at home on mobile phones (e.g., Red Laser iPhone app), such readers are inaccessible to blind or visually impaired persons because of their reliance on visual feedback from the user to center the barcode in the camera's field of view. We describe a mobile phone application that guides a visually impaired user to the barcode on a package in real-time using the phone's built-in video camera. Once the barcode is located by the system, the user is prompted with audio signals to bring the camera closer to the barcode until it can be resolved by the camera, which is then decoded and the corresponding product information read aloud using text-to-speech. Experiments with a blind volunteer demonstrate proof of concept of our system, which allowed the volunteer to locate barcodes which were then translated to product information that was announced to the user. We successfully tested a series of common products, as well as user-generated barcodes labeling household items that may not come with barcodes.",
"title": ""
}
] |
[
{
"docid": "e33080761e4ece057f455148c7329d5e",
"text": "This paper compares the utilization of ConceptNet and WordNet in query expansion. Spreading activation selects candidate terms for query expansion from these two resources. Three measures including discrimination ability, concept diversity, and retrieval performance are used for comparisons. The topics and document collections in the ad hoc track of TREC-6, TREC-7 and TREC-8 are adopted in the experiments. The results show that ConceptNet and WordNet are complementary. Queries expanded with WordNet have higher discrimination ability. In contrast, queries expanded with ConceptNet have higher concept diversity. The performance of queries expanded by selecting the candidate terms from ConceptNet and WordNet outperforms that of queries without expansion, and queries expanded with a single resource.",
"title": ""
},
{
"docid": "40e06996a22e1de4220a09e65ac1a04d",
"text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.",
"title": ""
},
{
"docid": "405cd35764b8ae0b380e85a58a9714bf",
"text": "This work is aimed at modeling, designing and developing an egg incubator system that is able to incubate various types of egg within the temperature range of 35 – 40 0 C. This system uses temperature and humidity sensors that can measure the condition of the incubator and automatically change to the suitable condition for the egg. Extreme variations in incubation temperature affect the embryo and ultimately, post hatch performance. In this work, electric bulbs were used to give the suitable temperature to the egg whereas water and controlling fan were used to ensure that humidity and ventilation were in good condition. LCD is used to display status condition of the incubator and an interface (Keypad) is provided to key in the appropriate temperature range for the egg. To ensure that all part of the eggs was heated by the lamp, DC motor was used to rotate iron rod at the bottom side and automatically change position of the egg. The entire element is controlled using AT89C52 Microcontroller. The temperature of the incubator is maintained at the normal temperature using PID controller implemented in microcontroller. Mathematical model of the incubator, actuator and PID controller were developed. Controller design based on the models was developed using Matlab Simulink. The models were validated through simulation and the Zeigler-Nichol tuning method was adopted as the tuning technique for varying the temperature control parameters of the PID controller in order to achieve a desirable transient response of the system when subjected to a unit step input. After several assumptions and simulations, a set of optimal parameters were obtained at the result of the third test that exhibited a commendable improvement in the overshoot, rise time, peak time and settling time thus improving the robustness and stability of the system. Keyword: Egg Incubator System, AT89C52 Microcontroller, PID Controller, Temperature Sensor.",
"title": ""
},
{
"docid": "c89b94565b7071420017deae01295e23",
"text": "Using cross-sectional data from three waves of the Youth Tobacco Policy Study, which examines the impact of the UK's Tobacco Advertising and Promotion Act (TAPA) on adolescent smoking behaviour, we examined normative pathways between tobacco marketing awareness and smoking intentions. The sample comprised 1121 adolescents in Wave 2 (pre-ban), 1123 in Wave 3 (mid-ban) and 1159 in Wave 4 (post-ban). Structural equation modelling was used to assess the direct effect of tobacco advertising and promotion on intentions at each wave, and also the indirect effect, mediated through normative influences. Pre-ban, higher levels of awareness of advertising and promotion were independently associated with higher levels of perceived sibling approval which, in turn, was positively related to intentions. Independent paths from perceived prevalence and benefits fully mediated the effects of advertising and promotion awareness on intentions mid- and post-ban. Advertising awareness indirectly affected intentions via the interaction between perceived prevalence and benefits pre-ban, whereas the indirect effect on intentions of advertising and promotion awareness was mediated by the interaction of perceived prevalence and benefits mid-ban. Our findings indicate that policy measures such as the TAPA can significantly reduce adolescents' smoking intentions by signifying smoking to be less normative and socially unacceptable.",
"title": ""
},
{
"docid": "d805dc116db48b644b18e409dda3976e",
"text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.",
"title": ""
},
{
"docid": "67067043e630f3ef5d466c66a88b72ab",
"text": "This paper reports an LC-based digitally controlled oscillator (DCO) using novel varactor pairs. Proposed DCO has high frequency resolution with low phase noise in 5.9 GHz. The DCO exploits the difference between the accumulation region capacitance and inversion region capacitance of two PMOS varactors. The novel varactor pairs make much smaller switchable capacitance than those of other approaches, and hence the DCO achieves the high frequency resolution and low phase noise. Also, identical sizes of PMOS varactor make them robust from process variation. The DCO implemented in 0.18 um CMOS process operates from 5.7 GHz to 6.3 GHz with 14 kHz frequency resolution which indicates the unit switchable capacitance of 3.5 aF. The designed DCO achieves a low phase-noise of −117 dBc/Hz at 1 MHz offset.",
"title": ""
},
{
"docid": "c58d0f8105b1b8a439b90fd1d366a87c",
"text": "Let F be a totally real field and χ an abelian totally odd character of F . In 1988, Gross stated a p-adic analogue of Stark’s conjecture that relates the value of the derivative of the p-adic L-function associated to χ and the p-adic logarithm of a p-unit in the extension of F cut out by χ. In this paper we prove Gross’s conjecture when F is a real quadratic field and χ is a narrow ring class character. The main result also applies to general totally real fields for which Leopoldt’s conjecture holds, assuming that either there are at least two primes above p in F , or that a certain condition relating the L invariants of χ and χ−1 holds. This condition on L -invariants is always satisfied when χ is quadratic.",
"title": ""
},
{
"docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7",
"text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.",
"title": ""
},
{
"docid": "3668b5394b68a6dfc82951121ebdda8d",
"text": "Now a day the usage of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. Various techniques like classification, clustering and apriori of web mining will be integrated to represent the sequence of operations in credit card transaction processing and show how it can be used for the detection of frauds. Initially, web mining techniques trained with the normal behaviour of a cardholder. If an incoming credit card transaction is not accepted by the web mining model with sufficiently high probability, it is considered to be fraudulent. At the same time, the system will try to ensure that genuine transactions will not be rejected. Using data from a credit card issuer, a web mining model based fraud detection system will be trained on a large sample of labelled credit card account transactions and tested on a holdout data set that consisted of all account activity. Web mining techniques can be trained on examples of fraud due to lost cards, stolen cards, application fraud, counterfeit fraud, and mail-order fraud. The proposed system will be able to detect frauds by considering a cardholder‟s spending habit without its significance. Usually, the details of items purchased in individual transactions are not known to any Fraud Detection System. The proposed system will be an ideal choice for addressing this problem of current fraud detection system. Another important advantage of proposed system will be a drastic reduction in the number of False Positives transactions. FDS module of proposed system will receive the card details and the value of purchase to verify, whether the transaction is genuine or not. If the Fraud Detection System module will confirm the transaction to be of fraud, it will raise an alarm, and the transaction will be declined.",
"title": ""
},
{
"docid": "f9b7965888e180c6b07764dae8433a9d",
"text": "Job recommender systems are designed to suggest a ranked list of jobs that could be associated with employee's interest. Most of existing systems use only one approach to make recommendation for all employees, while a specific method normally is good enough for a group of employees. Therefore, this study proposes an adaptive solution to make job recommendation for different groups of user. The proposed methods are based on employee clustering. Firstly, we group employees into different clusters. Then, we select a suitable method for each user cluster based on empirical evaluation. The proposed methods include CB-Plus, CF-jFilter and HyR-jFilter have applied for different three clusters. Empirical results show that our proposed methods is outperformed than traditional methods.",
"title": ""
},
{
"docid": "08ab7142ae035c3594d3f3ae339d3e27",
"text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.",
"title": ""
},
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
},
{
"docid": "1e934aef7999b592971b393e40395994",
"text": "Over recent years, as the popularity of mobile phone devices has increased, Short Message Service (SMS) has grown into a multi-billion dollars industry. At the same time, reduction in the cost of messaging services has resulted in growth in unsolicited commercial advertisements (spams) being sent to mobile phones. In parts of Asia, up to 30% of text messages were spam in 2012. Lack of real databases for SMS spams, short length of messages and limited features, and their informal language are the factors that may cause the established email filtering algorithms to underperform in their classification. In this project, a database of real SMS Spams from UCI Machine Learning repository is used, and after preprocessing and feature extraction, different machine learning techniques are applied to the database. Finally, the results are compared and the best algorithm for spam filtering for text messaging is introduced. Final simulation results using 10-fold cross validation shows the best classifier in this work reduces the overall error rate of best model in original paper citing this dataset by more than half.",
"title": ""
},
{
"docid": "10e24047026cc4a062b08fc28468bbff",
"text": "This comparative analysis of teacher-student interaction in two different instructional settings at the elementary-school level (18.3 hr in French immersion and 14.8 hr Japanese immersion) investigates the immediate effects of explicit correction, recasts, and prompts on learner uptake and repair. The results clearly show a predominant provision of recasts over prompts and explicit correction, regardless of instructional setting, but distinctively varied student uptake and repair patterns in relation to feedback type, with the largest proportion of repair resulting from prompts in French immersion and from recasts in Japanese immersion. Based on these findings and supported by an analysis of each instructional setting’s overall communicative orientation, we introduce the counterbalance hypothesis, which states that instructional activities and interactional feedback that act as a counterbalance to a classroom’s predominant communicative orientation are likely to prove more effective than instructional activities and interactional feedback that are congruent with its predominant communicative orientation.",
"title": ""
},
{
"docid": "0e8cde83260d6ca4d8b3099628c25fc2",
"text": "1Department of Molecular Virology, Immunology and Medical Genetics, The Ohio State University Medical Center, Columbus, Ohio, USA. 2Department of Physics, Pohang University of Science and Technology, Pohang, Korea. 3School of Interdisciplinary Bioscience and Bioengineering, Pohang, Korea. 4Physics Department, The Ohio State University, Columbus, Ohio, USA. 5These authors contributed equally to this work. e-mail: fishel.7@osu.edu",
"title": ""
},
{
"docid": "7e5cd1252d95bb095e7fabd54211fc38",
"text": "Interorganizational information systems, i.e., systems spanning more than a single organization, are proliferating as companies become aware of the potential of these systems to affect interorganizational interactions in terms of economic efficiency and strategic conduct. This new technology can have far-reaching impacts on the structure of entire industries. This article identifies two types of interorganizational information systems, information links and electronic markets. It then explores how economic models can be employed to study the implications of information links for the coordination of individual organizations with their customers and their suppliers, and the implications of electronic market systems for efficiency and competition in vertical markets. Finally, the strategic significance of interorganizational systems is addressed, and certain potential long-term impacts on the structure of markets, industries and organizations are discussed. This research was supported in part with funding from an Irvine Faculty Research Fellowship and from the National Science Foundation (Grant Number IRI-9015497). The author is grateful to the three anonymous referees for their valuable comments during the review process.",
"title": ""
},
{
"docid": "c1fc1a31d9f5033a7469796d1222aef3",
"text": "Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static camera. This information is usually provided by motor encoders, however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy that is comparable to a standard static multi-camera configuration.",
"title": ""
},
{
"docid": "924146534d348e7a44970b1d78c97e9c",
"text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.",
"title": ""
},
{
"docid": "bba99d325be71a13de31a1c70447e530",
"text": "Search engine researchers typically depict search as the solitary activity of an individual searcher. In contrast, results from our critical-incident survey of 150 users on Amazon's Mechanical Turk service suggest that social interactions play an important role throughout the search process. Our main contribution is that we have integrated models from previous work in sensemaking and information seeking behavior to present a canonical social model of user activities before, during, and after search, suggesting where in the search process both explicitly and implicitly shared information may be valuable to individual searchers.",
"title": ""
},
{
"docid": "33465b87cdc917904d16eb9d6cb8fece",
"text": "An audio fingerprint is a compact content-based signature that summarizes an audio recording. Audio Fingerprinting technologies have attracted attention since they allow the identification of audio independently of its format and without the need of meta-data or watermark embedding. Other uses of fingerprinting include: integrity verification, watermark support and content-based audio retrieval. The different approaches to fingerprinting have been described with different rationales and terminology: Pattern matching, Multimedia (Music) Information Retrieval or Cryptography (Robust Hashing). In this paper, we review different techniques describing its functional blocks as parts of a common, unified framework.",
"title": ""
}
] |
scidocsrr
|
96da77dde8995da998b75c92797d1a0d
|
Robot-learning - Three case studies in robotics and machine learning
|
[
{
"docid": "a3f0e070fb1fecd686a92df8e0e97a36",
"text": "Given a set of observations, humans acquire concepts that organize those observations and use them in classifying future experiences. This type of concept formation can occur in the absence of a tutor and it can take place despite irrelevant and incomplete information. A reasonable model of such human concept learning should be both incremental and capable of handling the type of complex experiences that people encounter in the real world. In this paper, we review three previous models of incremental concept formation and then present CLASS1T, a model that extends these earlier systems. All of the models integrate the process of recognition and learning, and all can be viewed as carrying out search through the space of possible concept hierarchies. In an attempt to show that CLASSIT is a robust concept formation system, we also present some empirical studies of its behavior under a variety of conditions.",
"title": ""
}
] |
[
{
"docid": "f6fc0992624fd3b3e0ce7cc7fc411154",
"text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.",
"title": ""
},
{
"docid": "380d8a80d37eed2d3e70dcb016cbc498",
"text": "Using a single sensor to determine the pose estimation of a device cannot give accurate results. This paper presents a fusion of an inertial sensor of six degrees of freedom (6-DoF) which comprises the 3-axis of an accelerometer and the 3-axis of a gyroscope, and a vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular vision-based object detection algorithm speeded-up robust feature (SURF) and random sample consensus (RANSAC) algorithms were integrated and used to recognize a sample object in several images taken. As against the conventional method that depend on point-tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data which contains outliers. With SURF and RANSAC, improved accuracy is certain; this is because of their ability to find interest points (features) under different viewing conditions using a Hessain matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain an accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified by the ground truth data and root mean square errors (RMSEs).",
"title": ""
},
{
"docid": "64389907530dd26392e037f1ab2d1da5",
"text": "Most current license plate (LP) detection and recognition approaches are evaluated on a small and usually unrepresentative dataset since there are no publicly available large diverse datasets. In this paper, we introduce CCPD, a large and comprehensive LP dataset. All images are taken manually by workers of a roadside parking management company and are annotated carefully. To our best knowledge, CCPD is the largest publicly available LP dataset to date with over 250k unique car images, and the only one provides vertices location annotations. With CCPD, we present a novel network model which can predict the bounding box and recognize the corresponding LP number simultaneously with high speed and accuracy. Through comparative experiments, we demonstrate our model outperforms current object detection and recognition approaches in both accuracy and speed. In real-world applications, our model recognizes LP numbers directly from relatively high-resolution images at over 61 fps and 98.5% accuracy.",
"title": ""
},
{
"docid": "a87b84354b876fbdedf250157a864866",
"text": "Chronic swimming training and phytotherapeutic supplementation are assumed to alleviate oxidative damage, and support cell survival in the brain. The effect of forced, chronic swimming training, and enriched lab chow containing 1% (w/w) dried nettle (Urtica dioica) leaf were investigated for oxidative stress, inflammation and neurotrophic markers in Wistar rat brains. The rats were divided into groups subjected to swimming training (6 weeks) or to nettle supplementation (8 weeks) or to a combination of these two treatments. The level of oxidative stress was measured by electron spin resonance (EPR), and by the concentration of carbonylated proteins. Nettle supplementation resulted in a decreased concentration of free radicals in both cerebellum and frontal lobe. Swimming, however, did not influence significantly the oxidative damage nor was it reflected in the carbonyl content. The protein content of nerve growth factor (NGF), and brain-derived neurotrophic factors (BDNF) was evaluated by E-Max ImmunoAssay in the cerebellum. No changes occurred either with exercise or nettle diet treatments. On the other hand, nuclear factor kappa B (NF-kappaB) binding activity to DNA increased with the combined effect of swimming training and nettle diet, while the activator protein1 (AP-1) DNA binding activity showed a more profound elevation in the nettle treated animals. The amount of c-Jun decreased by swimming training. In conclusion, the results suggest that both exercise and nettle influenced physiological brain functions. Nettle supplementation reduces the free radical concentration and increases the DNA binding of AP-1 in the brain. Nettle was found to be an effective antioxidant and possible antiapoptotic supplement promoting cell survival in the brain. Exercise, as a downregulator of c-Jun and in combined group as an upregulator of NF-kappaB, may play also a role in antiapoptotic processes, which is important after brain injury.",
"title": ""
},
{
"docid": "6ddfb4631928eec4247adf2ac033129e",
"text": "Facial micro-expression recognition is an upcoming area in computer vision research. Up until the recent emergence of the extensive CASMEII spontaneous micro-expression database, there were numerous obstacles faced in the elicitation and labeling of data involving facial micro-expressions. In this paper, we propose the Local Binary Patterns with Six Intersection Points (LBP-SIP) volumetric descriptor based on the three intersecting lines crossing over the center point. The proposed LBP-SIP reduces the redundancy in LBP-TOP patterns, providing a more compact and lightweight representation; leading to more efficient computational complexity. Furthermore, we also incorporated a Gaussian multi-resolution pyramid to our proposed approach by concatenating the patterns across all pyramid levels. Using an SVM classifier with leave-one-sample-out cross validation, we achieve the best recognition accuracy of 67.21%, surpassing the baseline performance with further computational efficiency.",
"title": ""
},
{
"docid": "20adf89d9301cdaf64d8bf684886de92",
"text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially 3 lead the way to developing a suite of other network related spatial analysis and modeling methods.",
"title": ""
},
{
"docid": "493dbc81bd00914bb70d0f7b378c8d5c",
"text": "We propose an ultrathin metallic structure to produce frequency-selective spoof surface plasmon polaritons (SPPs) in the microwave and terahertz frequencies. Designed on a thin dielectric substrate, the ultrathin metallic structure is composed of two oppositely oriented single-side corrugated strips, which are coupled to two double-side corrugated strips. The structure is fed by a traditional coplanar waveguide (CPW). To make a smooth conversion between the spatial modes in CPW and SPP modes, two transition sections are also designed. We fabricate and measure the frequency-selective spoof SPP structure in microwave frequencies. The measurement results show that the reflection coefficient is less than -10 dB with the transmission loss around 1.5 dB in the selective frequency band from 7 to 10 GHz, which are in good agreements with numerical simulations. The proposed structure can be used as an SPP filter with good performance of low loss, high transmission, and wide bandwidth in the selective frequency band.",
"title": ""
},
{
"docid": "2fcf4c56da05a86f50b3e5d0c9f33c70",
"text": "The localization of human faces in digital images is a fundamental step in the process of face recognition. This paper presents a shape comparison approach to achieve fast, accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on grayscale still images. The Hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image. The paper describes an efficient implementation, making this approach suitable for real-time applications. A two-step process that allows both coarse detection and exact localization of faces is presented. Experiments were performed on a large test set base and rated with a new validation measurement. c © In Proc. Third International Conference on Audioand Video-based Biometric Person Authentication, Springer, Lecture Notes in Computer Science, LNCS-2091, pp. 90–95, Halmstad, Sweden, 6–8 June 2001.",
"title": ""
},
{
"docid": "9bc90b182e3acd0fd0cfa10a7abc32f8",
"text": "The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interests from both academia and industry. In this paper we study user app engagement patterns and disruptions of those patterns in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted through major political, social, and sports events.",
"title": ""
},
{
"docid": "2a2a9a8827142008dc50d3d72f017a9d",
"text": "Does the Great Fire Wall Cause Self-Censorship? The Effects of Perceived Internet Regulation and the Justification of Regulation Zhi-Jin Zhong, Tongchen Wang, Minting Huang, Article information: To cite this document: Zhi-Jin Zhong, Tongchen Wang, Minting Huang, \"Does the Great Fire Wall Cause Self-Censorship? The Effects of Perceived Internet Regulation and the Justification of Regulation\", Internet Research, https://doi.org/10.1108/IntR-07-2016-0204 Permanent link to this document: https://doi.org/10.1108/IntR-07-2016-0204",
"title": ""
},
{
"docid": "ac078f78fcf0f675c21a337f8e3b6f5f",
"text": "bstract. Plenoptic cameras, constructed with internal microlens rrays, capture both spatial and angular information, i.e., the full 4-D adiance, of a scene. The design of traditional plenoptic cameras ssumes that each microlens image is completely defocused with espect to the image created by the main camera lens. As a result, nly a single pixel in the final image is rendered from each microlens mage, resulting in disappointingly low resolution. A recently develped alternative approach based on the focused plenoptic camera ses the microlens array as an imaging system focused on the imge plane of the main camera lens. The flexible spatioangular tradeff that becomes available with this design enables rendering of final mages with significantly higher resolution than those from traditional lenoptic cameras. We analyze the focused plenoptic camera in ptical phase space and present basic, blended, and depth-based endering algorithms for producing high-quality, high-resolution imges. We also present our graphics-processing-unit-based impleentations of these algorithms, which are able to render full screen efocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712",
"title": ""
},
{
"docid": "549f8fe6d456a818c36976c7e47e4033",
"text": "Given the rapid proliferation of trajectory-based approaches to study clinical consequences to stress and potentially traumatic events (PTEs), there is a need to evaluate emerging findings. This review examined convergence/divergences across 54 studies in the nature and prevalence of response trajectories, and determined potential sources of bias to improve future research. Of the 67 cases that emerged from the 54 studies, the most consistently observed trajectories following PTEs were resilience (observed in: n = 63 cases), recovery (n = 49), chronic (n = 47), and delayed onset (n = 22). The resilience trajectory was the modal response across studies (average of 65.7% across populations, 95% CI [0.616, 0.698]), followed in prevalence by recovery (20.8% [0.162, 0.258]), chronicity (10.6%, [0.086, 0.127]), and delayed onset (8.9% [0.053, 0.133]). Sources of heterogeneity in estimates primarily resulted from substantive population differences rather than bias, which was observed when prospective data is lacking. Overall, prototypical trajectories have been identified across independent studies in relatively consistent proportions, with resilience being the modal response to adversity. Thus, trajectory models robustly identify clinically relevant patterns of response to potential trauma, and are important for studying determinants, consequences, and modifiers of course following potential trauma.",
"title": ""
},
{
"docid": "7df56d787a4eb94829b011e2cb65580b",
"text": "With the wide deployment of cloud computing in many business enterprises as well as science and engineering domains, high quality security services are increasingly critical for processing workflow applications with sensitive intermediate data. Unfortunately, most existing worklfow scheduling approaches disregard the security requirements of the intermediate data produced by workflows, and overlook the performance impact of encryption time of intermediate data on the start of subsequent workflow tasks. Furthermore, the idle time slots on resources, resulting from data dependencies among workflow tasks, have not been adequately exploited to mitigate the impact of data encryption time on workflows’ makespans and monetary cost. To address these issues, this paper presents a novel task-scheduling framework for security sensitive workflows with three novel features. First, we provide comprehensive theoretical analyses on how selectively duplicating a task’s predecessor tasks is helpful for preventing both the data transmission time and encryption time from delaying task’s start time. Then, we define workflow tasks’ latest finish time, and prove that tasks can be completed before tasks’ latest finish time by using cheapest resources to reduce monetary cost without delaying tasks’ successors’ start time and workflows’ makespans. Based on these analyses, we devise a novel scheduling appro ach with selective tasks duplication, named SOLID, incorporating two important phases: 1) task scheduling with selectively duplicating predecessor tasks to idle time slots on resources; and 2) intermediate data encrypting by effectively exploiting tasks’ laxity time. We evaluate our solution approach through rigorous performance evaluation study using both randomly generated workflows and some real-world workflow traces. Our results show that the proposed SOLID approach prevails over existing algorithms in terms of makespan, monetary costs and resource efficiency.",
"title": ""
},
{
"docid": "d01198e88f91a47a1777337d0db41939",
"text": "Ultra low quiescent, wide output current range low-dropout regulators (LDO) are in high demand in portable applications to extend battery lives. This paper presents a 500 nA quiescent, 0 to 100 mA load, 3.5–7 V input to 3 V output LDO in a digital 0.35 μm 2P3M CMOS technology. The challenges in designing with nano-ampere of quiescent current are discussed, namely the leakage, the parasitics, and the excessive DC gain. CMOS super source follower voltage buffer and input excessive gain reduction are then proposed. The LDO is internally compensated using Ahuja method with a minimum phase margin of 55° across all load conditions. The maximum transient voltage variation is less than 150 and 75 mV when used with 1 and 10 μF external capacitor. Compared with existing work, this LDO achieves the best transient flgure-of-merit with close to best dynamic current efficiency (maximum-to-quiescent current ratio).",
"title": ""
},
{
"docid": "2567835d4af183ff0d57c698cd7c0a39",
"text": "OBJECTIVE\nThis descriptive study explores motivation of toddlers who are typically developing to persist with challenging occupations.\n\n\nMETHOD\nThe persistence of 33 children, 12 to 19 months of age (M = 15.7 months), in functional play and self-feeding with a utensil was examined through videotape analysis of on-task behaviors.\n\n\nRESULTS\nA modest correlation was demonstrated between the percentages of on-task time in the two conditions (r = .44, p < .01). Although chronological age was not associated with persistence, participants' age-equivalent fine motor scores were correlated with persistence with challenging toys (r = .39, p < .03) but not with self-feeding with a utensil. Having an older sibling was associated with longer periods of functional play, t(32) = 3.02, p < .005, but the amount the parent urged the child to eat with a utensil was not associated with persistence in self-feeding.\n\n\nCONCLUSION\nThe modest association between on-task time for functional play and self-feeding with a utensil reveals that factors other than urge to meet perceptual motor challenges lead to children's persistence. The results reinforce the importance of considering not only challenging activities, but also the experienced meaning that elicits optimal effort and, thus, learning.",
"title": ""
},
{
"docid": "d8d99b9fdbe656accd430f9bc46736d4",
"text": "On the basis of previous research, the authors hypothesize that (a) person descriptive terms can be organized into the broad dimensions of agency and communion of which communion is the primary one; (b) the main distinction between these dimensions pertains to their profitability for the self (agency) vs. for other persons (communion); hence, agency is more desirable and important in the self-perspective, and communion is more desirable and important in the other-perspective; (c) self-other outcome dependency increases importance of another person's agency. Study 1 showed that a large number of trait names can be reduced to these broad dimensions, that communion comprises more item variance, and that agency is predicted by self-profitability and communion by other-profitability. Studies 2 and 3 showed that agency is more relevant and desired for self, and communion is more relevant and desired for others. Study 4 showed that agency is more important in a close friend than an unrelated peer, and this difference is completely mediated by the perceived outcome dependency.",
"title": ""
},
{
"docid": "3d2e47ed90e8ff4dec54e85e4996c961",
"text": "Open source software encourages innovation by allowing users to extend the functionality of existing applications. Treeview is a popular application for the visualization of microarray data, but is closed-source and platform-specific, which limits both its current utility and suitability as a platform for further development. Java Treeview is an open-source, cross-platform rewrite that handles very large datasets well, and supports extensions to the file format that allow the results of additional analysis to be visualized and compared. The combination of a general file format and open source makes Java Treeview an attractive choice for solving a class of visualization problems. An applet version is also available that can be used on any website with no special server-side setup.",
"title": ""
},
{
"docid": "9c1267f42c32f853db912a08eddb8972",
"text": "IBM's Physical Analytics Integrated Data Repository and Services (PAIRS) is a geospatial Big Data service. PAIRS contains a massive amount of curated geospatial (or more precisely spatio-temporal) data from a large number of public and private data resources, and also supports user contributed data layers. PAIRS offers an easy-to-use platform for both rapid assembly and retrieval of geospatial datasets or performing complex analytics, lowering time-to-discovery significantly by reducing the data curation and management burden. In this paper, we review recent progress with PAIRS and showcase a few exemplary analytical applications which the authors are able to build with relative ease leveraging this technology.",
"title": ""
},
{
"docid": "7ac9b7bc77ffa229d448b2234857dca8",
"text": "How do neurons in a decision circuit integrate time-varying signals, in favor of or against alternative choice options? To address this question, we used a recurrent neural circuit model to simulate an experiment in which monkeys performed a direction-discrimination task on a visual motion stimulus. In a recent study, it was found that brief pulses of motion perturbed neural activity in the lateral intraparietal area (LIP), and exerted corresponding effects on the monkey's choices and response times. Our model reproduces the behavioral observations and replicates LIP activity which, depending on whether the direction of the pulse is the same or opposite to that of a preferred motion stimulus, increases or decreases persistently over a few hundred milliseconds. Furthermore, our model accounts for the observation that the pulse exerts a weaker influence on LIP neuronal responses when the pulse is late relative to motion stimulus onset. We show that this violation of time-shift invariance (TSI) is consistent with a recurrent circuit mechanism of time integration. We further examine time integration using two consecutive pulses of the same or opposite motion directions. The induced changes in the performance are not additive, and the second of the paired pulses is less effective than its standalone impact, a prediction that is experimentally testable. Taken together, these findings lend further support for an attractor network model of time integration in perceptual decision making.",
"title": ""
}
] |
scidocsrr
|
0efd678489069e1548928ac13e37b600
|
Hidden Markov Model Based Events Detection in Soccer Video
|
[
{
"docid": "55bee435842ff69aec83c280d8ba506b",
"text": "We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.",
"title": ""
},
{
"docid": "bc4791523b11a235d0b1c9e660ea1139",
"text": "In this paper, we present a novel system and effective algorithms for soccer video segmentation. The output, about whether the ball is in play, reveals high-level structure of the content. The first step is to classify each sample frame into 3 kinds of view using a unique domain-specific feature, grass-area-ratio. Here the grass value and classification rules are learned and automatically adjusted to each new clip. Then heuristic rules are used in processing the view label sequence, and obtain play/break status of the game. The results provide good basis for detailed content analysis in next step. We also show that lowlevel features and mid-level view classes can be combined to extract more information about the game, via the example of detecting grass orientation in the field. The results are evaluated under different metrics intended for different applications; the best result in segmentation is 86.5%.",
"title": ""
}
] |
[
{
"docid": "bf9ba92f1c7aa2ae4ed32dd270552eb0",
"text": "Video-based person re-identification (re-id) is a central application in surveillance systems with significant concern in security. Matching persons across disjoint camera views in their video fragments is inherently challenging due to the large visual variations and uncontrolled frame rates. There are two steps crucial to person re-id, namely discriminative feature learning and metric learning. However, existing approaches consider the two steps independently, and they do not make full use of the temporal and spatial information in videos. In this paper, we propose a Siamese attention architecture that jointly learns spatiotemporal video representations and their similarity metrics. The network extracts local convolutional features from regions of each frame, and enhance their discriminative capability by focusing on distinct regions when measuring the similarity with another pedestrian video. The attention mechanism is embedded into spatial gated recurrent units to selectively propagate relevant features and memorize their spatial dependencies through the network. The model essentially learns which parts (where) from which frames (when) are relevant and distinctive for matching persons and attaches higher importance therein. The proposed Siamese model is end-to-end trainable to jointly learn comparable hidden representations for paired pedestrian videos and their similarity value. Extensive experiments on three benchmark datasets show the effectiveness of each component of the proposed deep network while outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd",
"text": "In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.",
"title": ""
},
{
"docid": "6198aab5c0e940ce5e85a30f27126014",
"text": "Fog computing is a new paradigm of distributed computing that extends the cloud's capability to the edge of the network. It can provide better quality of service, and is particularly suitable for IoT applications, which generally have stringent requirements on latency and reliability. In this article, we consider the flight security and safety of UAVs, which act as fog nodes in an airborne fog computing system. In particular, we propose a method to detect GPS spoofing based on the monocular camera and IMU sensor of a UAV. We also present an image localization approach to support UAV autonomous return using error reduction based on ORB feature detection and matching. The methods are demonstrated by experiments using a DJI Phantom 4 drone.",
"title": ""
},
{
"docid": "62769e2979d1a1181ffebedc18f3783a",
"text": "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. Preliminaries Substrate-independence is a common assumption in the philosophy of mind. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium; silicon-based processors inside a computer could in principle do the trick as well. Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall take it as a given here. The argument we shall present does not, however, depend on any strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true analytic (either analytically or metaphysically) just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations (including passing Turing tests etc.). We only need the weaker assumption that it would suffice (for generation of subjective experiences) if the computational processes of a human brain were structurally replicated in suitably fine-grained detail, such as on the level of individual neurons. This highly attenuated version of substrate-independence is widely accepted. At the current stage of technology, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Several authors argue that this stage may be only a few decades away (Drexler 1985; Bostrom 1998; Kurzweil 1999; Moravec 1999). Yet for present purposes we need not make any assumptions about the time-scale. The argument we shall present works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints. Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. 
Since we are still lacking a "theory of everything", we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those theoretical constraints that in our current understanding limit the information processing density that can be attained in a given lump of matter. But we can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second (Drexler 1992). Another author gives a rough performance estimate of 10^42 operations per second for a computer with a mass on the order of a large planet (Bradbury 2000). The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we already understand (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain (Moravec 1989). An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second (Bostrom 1998). Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its components. One would therefore expect a substantial increase in efficiency when using more reliable and versatile non-biological processors. If the environment is included in the simulation, this will require additional computing power. How much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible (unless radically new physics is discovered). But in order to get a realistic simulation of human experience, much less is needed — only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don't notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations indeed: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated. Microscopic phenomena could likely be filled in on an ad hoc basis. What you see when you look in an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we set up systems that are designed to harness unobserved microscopic phenomena operating according to known principles to get results that we are able to independently verify. The paradigmatic instance is computers. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. But this is no big problem, since our current computing power is negligible by posthuman standards. 
In general, the posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Thus, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director can skip back a few seconds and rerun the simulation in a way that avoids the problem. It thus seems plausible that the main computational cost consists in simulating organic brains down to the neuronal or sub-neuronal level (although as we build more and faster computers, the cost of simulating our machines might eventually come to dominate the cost of simulating nervous systems). While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33-10^36 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for the argument we are pursuing here. We noted that a rough approximation of the computational power of a single planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. Such a computer could simulate the entire mental history of humankind (call this an ancestor-simulation) in less than 10^-6 seconds. (A posthuman civilization may eventually build an astronomical number of such computers.) We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our guesstimates. • Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose. The Simulation Argument. The core of the argument that this paper presents can be expressed roughly as follows: If there were a substantial chance that our civilization will ever get to the posthuman stage and run many ancestor-simulations, then how come you are not living in such a simulation? We shall develop this idea into a rigorous argument. Let us introduce the following notation: DOOM: Humanity goes extinct before reaching the posthuman stage; SIM: You are living in a simulation; N: Average number of ancestor-simulations run by a posthuman civilization; H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage. The expected fraction of all observers with human-type experiences that live in simulations is then f_sim = ([1-P(DOOM)] × N × H) / ([1-P(DOOM)] × N × H + H).",
"title": ""
},
{
"docid": "537d47c4bb23d9b60b164d747cb54cd9",
"text": "Comprehending computer programs is one of the core software engineering activities. Software comprehension is required when a programmer maintains, reuses, migrates, reengineers, or enhances software systems. Due to this, a large amount of research has been carried out, in an attempt to guide and support software engineers in this process. Several cognitive models of program comprehension have been suggested, which attempt to explain how a software engineer goes about the process of understanding code. However, research has suggested that there is no one ‘all encompassing’ cognitive model that can explain the behavior of ‘all’ programmers, and that it is more likely that programmers, depending on the particular problem, will swap between models (Letovsky, 1986). This paper identifies the key components of program comprehension models, and attempts to evaluate currently accepted models in this framework. It also highlights the commonalities, conflicts, and gaps between models, and presents possibilities for future research, based on its findings.",
"title": ""
},
{
"docid": "406b1d13ecc9c9097079c8a24c15a332",
"text": "We propose an automated breast cancer triage CAD system using machine vision on low-cost, portable ultrasound imaging devices. We demonstrate that the triage CAD software can effectively analyze images captured by minimally-trained operators and output one of three assessments - benign, probably benign (6-month follow-up recommended) and suspicious (biopsy recommended). This system opens up the possibility of offering practical, cost-effective breast cancer diagnosis for symptomatic women in economically developing countries.",
"title": ""
},
{
"docid": "0e77fc836c5f208ff0b4cc85f5ba1ec1",
"text": "We introduce and develop a declarative framework for entity linking and, in particular, for entity resolution. As in some earlier approaches, our framework is based on a systematic use of constraints. However, the constraints we adopt are link-to-source constraints, unlike in earlier approaches where source-to-link constraints were used to dictate how to generate links. Our approach makes it possible to focus entirely on the intended properties of the outcome of entity linking, thus separating the constraints from any procedure of how to achieve that outcome. The core language consists of link-to-source constraints that specify the desired properties of a link relation in terms of source relations and built-in predicates such as similarity measures. A key feature of the link-to-source constraints is that they employ disjunction, which enables the declarative listing of all the reasons two entities should be linked. We also consider extensions of the core language that capture collective entity resolution by allowing interdependencies among the link relations.\n We identify a class of “good” solutions for entity-linking specifications, which we call maximum-value solutions and which capture the strength of a link by counting the reasons that justify it. We study natural algorithmic problems associated with these solutions, including the problem of enumerating the “good” solutions and the problem of finding the certain links, which are the links that appear in every “good” solution. We show that these problems are tractable for the core language but may become intractable once we allow interdependencies among the link relations. We also make some surprising connections between our declarative framework, which is deterministic, and probabilistic approaches such as ones based on Markov Logic Networks.",
"title": ""
},
{
"docid": "257a2a2fb23674035068cdb8716f041d",
"text": "One source of EEG data quality deterioration is noise. The others are artifacts, such as the eye blinking, oculogyration, heart beat, or muscle activity. All these factors mentioned above contribute to the disappointing and poor quality of EEG signals. There are some solutions which allow increase of this signals quality. One of them is Common Spatial Patterns. Some scientific papers report that CSP can only be effectively used if there are many electrodes available. The aim of this paper is to use CSP method applied in the process of creating a brain computer interface in order to find out if there are any benefits of using this method in 3 channels BCI system.",
"title": ""
},
{
"docid": "01a70ee73571e848575ed992c1a3a578",
"text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.",
"title": ""
},
{
"docid": "052eb9b25a2efa0c79b65c32c48c7d03",
"text": "The advent of high-resolution digital cameras and sophisticated multi-view stereo algorithms offers the promise of unprecedented geometric fidelity in image-based modeling tasks, but it also puts unprecedented demands on camera calibration to fulfill these promises. This paper presents a novel approach to camera calibration where top-down information from rough camera parameter estimates and the output of a multi-view-stereo system on scaled-down input images is used to effectively guide the search for additional image correspondences and significantly improve camera calibration parameters using a standard bundle adjustment algorithm (Lourakis and Argyros 2008). The proposed method has been tested on six real datasets including objects without salient features for which image correspondences cannot be found in a purely bottom-up fashion, and objects with high curvature and thin structures that are lost in visual hull construction even with small errors in camera parameters. Three different methods have been used to qualitatively assess the improvements of the camera parameters. The implementation of the proposed algorithm is publicly available at Furukawa and Ponce (2008b).",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "1351b9d778da2821362a1b4caa35e7e4",
"text": "Though designing a data warehouse requires techniques completely different from those adopted for operational systems, no significant effort has been made so far to develop a complete and consistent design methodology for data warehouses. In this paper we outline a general methodological framework for data warehouse design, based on our Dimensional Fact Model (DFM). After analyzing the existing information system and collecting the user requirements, conceptual design is carried out semi-automatically starting from the operational database scheme. A workload is then characterized in terms of data volumes and expected queries, to be used as the input of the logical and physical design phases whose output is the final scheme for the data warehouse.",
"title": ""
},
{
"docid": "354089f03ce4b80deb11f0d8c60efc44",
"text": "Digitization of music has led to easier access to different forms music across the globe. Increasing work pressure denies the necessary time to listen and evaluate music for a creation of a personal music library. One solution might be developing a music search engine or recommendation system based on different moods. In fact mood label is considered as an emerging metadata in the digital music libraries and online music repositories. In this paper, we proposed mood taxonomy for Hindi songs and prepared a mood annotated lyrics corpus based on this taxonomy. We also annotated lyrics with positive and negative polarity. Instead of adopting a traditional approach to music mood classification based solely on audio features, the present study describes a mood classification system from lyrics as well by combining a wide range of semantic and stylistic features extracted from textual lyrics. We also developed a supervised system to identify the sentiment of the Hindi song lyrics based on the above features. We achieved the maximum average F-measure of 68.30% and 38.49% for classifying the polarities and moods of the Hindi lyrics, respectively.",
"title": ""
},
{
"docid": "80db4fa970d0999a43d31d58e23444bb",
"text": "There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.",
"title": ""
},
{
"docid": "f6f8e32c2658c34a978baba8cdf99f89",
"text": "In recent years, the interest in quadcopters as a robotics platform for autonomous photography has increased. This is due to their small size and mobility, which allow them to reach places that are difficult or even impossible for humans. This thesis focuses on the design of an autonomous quadcopter videographer, i.e. a quadcopter capable of capturing good footage of a specific subject. In order to obtain this footage, the system needs to choose appropriate vantage points and control the quadcopter. Skilled human videographers can easily spot good filming locations where the subject and its actions can be seen clearly in the resulting video footage, but translating this knowledge to a robot can be complex. We present an autonomous system implemented on a commercially available quadcopter that achieves this using only the monocular information and an accelerometer. Our system has two vantage point selection strategies: 1) a reactive approach, which moves the robot to a fixed location with respect to the human and 2) the combination of the reactive approach and a POMDP planner that considers the target’s movement intentions. We compare the behavior of these two approaches under different target movement scenarios. The results show that the POMDP planner obtains more stable footage with less quadcopter motion.",
"title": ""
},
{
"docid": "f7e5c139bc044683bd28840434212cf7",
"text": "Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.",
"title": ""
},
{
"docid": "3840b8c709a8b2780b3d4a1b56bd986b",
"text": "A new scheme to resolve the intra-cell pilot collision for machine-to-machine (M2M) communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot. The simulation results coincide well with the analysis. It is also shown that, compared with the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.",
"title": ""
}
] |
scidocsrr
|
394cac128eec7354e8015b10c6fb7b62
|
Concomitant loss of dynorphin, NARP, and orexin in narcolepsy.
|
[
{
"docid": "9cf26cd287cd6bddb11dc5dc46c4ba7a",
"text": "Narcolepsy is a disabling sleep disorder affecting humans and animals. It is characterized by daytime sleepiness, cataplexy, and striking transitions from wakefulness into rapid eye movement (REM) sleep. In this study, we used positional cloning to identify an autosomal recessive mutation responsible for this sleep disorder in a well-established canine model. We have determined that canine narcolepsy is caused by disruption of the hypocretin (orexin) receptor 2 gene (Hcrtr2). This result identifies hypocretins as major sleep-modulating neurotransmitters and opens novel potential therapeutic approaches for narcoleptic patients.",
"title": ""
}
] |
[
{
"docid": "a3e24b6438257176aabb4726c4eb6260",
"text": "We present a system for creating and viewing interactive exploded views of complex 3D models. In our approach, a 3D input model is organized into an explosion graph that encodes how parts explode with respect to each other. We present an automatic method for computing explosion graphs that takes into account part hierarchies in the input models and handles common classes of interlocking parts. Our system also includes an interface that allows users to interactively explore our exploded views using both direct controls and higher-level interaction modes.",
"title": ""
},
{
"docid": "e0e33d26cc65569e80213069cb5ad857",
"text": "Capsule Networks have great potential to tackle problems in structural biology because of their aention to hierarchical relationships. is paper describes the implementation and application of a Capsule Network architecture to the classication of RAS protein family structures on GPU-based computational resources. e proposed Capsule Network trained on 2D and 3D structural encodings can successfully classify HRAS and KRAS structures. e Capsule Network can also classify a protein-based dataset derived from a PSI-BLAST search on sequences of KRAS and HRAS mutations. Our results show an accuracy improvement compared to traditional convolutional networks, while improving interpretability through visualization of activation vectors.",
"title": ""
},
{
"docid": "6a32d9e43d7f4558fa6dbbc596ce4496",
"text": "Automatically mapping natural language into programming language semantics has always been a major and interesting challenge. In this paper, we approach such problem by carrying out mapping at syntactic level and then applying machine learning algorithms to derive an automatic translator of natural language questions into their associated SQL queries. For this purpose, we design a dataset of relational pairs containing syntactic trees of questions and queries and we encode them in Support Vector Machines by means of kernel functions. Pair classification experiments suggest that our approach is promising in deriving shared semantics between the languages above.",
"title": ""
},
{
"docid": "80faeaceefd3851b51feef2e50694ef7",
"text": "The sentiment detection of texts has been witnessed a booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. Till to now, there are mainly four different problems predominating in this research community, namely, subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. In fact, there are inherent relations between them. Subjectivity classification can prevent the sentiment classifier from considering irrelevant or even potentially misleading text. Document sentiment classification and opinion extraction have often involved word sentiment classification techniques. This survey discusses related issues and main approaches to these problems. 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "0ca3676df82502041647e3c5612b0ff2",
"text": "OBJECTIVE\nTo evaluate the effects of 6 months of pool exercise combined with a 6 session education program for patients with fibromyalgia syndrome (FM).\n\n\nMETHODS\nThe study population comprised 58 patients, randomized to a treatment or a control group. Patients were instructed to match the pool exercises to their threshold of pain and fatigue. The education focused on strategies for coping with symptoms and encouragement of physical activity. The primary outcome measurements were the total score of the Fibromyalgia Impact Questionnaire (FIQ) and the 6 min walk test, recorded at study start and after 6 mo. Several other tests and instruments assessing functional limitations, severity of symptoms, disabilities, and quality of life were also applied.\n\n\nRESULTS\nSignificant differences between the treatment group and the control group were found for the FIQ total score (p = 0.017) and the 6 min walk test (p < 0.0001). Significant differences were also found for physical function, grip strength, pain severity, social functioning, psychological distress, and quality of life.\n\n\nCONCLUSION\nThe results suggest that a 6 month program of exercises in a temperate pool combined with education will improve the consequences of FM.",
"title": ""
},
{
"docid": "3e3ed710b763885fa135bb09bf26d95f",
"text": "T he International Classification of Diseases, Tenth Revision , defines sudden cardiac death (SCD) as death due to any cardiac disease that occurs out of hospital, in an emergency department, or in an individual reported dead on arrival at a hospital. In addition, death must have occurred within 1 hour after the onset of symptoms. SCD may be due to ventricular tachycardia (VT)/ventricular fibrillation (VF), asystole, or nonarrhythmic causes. 1 For the purpose of this scientific statement on noninvasive risk stratification for primary prevention of SCD, SCD will specifically refer to death due to reversible ventricular tachyarrhythmias, because this is the focus of the risk stratification techniques to be discussed. Among patients with SCD, an overwhelming majority have some form of structural heart disease; this statement will be limited to risk stratification techniques for ischemic, dilated, and hypertrophic cardiomyopathies. Although other types of structural heart disease and inherited ion channel abnormalities are also associated with a risk for SCD, the risk stratification strategies and data in these entities are diverse and are beyond the scope of this document. The annual incidence of sudden arrhythmic deaths has been estimated between 184 000 and 462 000. The American Heart Association has promoted the concept of the \" chain of survival, \" which includes early access to medical care, early cardiopulmonary resuscitation, early defibrillation, and early advanced care. Many of these interventions have improved survival. Despite all of these advances, however, overall mortality from a cardiac arrest remains high, which under-The American Heart Association, the American College of Cardiology Foundation, and the Heart Rhythm Society make every effort to avoid any actual or potential conflicts of interest that may arise as a result of an outside relationship or a personal, professional, or business interest of a member of the writing panel. Specifically, all members of the writing group are required to complete and submit a Disclosure Questionnaire showing all such relationships that might be perceived as real or potential conflicts of interest.or distribution of this document are not permitted without the express permission of the American Heart Association. Instructions for obtaining permission are located at http://www.americanheart.org/presenter.jhtml? identifierϭ4431. A link to the \" Permission Request Form \" appears on the right side of the page.",
"title": ""
},
{
"docid": "3a32fe66af2e99f3601aae71dc9b64c2",
"text": "Low-power wide-area networking (LPWAN) technologies are capable of supporting a large number of Internet of Things (IoT) use cases. While several LPWAN technologies exist, Long Range (LoRa) and its network architecture LoRaWAN, is currently the most adopted technology. LoRa provides a range of physical layer communication settings, such as bandwidth, spreading factor, coding rate, and transmission frequency. These settings impact throughput, reliability, and communication range. As IoT use cases result in varying communication patterns, it is essential to analyze how LoRa's different communication settings impact on real IoT use cases. In this paper, we analyze the impact of LoRa's communication settings on four IoT use cases, e.g. smart metering, smart parking, smart street lighting, and vehicle fleet tracking. Our results demonstrate that the setting corresponding to the fastest data rate achieves up to 380% higher packet delivery ratio and uses 0.004 times the energy compared to other evaluated settings, while being suitable to support the IoT use cases presented here. However, the setting covers a smaller communication area compared to the slow data rate settings. Moreover, we modified the Aloha-based channel access mechanism used by LoRaWAN and our results demonstrate that the modified channel access positively impacts the performance of the different communication settings.",
"title": ""
},
{
"docid": "6a85b9ecb1aa3bbac2d7e05a79e865e4",
"text": "Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image-and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results.",
"title": ""
},
{
"docid": "7cfffa8e9d1e1fb39082c5aba75034b3",
"text": "BACKGROUND\nAttempted separation of craniopagus twins has continued to be associated with devastating results since the first partially successful separation with one surviving twin in 1952. To understand the factors that contribute to successful separation in the modern era of neuroimaging and modern surgical techniques, the authors reviewed and analyzed cases reported since 1995.\n\n\nMETHODS\nAll reported cases of craniopagus twin separation attempts from 1995 to 2015 were identified using PubMed (n = 19). In addition, the Internet was searched for additional unreported separation attempts (n = 5). The peer-reviewed cases were used to build a categorical database containing information on each twin pair, including sex; date of birth; date of surgery; multiple- versus single-stage surgery; angular versus vertical conjoining; nature of shared cerebral venous system; and the presence of other comorbidities identified as cardiovascular, genitourinary, and craniofacial. The data were analyzed to find factors associated with successful separation (survival of both twins at postoperative day 30).\n\n\nRESULTS\nVertical craniopagus is associated with successful separation (p < 0.001). No statistical significance was attributed to the nature of the shared cerebral venous drainage or the other variables examined. Multiple-stage operations and surgery before 12 months of age are associated with a trend toward statistical significance for successful separation.\n\n\nCONCLUSIONS\nThe authors' analysis indicates that vertical craniopagus twins have the highest likelihood of successful separation. Additional factors possibly associated with successful separation include the nature of the shared sinus system, surgery at a young age, and the use of staged separations.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "5ab17c802a11a7b7fb9d3190a7dbfa7b",
"text": "A CMOS active diode rectifier for wireless power transmission with proposed voltage-time-conversion (VTC) delay-locked loop (DLL) control suppresses reverse current by realizing zero-voltage switching (ZVS), regardless of AC input and process variations. The proposed circuit is implemented in a standard 0.18μm CMOS process using I/O MOSFETs, which corresponds to 0.35μm technology. The maximum power conversion efficiency of 78% is obtained at 231Ω load resistance.",
"title": ""
},
{
"docid": "acbac38a7de49bf1b6ad15abb007b601",
"text": "Our everyday environments are gradually becoming intelligent, facilitated both by technological development and user activities. Although large-scale intelligent environments are still rare in actual everyday use, they have been studied for quite a long time, and several user studies have been carried out. In this paper, we present a user-centric view of intelligent environments based on published research results and our own experiences from user studies with concepts and prototypes. We analyze user acceptance and users’ expectations that affect users’ willingness to start using intelligent environments and to continue using them. We discuss user experience of interacting with intelligent environments where physical and virtual elements are intertwined. Finally, we touch on the role of users in shaping their own intelligent environments instead of just using ready-made environments. People are not merely “using” the intelligent environments but they live in them, and they experience the environments via embedded services and new interaction tools as well as the physical and social environment. Intelligent environments should provide emotional as well as instrumental value to the people who live in them, and the environments should be trustworthy and controllable both by regular users and occasional visitors. Understanding user expectations and user experience in intelligent environments, OPEN ACCESS",
"title": ""
},
{
"docid": "2be66aab202c50a35c1e98fe16442ab7",
"text": "Deep neural networks have been playing an essential role in many computer vision tasks including Visual Question Answering (VQA). Until recently, the study of their accuracy has been the main focus of research and now there is a huge trend toward assessing the robustness of these models against adversarial attacks by evaluating the accuracy of these models under increasing levels of noisiness. In VQA, the attack can target the image and/or the proposed main question and yet there is a lack of proper analysis of this aspect of VQA. In this work, we propose a new framework that uses semantically relevant questions, dubbed basic questions, acting as noise to evaluate the robustness of VQA models. We hypothesize that as the similarity of a basic question to the main question decreases, the level of noise increases. So, to generate a reasonable noise level for a given main question, we rank a pool of basic questions based on their similarity with this main question. We cast this ranking problem as a LASSO optimization problem. We also propose a novel robustness measure Rscore and two large-scale question datasets, General Basic Question Dataset and Yes/No Basic Question Dataset in order to standardize robustness analysis of VQA models. We analyze the robustness of several state-of-the-art VQA models and show that attention-based VQA models are more robust than other methods in general. The main goal of this framework is to serve as a benchmark to help the community in building more accurate and robust VQA models.",
"title": ""
},
{
"docid": "5852690a7b314ce4be32f22e56b15370",
"text": "The rapid increase in data volumes and complexity of applied analytical tasks poses a big challenge for visualization solutions. It is important to keep the experience highly interactive, so that users stay engaged and can perform insightful data exploration. Query processing usually dominates the cost of visualization generation. Therefore, in order to achieve acceptable response times, one needs to utilize backend capabilities to the fullest and apply techniques, such as caching or prefetching. In this paper we discuss key data processing components in Tableau: the query processor, query caches, Tableau Data Engine [1, 2] and Data Server. Furthermore, we cover recent performance improvements related to the number and quality of remote queries, broader reuse of cached data, and application of inter and intra query parallelism.",
"title": ""
},
{
"docid": "38539b78662fa2088f2b7505b53b2232",
"text": "Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few have succeeded; most papers that propose defenses are quickly shown to be incorrect. We believe a large contributing factor is the difficulty of performing security evaluations. In this paper, we discuss the methodological foundations, review commonly accepted best practices, and suggest new methods for evaluating defenses to adversarial examples. We hope that both researchers developing defenses as well as readers and reviewers who wish to understand the completeness of an evaluation consider our advice in order to avoid common pitfalls.",
"title": ""
},
{
"docid": "094906bcd076ae3207ba04755851c73a",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "b0de8371b0f5bfcecd8370bb0fdac174",
"text": "We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSLP; given a division-free straight-line program producing an integer N, decide whether N > 0. We show that PosSLP lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean traveling salesman problem lies in the counting hierarchy - the previous best upper bound for this important problem (in terms of classical complexity classes) being PSPACE",
"title": ""
},
{
"docid": "69f3a41f7250377b2d99aa61249db37e",
"text": "In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts. It is more suitable to describe the domain knowledge than domain ontology for solving the uncertainty reasoning problems. First, the domain ontology with various events of news is predefined by domain experts. The document preprocessing mechanism will generate the meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. Then, the meaningful terms will be classified according to the events of the news by the term classifier. The fuzzy inference mechanism will generate the membership degrees for each fuzzy concept of the fuzzy ontology. Every fuzzy concept has a set of membership degrees associated with various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is also developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization.",
"title": ""
},
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
},
{
"docid": "72c79181572c836cb92aac8fe7a14c5d",
"text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).",
"title": ""
},
{
"docid": "1bdb77e437b2ef59540ed8bde0ae7dc7",
"text": "One of the basic tasks that a modern CAD system should be able to accomplish, is the generation of a 'fair' or 'visually pleasing' curve from given data points. Even when a very efficient curve scheme like B-splines is used, it is quite possible that for some data sets the resulting curve is not fair enough. In this case, some 'processing' of the geometry of the curve is required in order to obtain an acceptable curve. The current trends regarding geometry processing for curves are the following:",
"title": ""
}
] |
scidocsrr
|
cc7d41dda09b0327716173b0a03c3719
|
Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data
|
[
{
"docid": "3e06d3b5ca50bf4fcd9d354a149dd40c",
"text": "In this paper, the classification via sprepresentation and multitask learning is presented for target recognition in SAR image. To capture the characteristics of SAR image, a multidimensional generalization of the analytic signal, namely the monogenic signal, is employed. The original signal can be then orthogonally decomposed into three components: 1) local amplitude; 2) local phase; and 3) local orientation. Since the components represent the different kinds of information, it is beneficial by jointly considering them in a unifying framework. However, these components are infeasible to be directly utilized due to the high dimension and redundancy. To solve the problem, an intuitive idea is to define an augmented feature vector by concatenating the components. This strategy usually produces some information loss. To cover the shortage, this paper considers three components into different learning tasks, in which some common information can be shared. Specifically, the component-specific feature descriptor for each monogenic component is produced first. Inspired by the recent success of multitask learning, the resulting features are then fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks. The inference is reached in terms of the total reconstruction error accumulated from all tasks. The novelty of this paper includes 1) the development of three component-specific feature descriptors; 2) the introduction of multitask learning into sparse representation model; 3) the numerical implementation of proposed method; and 4) extensive comparative experimental studies on MSTAR SAR dataset, including target recognition under standard operating conditions, as well as extended operating conditions, and the capability of outliers rejection.",
"title": ""
},
{
"docid": "4bec71105c8dca3d0b48e99cdd4e809a",
"text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.",
"title": ""
}
] |
[
{
"docid": "06abf2a7c6d0c25cfe54422268300e58",
"text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.",
"title": ""
},
{
"docid": "bcb9886f4ba3651793581e021030cde2",
"text": "This study looked at the individual difference correlates of self-rated character strengths and virtues. In all, 280 adults completed a short 24-item measure of strengths, a short personality measure of the Big Five traits and a fluid intelligence test. The Cronbach alphas for the six higher order virtues were satisfactory but factor analysis did not confirm the a priori classification yielding five interpretable factors. These factors correlated significantly with personality and intelligence. Intelligence and neuroticism were correlated negatively with all the virtues, while extraversion and conscientiousness were positively correlated with all virtues. Structural equation modeling showed personality and religiousness moderated the effect of intelligence on the virtues. Extraversion and openness were the largest correlates of the virtues. The use of shortened measured in research is discussed.",
"title": ""
},
{
"docid": "102ed07783d46a8ebadcad4b30ccb3c8",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "e8e796774aa6e16ff022ab155237f402",
"text": "Mobile payment is the killer application in mobile commerce. We classify the payment methods according to several standards, analyze and point out the merits and drawbacks of each method. To enable future applications and technologies handle mobile payment, we provide a general layered framework and a new process for mobile payment. The framework is composed of load-bearing layer, network interface and core application platform layer, business layer, and decision-making layer. And it can be extended and improved by the developers. Then a pre-pay and account-based payment process is described. Our method has the advantages of low cost and technical requirement, high scalability and security.",
"title": ""
},
{
"docid": "dccb4e0d84d0863444a3e180a12c5778",
"text": "This paper describes a systems for emotion recognition and its application on the dataset from the AV+EC 2016 Emotion Recognition Challenge. The realized system was produced and submitted to the AV+EC 2016 evaluation, making use of all three modalities (audio, video, and physiological data). Our work primarily focused on features derived from audio. The original audio features were complement with bottleneck features and also text-based emotion recognition which is based on transcribing audio by an automatic speech recognition system and applying resources such as word embedding models and sentiment lexicons. Our multimodal fusion reached CCC=0.855 on dev set for arousal and 0.713 for valence. CCC on test set is 0.719 and 0.596 for arousal and valence respectively.",
"title": ""
},
{
"docid": "7182dfe75bc09df526da51cd5c8c8d20",
"text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.",
"title": ""
},
{
"docid": "367c3fd4401e30d4982509733d908d38",
"text": "Markov logic networks (MLNs) are a statistical relational model that consists of weighted firstorder clauses and generalizes first-order logic and Markov networks. The current state-of-the-art algorithm for learning MLN structure follows a top-down paradigm where many potential candidate structures are systematically generated without considering the data and then evaluated using a statistical measure of their fit to the data. Even though this existing algorithm outperforms an impressive array of benchmarks, its greedy search is susceptible to local maxima or plateaus. We present a novel algorithm for learning MLN structure that follows a more bottom-up approach to address this problem. Our algorithm uses a \"propositional\" Markov network learning method to construct \"template\" networks that guide the construction of candidate clauses. Our algorithm significantly improves accuracy and learning time over the existing topdown approach in three real-world domains.",
"title": ""
},
{
"docid": "692207fdd7e27a04924000648f8b1bbf",
"text": "Many animals, on air, water, or land, navigate in three-dimensional (3D) environments, yet it remains unclear how brain circuits encode the animal's 3D position. We recorded single neurons in freely flying bats, using a wireless neural-telemetry system, and studied how hippocampal place cells encode 3D volumetric space during flight. Individual place cells were active in confined 3D volumes, and in >90% of the neurons, all three axes were encoded with similar resolution. The 3D place fields from different neurons spanned different locations and collectively represented uniformly the available space in the room. Theta rhythmicity was absent in the firing patterns of 3D place cells. These results suggest that the bat hippocampus represents 3D volumetric space by a uniform and nearly isotropic rate code.",
"title": ""
},
{
"docid": "019ee0840b91f97a3acc3411edadcade",
"text": "Despite the many solutions proposed by industry and the research community to address phishing attacks, this problem continues to cause enormous damage. Because of our inability to deter phishing attacks, the research community needs to develop new approaches to anti-phishing solutions. Most of today's anti-phishing technologies focus on automatically detecting and preventing phishing attacks. While automation makes anti-phishing tools user-friendly, automation also makes them suffer from false positives, false negatives, and various practical hurdles. As a result, attackers often find simple ways to escape automatic detection.\n This paper presents iTrustPage - an anti-phishing tool that does not rely completely on automation to detect phishing. Instead, iTrustPage relies on user input and external repositories of information to prevent users from filling out phishing Web forms. With iTrustPage, users help to decide whether or not a Web page is legitimate. Because iTrustPage is user-assisted, iTrustPage avoids the false positives and the false negatives associated with automatic phishing detection. We implemented iTrustPage as a downloadable extension to FireFox. After being featured on the Mozilla website for FireFox extensions, iTrustPage was downloaded by more than 5,000 users in a two week period. We present an analysis of our tool's effectiveness and ease of use based on our examination of usage logs collected from the 2,050 users who used iTrustPage for more than two weeks. Based on these logs, we find that iTrustPage disrupts users on fewer than 2% of the pages they visit, and the number of disruptions decreases over time.",
"title": ""
},
{
"docid": "a6e84af8b1ba1d120e69c10f76eb7e2a",
"text": "Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which autoencoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used a basic tool for learning, but with the intractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method.",
"title": ""
},
{
"docid": "5271b52132e7c02991168934c172eb79",
"text": "Food recognition is an emerging topic in computer vision. The problem is being addressed especially in health-oriented systems where it is used as a support for food diary applications. The goal is to improve current food diaries, where the users have to manually insert their daily food intake, with an automatic recognition of the food type, quantity and consequent calories intake estimation. In addition to the classical recognition challenges, the food recognition problem is characterized by the absence of a rigid structure of the food and by large intra-class variations. To tackle such challenges, a food recognition system based on a committee classification is proposed. The aim is to provide a system capable of automatically choosing the optimal features for food recognition out of the existing plethora of available ones (e.g., color, texture, etc.). Following this idea, each committee member, i.e., an Extreme Learning Machine, is trained to specialize on a single feature type. Then, a Structural Support Vector Machine is exploited to produce the final ranking of possible matches by filtering out the irrelevant features and thus merging only the relevant ones. Experimental results show that the proposed system outperforms state-of-the-art works on four publicly available benchmark datasets. © 2016 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "661b64324c3df08325e0a7798501bf8a",
"text": "We studied two species of Ceratogymna hornbills, the black-casqued hornbill, C. atrata, and the white-thighed hornbill, C. cylindricus, in the tropical forests of Cameroon, to understand their movement patterns and evaluate their effectiveness as seed dispersers. To estimate hornbill contribution to a particular tree species' seed shadow we combined data from movements, determined by radio-tracking, with data from seed passage trials. For 13 individuals tracked over 12 months, home range varied between 925 and 4,472 ha, a much larger area than reported for other African avian frugivores. Seed passage times ranged from 51 to 765 min, with C. atrata showing longer passage times than C. cylindricus, and larger seeds having longer gut retention times than smaller seeds. Combining these data, we estimated that seed shadows were extensive for the eight tree species examined, with approximately 80% of seeds moved more than 500 m from the parent plant. Maximum estimated dispersal distances for larger seeds were 6,919 and 3,558 m for C. atrata and C. cylindricus, respectively. The extent of hornbill seed shadows suggests that their influence in determining forest structure will likely increase as other larger mammalian dispersers are exterminated.",
"title": ""
},
{
"docid": "63b38f277675f52219c7d4c2d54f0076",
"text": "With the trend going on in ubiquitous computing, everything is going to be connected to the Internet and its data will be used for various progressive purposes, creating not only information from it, but also, knowledge and even wisdom. Internet of Things (IoT) becoming so pervasive that it is becoming important to integrate it with cloud computing because of the amount of data IoT's could generate and their requirement to have the privilege of virtual resources utilization and storage capacity, but also, to make it possible to create more usefulness from the data generated by IoT's and develop smart applications for the users. This IoT and cloud computing integration is referred to as Cloud of Things in this paper. IoT's and cloud computing integration is not that simple and bears some key issues. Those key issues along with their respective potential solutions have been highlighted in this paper.",
"title": ""
},
{
"docid": "216a65890d4256f56069e75879156550",
"text": "We address how listeners perceive temporal regularity in music performances, which are rich in temporal irregularities. A computational model is described in which a small system of internal self-sustained oscillations, operating at different periods with specific phase and period relations, entrains to the rhythms of music performances. Based on temporal expectancies embodied by the oscillations, the model predicts the categorization of temporally changing event intervals into discrete metrical categories, as well as the perceptual salience of deviations from these categories. The model’s predictions are tested in two experiments using piano performances of the same music with different phrase structure interpretations (Experiment 1) or different melodic interpretations (Experiment 2). The model successfully tracked temporal regularity amidst the temporal fluctuations found in the performances. The model’s sensitivity to performed deviations from its temporal expectations compared favorably with the performers’ structural (phrasal and melodic) intentions. Furthermore, the model tracked normal performances (with increased temporal variability) better than performances in which temporal fluctuations associated with individual voices were removed (with decreased variability). The small, systematic temporal irregularities characteristic of human performances (chord asynchronies) improved tracking, but randomly generated temporal irregularities did not. These findings suggest that perception of temporal regularity in complex musical sequences is based on temporal expectancies that adapt in response to temporally fluctuating input. © 2002 Cognitive Science Society, Inc. All rights reserved.",
"title": ""
},
{
"docid": "3d319572361f55dd4b91881dac2c9ace",
"text": "In this paper, a modular interleaved boost converter is first proposed by integrating a forward energy-delivering circuit with a voltage-doubler to achieve high step-up ratio and high efficiency for dc-microgrid applications. Then, steady-state analyses are made to show the merits of the proposed converter module. For closed-loop control design, the corresponding small-signal model is also derived. It is seen that, for higher power applications, more modules can be paralleled to increase the power rating and the dynamic performance. As an illustration, closed-loop control of a 450-W rating converter consisting of two paralleled modules with 24-V input and 200-V output is implemented for demonstration. Experimental results show that the modular high step-up boost converter can achieve an efficiency of 95.8% approximately.",
"title": ""
},
{
"docid": "46ecd1781e1ab5866fde77b3a24be06a",
"text": "Viral products and ideas are intuitively understood to grow through a person-to-person diffusion process analogous to the spread of an infectious disease; however, until recently it has been prohibitively difficult to directly observe purportedly viral events, and thus to rigorously quantify or characterize their structural properties. Here we propose a formal measure of what we label “structural virality” that interpolates between two extremes: content that gains its popularity through a single, large broadcast, and that which grows through multiple generations with any one individual directly responsible for only a fraction of the total adoption. We use this notion of structural virality to analyze a unique dataset of a billion diffusion events on Twitter, including the propagation of news stories, videos, images, and petitions. We find that the very largest observed events nearly always exhibit high structural virality, providing some of the first direct evidence that many of the most popular products and ideas grow through person-to-person diffusion. However, medium-sized events—having thousands of adopters—exhibit surprising structural diversity, and regularly grow via both broadcast and viral mechanisms. We find that these empirical results are largely consistent with a simple contagion model characterized by a low infection rate spreading on a scale-free network, reminiscent of previous work on the long-term persistence of computer viruses.",
"title": ""
},
{
"docid": "c3d25395aff2ec6039b21bd2415bcf1f",
"text": "A growing trend for information technology is to not just react to changes, but anticipate them as much as possible. This paradigm made modern solutions, such as recommendation systems, a ubiquitous presence in today’s digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with objectives and constraints of the typical applications and scenarios. Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.",
"title": ""
},
{
"docid": "59da726302c06abef243daee87cdeaa7",
"text": "The present research aims at gaining a better insight on the psychological barriers to the introduction of social robots in society at large. Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance varying from no resemblance to humans (mechanical robots), to some body shape resemblance (biped humanoids) to a perfect copy of human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight in the processes B Maria Paola Paladino mariapaola.paladino@unitn.it Francesco Ferrari francesco.ferrari-1@unitn.it Jolanda Jetten j.jetten@psy.uq.edu.au 1 Department of Psychology and Cognitive Science, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy 2 School of Psychology, The University of Queensland, St Lucia, QLD 4072, Australia underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.",
"title": ""
},
{
"docid": "0ae2a7701d4e75e7fa6891a8ca554273",
"text": "Multi-instance learning studies problems in which labels are assigned to bags that contain multiple instances. In these settings, the relations between instances and labels are usually ambiguous. In contrast, multi-task learning focuses on the output space in which an input sample is associated with multiple labels. In real world, a sample may be associated with multiple labels that are derived from observing multiple aspects of the problem. Thus many real world applications are naturally formulated as multi-instance multi-task (MIMT) problems. A common approach to MIMT is to solve it task-by-task independently under the multi-instance learning framework. On the other hand, convolutional neural networks (CNN) have demonstrated promising performance in single-instance single-label image classification tasks. However, how CNN deals with multi-instance multi-label tasks still remains an open problem. This is mainly due to the complex multiple-to-multiple relations between the input and output space. In this work, we propose a deep leaning model, known as multi-instance multi-task convolutional neural networks (MIMT-CNN), where a number of images representing a multi-task problem is taken as the inputs. Then a shared sub-CNN is connected with each input image to form instance representations. Those sub-CNN outputs are subsequently aggregated as inputs to additional convolutional layers and full connection layers to produce the ultimate multi-label predictions. This CNN model, through transfer learning from other domains, enables transfer of prior knowledge at image level learned from large single-label single-task data sets. The bag level representations in this model are hierarchically abstracted by multiple layers from instance level representations. Experimental results on mouse brain gene expression pattern annotation data show that the proposed MIMT-CNN model achieves superior performance.",
"title": ""
}
] |
scidocsrr
|
e8232fdbc76b706c82ea3ca9160806b0
|
A Semantic Relevance Based Neural Network for Text Summarization and Text Simplification
|
[
{
"docid": "8ad1213f0b85f57741dc80e57d83a24d",
"text": "Recently, many neural network models have been applied to Chinese word segmentation. However, such models focus more on collecting local information while long distance dependencies are not well learned. To integrate local features with long distance dependencies, we propose a dependency-based gated recursive neural network. Local features are first collected by bi-directional long short term memory network, then combined and refined to long distance dependencies via gated recursive neural network. Experimental results show that our model is a competitive model for Chinese word segmentation.",
"title": ""
},
{
"docid": "f75a1e5c9268a3a64daa94bb9c7f522d",
"text": "Many natural language generation tasks, such as abstractive summarization and text simplification, are paraphrase-orientated. In these tasks, copying and rewriting are two main writing modes. Most previous sequence-to-sequence (Seq2Seq) models use a single decoder and neglect this fact. In this paper, we develop a novel Seq2Seq model to fuse a copying decoder and a restricted generative decoder. The copying decoder finds the position to be copied based on a typical attention model. The generative decoder produces words limited in the source-specific vocabulary. To combine the two decoders and determine the final output, we develop a predictor to predict the mode of copying or rewriting. This predictor can be guided by the actual writing mode in the training data. We conduct extensive experiments on two different paraphrase datasets. The result shows that our model outperforms the stateof-the-art approaches in terms of both informativeness and language quality.",
"title": ""
},
{
"docid": "c8768e560af11068890cc097f1255474",
"text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"title": ""
},
{
"docid": "a0d34b1c003b7e88c2871deaaba761ed",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
}
] |
[
{
"docid": "8ba95e211a9d9637c049424dd1898a2c",
"text": "This paper deals with the 1st generation prototype of one-stage boost-half bridge (B-HB) series load resonant (SLR) soft-switching high-frequency (HF) inverter with a lossless snubbing capacitor for a variety of induction heating (IH) appliances. The B-HB SLR HF inverter treated here is based upon a simple dual SLR frequency selection strategy changed automatically in accordance with various metal materials of IH loads. In the first place, the triple SLR frequency (three times of switching frequency) operated B-HB inverter is demonstrated for IH of non-magnetic and low resistivity metallic pans/utensils fabricated by aluminum, copper and multi-layer of aluminum and stainless steel. In the second place, the fundamental resonant frequency (switching frequency) operated B-HB SLR HF inverter for IH is also demonstrated of magnetic and high resistivity metallic pans/utensils fabricated by iron, iron cast and stainless steel. Finally, the principle of operation control, implemental and inherent unique features of the B-HB SLR HF inverter employing automatically dual resonant frequency selection scheme for a variety of IH metallic pans/utensils is described from an experimental point of view, along with its operating performance. This 1st generation HF SLR inverter type built-in IH cooktop with two ranges/three ranges has been put into practice in home energy utilizations in all electricity residential systems.",
"title": ""
},
{
"docid": "4b8ee1a2e6d80a0674e2ff8f940d16f9",
"text": "Classification and knowledge extraction from complex spatiotemporal brain data such as EEG or fMRI is a complex challenge. A novel architecture named the NeuCube has been established in prior literature to address this. A number of key points in the implementation of this framework, including modular design, extensibility, scalability, the source of the biologically inspired spatial structure, encoding, classification, and visualisation tools must be considered. A Python version of this framework that conforms to these guidelines has been implemented.",
"title": ""
},
{
"docid": "c670bc911c468dce6a5c4ac83bf402b0",
"text": "There are some mobile-robot applications that require the complete coverage of an unstructured environment. Examples are humanitarian de-mining and floor-cleaning tasks. A complete-coverage algorithm is then used, a path-planning technique that allows the robot to pass over all points in the environment, avoiding unknown obstacles. Different coverage algorithms exist, but they fail working in unstructured environments. This paper details a complete-coverage algorithm for unstructured environments based on sensor information. Simulation results using a mobile robot validate the proposed approach.",
"title": ""
},
{
"docid": "1ee1adcfd73e9685eab4e2abd28183c7",
"text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.",
"title": ""
},
{
"docid": "45f100c3a1c7a990a86dac6480b5ff89",
"text": "Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulation driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)- based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships providing an accurate-information content-estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers on a simulation test. The experimental results proved the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels achieving a classification accuracy of 95%-97% on an average across all subjects.",
"title": ""
},
{
"docid": "161fab4195de0d0358de9bd74f3c0805",
"text": "Working with sensitive data is often a balancing act between privacy and integrity concerns. Consider, for instance, a medical researcher who has analyzed a patient database to judge the effectiveness of a new treatment and would now like to publish her findings. On the one hand, the patients may be concerned that the researcher's results contain too much information and accidentally leak some private fact about themselves; on the other hand, the readers of the published study may be concerned that the results contain too little information, limiting their ability to detect errors in the calculations or flaws in the methodology.\n This paper presents VerDP, a system for private data analysis that provides both strong integrity and strong differential privacy guarantees. VerDP accepts queries that are written in a special query language, and it processes them only if a) it can certify them as differentially private, and if b) it can prove the integrity of the result in zero knowledge. Our experimental evaluation shows that VerDP can successfully process several different queries from the differential privacy literature, and that the cost of generating and verifying the proofs is practical: for example, a histogram query over a 63,488-entry data set resulted in a 20 kB proof that took 32 EC2 instances less than two hours to generate, and that could be verified on a single machine in about one second.",
"title": ""
},
{
"docid": "3a1d66cdc06338857fc685a2bdc8b068",
"text": "UNLABELLED\nThe WARM study is a longitudinal cohort study following infants of mothers with schizophrenia, bipolar disorder, depression and control from pregnancy to infant 1 year of age.\n\n\nBACKGROUND\nChildren of parents diagnosed with complex mental health problems including schizophrenia, bipolar disorder and depression, are at increased risk of developing mental health problems compared to the general population. Little is known regarding the early developmental trajectories of infants who are at ultra-high risk and in particular the balance of risk and protective factors expressed in the quality of early caregiver-interaction.\n\n\nMETHODS/DESIGN\nWe are establishing a cohort of pregnant women with a lifetime diagnosis of schizophrenia, bipolar disorder, major depressive disorder and a non-psychiatric control group. Factors in the parents, the infant and the social environment will be evaluated at 1, 4, 16 and 52 weeks in terms of evolution of very early indicators of developmental risk and resilience focusing on three possible environmental transmission mechanisms: stress, maternal caregiver representation, and caregiver-infant interaction.\n\n\nDISCUSSION\nThe study will provide data on very early risk developmental status and associated psychosocial risk factors, which will be important for developing targeted preventive interventions for infants of parents with severe mental disorder.\n\n\nTRIAL REGISTRATION\nNCT02306551, date of registration November 12, 2014.",
"title": ""
},
{
"docid": "d9224bda0061d4a266aa961f61ef957e",
"text": "Exploratory search activities tend to span multiple sessions and involve finding, analyzing and evaluating information found through many queries. Typical search systems, on the other hand, are designed to support single query, precision-oriented search tasks. We describe a search interface and system design of a multi-session exploratory search system, discuss design challenges encountered, and chronicle the evolution of our design. Our design describes novel displays for visualizing retrieval history information, and introduces ambient displays and persuasive elements to interactive information retrieval.",
"title": ""
},
{
"docid": "b17d66ba94b2d31dccaa2f29cb57f9c6",
"text": "Finding information on a large web site can be a difficult and time-consuming process. Recommender systems can help users find information by providing them with personalized suggestions. In this paper the recommender system PRES is described that uses content-based filtering techniques to suggest small articles about home improvements. A domain such as this implicates that the user model has to be very dynamic and learned from positive feedback only. The relevance feedback method seems to be a good candidate for learning such a user model, as it is both efficient and dynamic.",
"title": ""
},
{
"docid": "779e169d273fd34e15baba72c9c9ca2d",
"text": "This paper proposes an optimization-based model for generic document summarization. The model generates a summary by extracting salient sentences from documents. This approach uses the sentence-to-document collection, the summary-to-document collection and the sentence-to-sentence relations to select salient sentences from given document collection and reduce redundancy in the summary. To solve the optimization problem has been created an improved differential evolution algorithm. The algorithm can adjust crossover rate adaptively according to the fitness of individuals. We implemented the proposed model on multi-document summarization task. Experiments have been performed on DUC2002 and DUC2004 data sets. The experimental results provide strong evidence that the proposed optimization-based approach is a viable method for document summarization. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d89e4f53616af22db9c7364f217ff46c",
"text": "We propose an automatic method for measuring content-based music similarity, enhancing the current generation of music search engines and recommended systems. Many previous approaches to track similarity require brute-force, pair-wise processing between all audio features in a database and therefore are not practical for large collections. However, in an Internet-connected world, where users have access to millions of musical tracks, efficiency is crucial. Our approach uses features extracted from unlabeled audio data and near-neigbor retrieval using a distance threshold, determined by analysis, to solve a range of retrieval tasks. The tasks require temporal features-analogous to the technique of shingling used for text retrieval. To measure similarity, we count pairs of audio shingles, between a query and target track, that are below a distance threshold. The distribution of between-shingle distances is different for each database; therefore, we present an analysis of the distribution of minimum distances between shingles and a method for estimating a distance threshold for optimal retrieval performance. The method is compatible with locality-sensitive hashing (LSH)-allowing implementation with retrieval times several orders of magnitude faster than those using exhaustive distance computations. We evaluate the performance of our proposed method on three contrasting music similarity tasks: retrieval of mis-attributed recordings (fingerprint), retrieval of the same work performed by different artists (cover songs), and retrieval of edited and sampled versions of a query track by remix artists (remixes). Our method achieves near-perfect performance in the first two tasks and 75% precision at 70% recall in the third task. Each task was performed on a test database comprising 4.5 million audio shingles.",
"title": ""
},
{
"docid": "5b7483a4dea12d8b07921c150ccc66ee",
"text": "OBJECTIVE\nWe reviewed the efficacy of occupational therapy-related interventions for adults with rheumatoid arthritis.\n\n\nMETHOD\nWe examined 51 Level I studies (19 physical activity, 32 psychoeducational) published 2000-2014 and identified from five databases. Interventions that focused solely on the upper or lower extremities were not included.\n\n\nRESULTS\nFindings related to key outcomes (activities of daily living, ability, pain, fatigue, depression, self-efficacy, disease symptoms) are presented. Strong evidence supports the use of aerobic exercise, resistive exercise, and aquatic therapy. Mixed to limited evidence supports dynamic exercise, Tai Chi, and yoga. Among the psychoeducation interventions, strong evidence supports the use of patient education, self-management, cognitive-behavioral approaches, multidisciplinary approaches, and joint protection, and limited or mixed evidence supports the use of assistive technology and emotional disclosure.\n\n\nCONCLUSION\nThe evidence supports interventions within the scope of occupational therapy practice for rheumatoid arthritis, but few interventions were occupation based.",
"title": ""
},
{
"docid": "7ec6540b44b23a0380dcb848239ccac4",
"text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.",
"title": ""
},
{
"docid": "679ef5858effa501192587dda21ad69c",
"text": "When travelers plan trips, landmark recommendation systems that consider the trip properties will conveniently aid travelers in determining the locations they will visit. Because interesting locations may vary based on the traveler and the situation, it is important to personalize the landmark recommendations by considering the traveler and the trip. In this paper, we propose an approach that adaptively recommends clusters of landmarks using geo-tagged social media.We first examine the impact of a trip’s spatial and temporal properties on the distribution of popular places through large-scale data analyses. In our approach, we compute the significance of landmarks for travelers based on their trip’s spatial and temporal properties. Next, we generate clusters of landmark recommendations, which have similar themes or are contiguous, using travel trajectory histories. Landmark recommendation performances based on our approach are evaluated against several baseline approaches. Our approach results in increased accuracy and satisfaction compared with the baseline approaches. Through a user study, we also verify that our approach is applicable to lesser-known places and reflects local events as well as seasonal changes. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ec905fd77dee3b5fbf24b7e73905bfb8",
"text": "The effects of exposure to violent video games on automatic associations with the self were investigated in a sample of 121 students. Playing the violent video game Doom led participants to associate themselves with aggressive traits and actions on the Implicit Association Test. In addition, self-reported prior exposure to violent video games predicted automatic aggressive self-concept, above and beyond self-reported aggression. Results suggest that playing violent video games can lead to the automatic learning of aggressive self-views.",
"title": ""
},
{
"docid": "f9afcc134abda1c919cf528cbc975b46",
"text": "Multimodal question answering in the cultural heritage domain allows visitors to museums, landmarks or other sites to ask questions in a more natural way. This in turn provides better user experiences. In this paper, we propose the construction of a golden standard dataset dedicated to aiding research into multimodal question answering in the cultural heritage domain. The dataset, soon to be released to the public, contains multimodal content about the fascinating old-Egyptian Amarna period, including images of typical artworks, documents about these artworks (containing images) and over 800 multimodal queries integrating visual and textual questions. The multimodal questions and related documents are all in English. The multimodal questions are linked to relevant paragraphs in the related documents that contain the answer to the multimodal query.",
"title": ""
},
{
"docid": "8b6d3b5fb8af809619119ee0f75cb3c6",
"text": "This paper mainly discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract a tongue from a medical image, which is one of the most important steps in diagnosis of traditional Chinese Medicine. We firstly present an effective method to locate the tongue body, getting the convinced foreground and background area in form of trimap. Then, use this trimap as the input for LBDM algorithm to implement the final segmentation. Experiment was carried out to evaluate the proposed scheme, using 480 samples of pictures with tongue, the results of which were compared with the corresponding ground truth. Experimental results and analysis demonstrated the feasibility and effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "95a3cc864c5f63b87df9c216856dbdb8",
"text": "Web Content Management Systems (WCMS) play an increasingly important role in the Internet’s evolution. They are software platforms that facilitate the implementation of a web site or an e-commerce and are gaining popularity due to its flexibility and ease of use. In this work, we explain from a tutorial perspective how to manage WCMS and what can be achieved by using them. With this aim, we select the most popular open-source WCMS; namely, Joomla!, WordPress, and Drupal. Then, we implement three websites that are equal in terms of requirements, visual aspect, and functionality, one for each WCMS. Through a qualitative comparative analysis, we show the advantages and drawbacks of each solution, and the complexity associated. On the other hand, security concerns can arise if WCMS are not appropriately used. Due to the key position that they occupy in today’s Internet, we perform a basic security analysis of the three implement websites in the second part of this work. Specifically, we explain vulnerabilities, security enhancements, which errors should not be done, and which WCMS is initially safer.",
"title": ""
},
{
"docid": "4e08aba1ff8d0a5d0d23763dad627cb8",
"text": "ion Real systems are di cult to specify and verify without abstrac tions We need to identify di erent kinds of abstractions perhaps tailored for certain kinds of systems or problem domains and we need to develop ways to justify them formally perhaps using mechanical help Reusable models and theories Rather than de ning models and theories from scratch each time a new application is tackled it would be better to have reusable and parameterized models and theories Combinations of mathematical theories Many safety critical systems have both digital and analog components These hybrid systems require reasoning about both discrete and continuous mathematics System developers would like to be able to predict how well their system will operate in the eld Indeed they often care more about performance than cor rectness Performance modeling borrows strongly from probability statistics and queueing theory Data structures and algorithms To handle larger search spaces and larger systems new data structures and algorithms e g more concise data structures for representing boolean functions are needed",
"title": ""
},
{
"docid": "fcdb662faf1bba425967381f8111be7a",
"text": "This paper reports a W-band solid-state power amplifier with an output power of 5.2W at 95 GHz and greater than 3 watts over the 94 to 98.5 GHz band. These SOA results were achieved by combining 12 GaN MMICs in a low-loss radial-line combiner network. The 12-way combiner demonstrates an overall combining efficiency of 87.5%, and excluding combiner conductor losses (0.3 dB), exhibits a combining efficiency of 93.7% at 95GHz. The size of the 12-way amplifier/combiner is only 2.39″ dia. × 1.5″ length. This work represents the first application of the radial-line combiner configuration to applications at W-band frequencies, and establishes new levels of performance for solid-state power amplifiers operating at these frequencies.",
"title": ""
}
] |
scidocsrr
|
2a20d0506afe23b957eba9c9255c9d6b
|
SVM Based Decision Support System for Heart Disease Classification with Integer-Coded Genetic Algorithm to Select Critical Features
|
[
{
"docid": "c688d24fd8362a16a19f830260386775",
"text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where|S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.",
"title": ""
},
{
"docid": "1a1268ef30c225740b35ac123650ceb0",
"text": "Support Vector Machines, one of the new techniques for pattern classification, have been widely used in many application areas. The kernel parameters setting for SVM in a training process impacts on the classification accuracy. Feature selection is another factor that impacts classification accuracy. The objective of this research is to simultaneously optimize the parameters and feature subset without degrading the SVM classification accuracy. We present a genetic algorithm approach for feature selection and parameters optimization to solve this kind of problem. We tried several real-world datasets using the proposed GA-based approach and the Grid algorithm, a traditional method of performing parameters searching. Compared with the Grid algorithm, our proposed GA-based approach significantly improves the classification accuracy and has fewer input features for support vector machines. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "95dbebf3ed125e2a4f0d901f42f09be3",
"text": "Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ~ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.",
"title": ""
},
{
"docid": "8621fff78e92e1e0e9ba898d5e2433ca",
"text": "This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.",
"title": ""
},
{
"docid": "f03e2e50acb9650099c15cdd88f525d9",
"text": "Social network research has begun to take advantage of finegrained communications regarding coordination, decisionmaking, and knowledge sharing. These studies, however, have not generally analyzed how external events are associated with a social network’s structure and communicative properties. Here, we study how external events are associated with a network’s change in structure and communications. Analyzing a complete dataset of millions of instant messages among the decision-makers in a large hedge fund and their network of outside contacts, we investigate the link between price shocks, network structure, and change in the affect and cognition of decision-makers embedded in the network. When price shocks occur the communication network tends not to display structural changes associated with adaptiveness. Rather, the network “turtles up”. It displays a propensity for higher clustering, strong tie interaction, and an intensification of insider vs. outsider communication. Further, we find changes in network structure predict shifts in cognitive and affective processes, execution of new transactions, and local optimality of transactions better than prices, revealing the important predictive relationship between network structure and collective behavior within a social network.",
"title": ""
},
{
"docid": "7a2d4032d79659a70ed2f8a6b75c4e71",
"text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.",
"title": ""
},
{
"docid": "6c88c8723d54262ae5839302bd3ded5a",
"text": "This paper surveys the reduced common-mode voltage pulsewidth modulation (RCMV-PWM) methods for three-phase voltage-source inverters, investigates their performance characteristics, and provides a comparison with the standard PWM methods. PWM methods are reviewed, and their pulse patterns and common-mode voltage (CMV) patterns are illustrated. The inverter input and output current ripple characteristics and output voltage linearity characteristics of each PWM method are thoroughly investigated by analytical methods, simulations, and experiments. The research results illustrate the advantages and disadvantages of the considered methods, and suggest the utilization of the near-state PWM and active zero state PWM1 methods as overall superior methods. The paper aids in the selection and application of appropriate PWM methods in inverter drives with low CMV requirements.",
"title": ""
},
{
"docid": "a55dd930b34c0d7fce69d8e7f108dfa7",
"text": "EduSummIT 2013 featured a working group that examined digital citizenship within a global context. Group members recognized that, given today’s international, regional, political, and social dynamics, the notion of “global” might be more aspirational than practical. The development of informed policies and practices serving and involving as many sectors of society as possible is desirable since a growing world’s population, including students in classrooms, will have continued access to the Internet, mobile devices and social media. Action steps to guide technology integration into educational settings must address the following factors: national and local policies, bandwidth and technology infrastructure, educational contexts, cyber-safety and cyberwellness practices and privacy accountability. Finally, in the process of developing and implementing positive and productive solutions, as many key members and stakeholders as possible who share in—and benefit from—students’ digital lives should be involved, from families and educators to law enforcement authorities, from telecommunication organizations to local, provincial and national leaders.",
"title": ""
},
{
"docid": "0347347608738b966ca4a62dfb37fdd7",
"text": "Much of the work done in the field of tangible interaction has focused on creating tools for learning; however, in many cases, little evidence has been provided that tangible interfaces offer educational benefits compared to more conventional interaction techniques. In this paper, we present a study comparing the use of a tangible and a graphical interface as part of an interactive computer programming and robotics exhibit that we designed for the Boston Museum of Science. In this study, we have collected observations of 260 museum visitors and conducted interviews with 13 family groups. Our results show that visitors found the tangible and the graphical systems equally easy to understand. However, with the tangible interface, visitors were significantly more likely to try the exhibit and significantly more likely to actively participate in groups. In turn, we show that regardless of the condition, involving multiple active participants leads to significantly longer interaction times. Finally, we examine the role of children and adults in each condition and present evidence that children are more actively involved in the tangible condition, an effect that seems to be especially strong for girls.",
"title": ""
},
{
"docid": "6fe413cf75a694217c30a9ef79fab589",
"text": "Zusammenfassung) Biometrics have been used for secure identification and authentication for more than two decades since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be listed under the topic of “Biometric Cryptosystems”. Although the security of some of these systems is questionable when they are utilized alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focused in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric based encryption, signature and remote authentication. We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al.. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. Thus, we introduce a new notion called “Weak-identity Privacy” and propose a new construction by combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain but without requiring a decryption key for the authentication decision of the server. Thus, leakage of the secret key of any system component does not affect the security of the scheme as opposed to the current biometric systems involving crypto-",
"title": ""
},
{
"docid": "ccafd3340850c5c1a4dfbedd411f1d62",
"text": "The paper predicts changes in global and regional incidences of armed conflict for the 2010–2050 period. The predictions are based on a dynamic multinomial logit model estimation on a 1970–2009 cross-sectional dataset of changes between no armed conflict, minor conflict, and major conflict. Core exogenous predictors are population size, infant mortality rates, demographic composition, education levels, oil dependence, ethnic cleavages, and neighborhood characteristics. Predictions are obtained through simulating the behavior of the conflict variable implied by the estimates from this model. We use projections for the 2011–2050 period for the predictors from the UN World Population Prospects and the International Institute for Applied Systems Analysis. We treat conflicts, recent conflict history, and neighboring conflicts as endogenous variables. Out-of-sample validation of predictions for 2007–2009 (based on estimates for the 1970–2000 period) indicates that the model predicts well, with an AUC of 0.937. Using a p > 0.30 threshold for positive prediction, the True Positive Rate 7–9 years into the future is 0.79 and the False Positive Rate 0.085. We predict a continued decline in the proportion of the world’s countries that have internal armed conflict, from about 15% in 2009 to 7% in 2050. The decline is particularly strong in the Western Asia and North Africa region, and less clear in Africa South of Sahara. The remaining conflict countries will increasingly be concentrated in East, Central, and Southern Africa and in East and South Asia. ∗An earlier version of this paper was presented to the ISA Annual Convention 2009, New York, 15–18 Feb. The research was funded by the Norwegian Research Council grant no. 163115/V10. Thanks to Ken Benoit, Mike Colaresi, Scott Gates, Nils Petter Gleditsch, Joe Hewitt, Bjørn Høyland, Andy Mack, Näıma Mouhleb, Gerald Schneider, and Phil Schrodt for valuable comments.",
"title": ""
},
{
"docid": "158cdd1c7740f30ec87e10a19171721b",
"text": "The current practice of physical diagnosis is dependent on physician skills and biases, inductive reasoning, and time efficiency. Although the clinical utility of echocardiography is well known, few data exist on how to integrate 2-dimensional screening \"quick-look\" ultrasound applications into a novel, modernized cardiac physical examination. We discuss the evidence basis behind ultrasound \"signs\" pertinent to the cardiovascular system and elemental in synthesis of bedside diagnoses and propose the application of a brief cardiac limited ultrasound examination based on these signs. An ultrasound-augmented cardiac physical examination can be taught in traditional medical education and has the potential to improve bedside diagnosis and patient care.",
"title": ""
},
{
"docid": "7aca3e7f9409fa1381a309d304eb898d",
"text": "The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of a TBSR are (a) the source nodes send data and notification to sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security. (b) Furthermore, the data and notification adopt a dynamic probability of marking and logging approach during the routing. Therefore, when attacked, the network will adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined based on the level of battery remaining; when nodes harvest more energy, the probability of marking is higher, which can improve network security. Because if the probability of marking is higher, the number of marked nodes on the data packet routing path will be more, and the sink will be more likely to trace back the data packet routing path and find malicious nodes according to this notification. When data packets are routed again, they tend to bypass these malicious nodes, which make the success rate of routing higher and lead to improved network security. When the battery level is low, the probability of marking will be decreased, which is able to save energy. For logging, when the battery level is high, the network adopts a larger probability of marking and smaller probability of logging to transmit notification to the sink, which can reserve enough storage space to meet the storage demand for the period of the battery on low level; when the battery level is low, increasing the probability of logging can reduce energy consumption. After the level of battery remaining is high enough, nodes then send the notification which was logged before to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme has been improved comprehensively; it can effectively increase the quantity of notification received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%.",
"title": ""
},
{
"docid": "8d8db8a8cf9dee121cb93e92577a03ea",
"text": "Nowadays, non-photorealistic rendering is an area in computer graphics that tries to simulate what artists do and the tools they use. Stippling illustrations with felt-tipped colour pen is not a commonly used technique by artists due to its complexity. In this paper we present a new method to simulate stippling illustrations with felt-tipped colour pen from a photograph or an image. This method infers a probability function with an expert system from some rules given by the artist and then simulates the behaviour of the artist when placing the dots on the illustration by means of a stochastic algorithm.",
"title": ""
},
{
"docid": "5474d000acf6c20708ed73b5a7e38a0b",
"text": "The primary objective of the research is to estimate the dependence between hair mercury content, hair selenium, mercury-to-selenium ratio, serum lipid spectrum, and gamma-glutamyl transferase (GGT) activity in 63 adults (40 men and 23 women). Serum triglyceride (TG) concentration in the high-mercury group significantly exceeded the values obtained for low- and medium-mercury groups by 72 and 42 %, respectively. Serum GGT activity in the examinees from high-Hg group significantly exceeded the values of the first and the second groups by 75 and 28 %, respectively. Statistical analysis of the male sample revealed similar dependences. Surprisingly, no significant changes in the parameters analyzed were detected in the female sample. In all analyzed samples, hair mercury was not associated with hair selenium concentrations. Significant correlation between hair mercury content and serum TG concentration (r = 0.531) and GGT activity (r = 0.524) in the general sample of the examinees was detected. The respective correlations were observed in the male sample. Hair mercury-to-selenium ratios significantly correlated with body weight (r = 0.310), body mass index (r = 0.250), serum TG (r = 0.389), atherogenic index (r = 0.257), and GGT activity (r = 0.393). The same correlations were observed in the male sample. Hg/Se ratio in women did not correlate with the analyzed parameters. Generally, the results of the current study show the following: (1) hair mercury is associated with serum TG concentration and GGT activity in men, (2) hair selenium content is not related to hair mercury concentration, and (3) mercury-to-selenium ratio correlates with lipid spectrum parameters and GGT activity.",
"title": ""
},
{
"docid": "ce3d81c74ef3918222ad7d2e2408bdb0",
"text": "This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.\nA key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.\nSection 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.",
"title": ""
},
{
"docid": "368a37e8247d8a6f446b31f1dc0f635e",
"text": "In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.",
"title": ""
},
{
"docid": "9a90164fb1f41bb36966487f86988f77",
"text": "Coordination is important in software development because it leads to benefi ts such as cost savings, shorter development cycles, and better-integrated products. Team cognition research suggests that members coordinate through team knowledge, but this perspective has only been investigated in real-time collocated tasks and we know little about which types of team knowledge best help coordination in the most geographically distributed software work. In this fi eld study, we investigate the coordination needs of software teams, how team knowledge affects coordination, and how this effect is infl uenced by geographic dispersion. Our fi ndings show that software teams have three distinct types of coordination needs—technical, temporal, and process—and that these needs vary with the members’ role; geographic distance has a negative effect on coordination, but is mitigated by shared knowledge of the team and presence awareness; and shared task knowledge is more important for coordination among collocated members. We articulate propositions for future research in this area based on our analysis.",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "0560c6e9f4de466cc5fcef9b1eba11ce",
"text": "Current methods for estimating force from tactile sensor signals are either inaccurate analytic models or taskspecific learned models. In this paper, we explore learning a robust model that maps tactile sensor signals to force. We specifically explore learning a mapping for the SynTouch BioTac sensor via neural networks. We propose a voxelized input feature layer for spatial signals and leverage information about the sensor surface to regularize the loss function. To learn a robust tactile force model that transfers across tasks, we generate ground truth data from three different sources: (1) the BioTac rigidly mounted to a force torque (FT) sensor, (2) a robot interacting with a ball rigidly attached to the same FT sensor, and (3) through force inference on a planar pushing task by formalizing the mechanics as a system of particles and optimizing over the object motion. A total of 140k samples were collected from the three sources. We achieve a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. Additionally, we evaluate the learned force model in a force feedback grasp controller performing object lifting and gentle placement. Our results can be found on https://sites.google.com/view/tactile-force. I. MOTIVATION & RELATED WORK Tactile perception is an important modality, enabling robots to gain critical information for safe interaction in the physical world [1–3]. The advent of sophisticated tactile sensors [4] with high fidelity signals allows for inferring varied information such as object identity and pose, surface texture, and slip between the object and robot [5–13]. However, using these sensors for force feedback control has been limited 1 NVIDIA, USA. 2 University of Utah Robotics Center and the School of Computing, University of Utah, Salt Lake City, UT, USA. bala@cs.utah.edu 3 Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, GA, USA. 4 University of Washington, Paul G. Allen School for Comupter Science & Engineering, Seattle, WA, USA to simple incremental controllers conditioned on detection of salient events (e.g., slip or contact) [10, 14] or learning taskspecific feedback policies on the tactile signals [15–17]. One limiting factor has been the inaccuracy of functions to map the tactile signals to force robustly across different tasks. Current methods for force estimation on the SynTouch BioTac [18] fail to cover the entire range of forces applied during typical manipulation tasks. Analytic methods [19, 20] tend to produce very noisy estimates at small force values and their accuracy decreases as the imparted force angle relative to the sensor surface normal becomes large (i.e., a large shear component relative to the compression force). On the other hand, learned force models [21, 22] tend to overfit to the dataset used in training and have not been sufficiently validated in predicting force across varied tasks. More specifically, Wettel and Loeb [21] use machine learning techniques to estimate the force, contact location, and object curvature when a tactile sensor interacts with an object. Lin et al. [19] improve upon [21], formulating analytic functions for estimation of the contact point, force, and torque from the BioTac sensor readings. Navarro et al. 
[20] explore calibration of the force magnitude estimates by recording the DC pressure signal when the sensor is in contact with a force plate. They use these values in a linear least squares formulation to estimate the gain. While they can estimate the magnitude of force, they cannot estimate force direction. Su et al. [22] explore using feed-forward neural networks to learn a model that maps BioTac signals to force estimates. The neural network more accurately estimates forces than the linear model from [19] and is used to perform grasp stabilization. Importantly, none of these methods validate their force estimates using a data source different from the method used to generate the training data. They also lack experimental comparison between different approaches in the context of robotic manipulation tasks. In this paper, we attempt to address these shortcomings, by collecting a large scale ground truth dataset from different methods and by leveraging the sensor surface and spatial information in our proposed neural network architecture. For one of our collection methods, we infer force from the motion of an object on a planar surface, by formalizing the interaction as a system of particles, a deviation from the well-established velocity model for planar pushing [23] which does not reason about force magnitude. This scheme of force estimation allows us to obtain accurate small-scale forces (0.1-2N), enabling us to learn a precise force prediction model. Motivated by [24], we compare our proposed method with the current state-of-the-art methods for force estimation for the BioTac sensor. We specifically compare the analytic model from [19] and the best performing feed-forward neural network model from [22]. We compare both in terms of force estimation accuracy on our dataset and also empirical experiments on a robot manipulation task. To summarize, this paper makes the following contributions: 1) We provide a novel method to infer force from object motion on a planar surface by formalizing the mechanics as a system of particles and solving for the force in a least squares minimization problem, given the object motion and the point on the object where the force is imparted. 2) We introduce a novel 3D voxel grid, neural network encoding of tactile signals enabling the network to better leverage spatial relations in the signal. We further tailor our learning to the tactile sensor through the introduction of a novel loss function used in training that scales the loss as a function of the angular distance between the imparted force and the surface normal. 3) We collected a large-scale dataset for the BioTac sensor, consisting of over 600 pushing episodes and 200 interactions between an arm-hand system equipped with the BioTac sensors and a force torque sensor. We validate these contributions on our dataset and in an autonomous pick and place task. We show that our proposed method robustly learns a model to estimate forces from the BioTac tactile signals that generalize across multiple robot tasks. Our method improves upon the state of the art [19, 22] in tactile force estimation for the BioTac sensor achieving a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. II. PROBLEM DEFINITION & PROPOSED APPROACH We describe the sensor’s states in the following section, followed by a formal definition of the problem. 
We then describe the computation of ground truth force from planar pushing in Sec. II-C and our network architecture in Sec. II-D.",
"title": ""
},
{
"docid": "5da2747dd2c3fe5263d8bfba6e23de1f",
"text": "We propose to transfer the content of a text written in a certain style to an alternative text written in a different style, while maintaining as much as possible of the original meaning. Our work is inspired by recent progress of applying style transfer to images, as well as attempts to replicate the results to text. Our model is a deep neural network based on Generative Adversarial Networks (GAN). Our novelty is replacing the discrete next-word prediction with prediction in the embedding space, which provides two benefits (1) train the GAN without using gradient approximations and (2) provide semantically related results even for failure cases.",
"title": ""
},
{
"docid": "88d2fd675e5d0a53ff0834505a438164",
"text": "BACKGROUND\nMany healthcare organizations have implemented adverse event reporting systems in the hope of learning from experience to prevent adverse events and medical errors. However, a number of these applications have failed or not been implemented as predicted.\n\n\nOBJECTIVE\nThis study presents an extended technology acceptance model that integrates variables connoting trust and management support into the model to investigate what determines acceptance of adverse event reporting systems by healthcare professionals.\n\n\nMETHOD\nThe proposed model was empirically tested using data collected from a survey in the hospital environment. A confirmatory factor analysis was performed to examine the reliability and validity of the measurement model, and a structural equation modeling technique was used to evaluate the causal model.\n\n\nRESULTS\nThe results indicated that perceived usefulness, perceived ease of use, subjective norm, and trust had a significant effect on a professional's intention to use an adverse event reporting system. Among them, subjective norm had the most contribution (total effect). Perceived ease of use and subjective norm also had a direct effect on perceived usefulness and trust, respectively. Management support had a direct effect on perceived usefulness, perceived ease of use, and subjective norm.\n\n\nCONCLUSION\nThe proposed model provides a means to understand what factors determine the behavioral intention of healthcare professionals to use an adverse event reporting system and how this may affect future use. In addition, understanding the factors contributing to behavioral intent may potentially be used in advance of system development to predict reporting systems acceptance.",
"title": ""
}
] |
scidocsrr
|
20a77d955a7015fd6a195968a0e8bfa9
|
The effect of egocentric body movements on users' navigation performance and spatial memory in zoomable user interfaces
|
[
{
"docid": "2b9733f936f39d0bb06b8f89a95f31e4",
"text": "In order to improve the three-dimensional (3D) exploration of virtual spaces above a tabletop, we developed a set of navigation techniques using a handheld magic lens. These techniques allow for an intuitive interaction with two-dimensional and 3D information spaces, for which we contribute a classification into volumetric, layered, zoomable, and temporal spaces. The proposed PaperLens system uses a tracked sheet of paper to navigate these spaces with regard to the Z-dimension (height above the tabletop). A formative user study provided valuable feedback for the improvement of the PaperLens system with respect to layer interaction and navigation. In particular, the problem of keeping the focus on selected layers was addressed. We also propose additional vertical displays in order to provide further contextual clues.",
"title": ""
}
] |
[
{
"docid": "fe407f4983ef6cc2e257d63a173c8487",
"text": "We present a semantically rich graph representation for indoor robotic navigation. Our graph representation encodes: semantic locations such as offices or corridors as nodes, and navigational behaviors such as enter office or cross a corridor as edges. In particular, our navigational behaviors operate directly from visual inputs to produce motor controls and are implemented with deep learning architectures. This enables the robot to avoid explicit computation of its precise location or the geometry of the environment, and enables navigation at a higher level of semantic abstraction. We evaluate the effectiveness of our representation by simulating navigation tasks in a large number of virtual environments. Our results show that using a simple sets of perceptual and navigational behaviors, the proposed approach can successfully guide the way of the robot as it completes navigational missions such as going to a specific office. Furthermore, our implementation shows to be effective to control the selection and switching of behaviors.",
"title": ""
},
{
"docid": "b78f1e6a5e93c1ad394b1cade293829f",
"text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing",
"title": ""
},
{
"docid": "218ca177bf3a5b78482b2064608505fc",
"text": "Wideband dual-polarization performance is desired for low-noise receivers and radiom eters at cent imete r and m illimeter wavelengths. The use of a waveguide orthomode transducer (OMT) can increase spectral coverage and sensitivity while reducing exit aperture size, optical spill, and instrumental polarization offsets. For these reasons, an orthomode junction is favored over a traditional quasi-op tical wire grid for focal plane imaging arrays from a systems perspective. The fabrication and pe rformance o f wideban d symm etric Bøifot OM T junctions at K -, Ka-, Q-, and W-bands are described. Typical WR10.0 units have an insertion loss of <0.2 dB , return loss ~20dB, and >40dB isolation over a >75-to-110 GHz band. The OMT operates with reduced ohmic losses at cryogenic temperatures.",
"title": ""
},
{
"docid": "51030b1a05af38096a6ba72660f8bdf2",
"text": "As a new type of e-commerce, social commerce is an emerging marketing form in which business is conducted via social networking platforms. It is playing an increasingly important role in influencing consumers’ purchase intentions. Social commerce uses friendships on social networking platforms, such as Facebook and Twitter, as the vehicle for social sharing about products or sellers to induce interest in a product, thereby increasing the purchase intention. In this paper, we develop and validate a conceptual model of how social factors, such as social support, seller uncertainty, and product uncertainty, influence onsumer purchasing intentions ocial support eller uncertainty roduct uncertainty hird-party infomediaries users’ purchasing behaviors in social commerce. This study aims to provide an understanding of the relationship between user behavior and social factors on social networking platforms. Using the largest social networking website in China, renren.com, this study finds that social support, seller uncertainty, and product uncertainty affect user behaviors. The results further show that social factors can significantly enhance users’ purchase intentions in social shopping. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f635d570b827d68e057afcaadca791c",
"text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
},
{
"docid": "b1a08b10ea79a250a62030a2987b67a6",
"text": "Most text mining tasks, including clustering and topic detection, are based on statistical methods that treat text as bags of words. Semantics in the text is largely ignored in the mining process, and mining results often have low interpretability. One particular challenge faced by such approaches lies in short text understanding, as short texts lack enough content from which statistical conclusions can be drawn easily. In this paper, we improve text understanding by using a probabilistic knowledgebase that is as rich as our mental world in terms of the concepts (of worldly facts) it contains. We then develop a Bayesian inference mechanism to conceptualize words and short text. We conducted comprehensive experiments on conceptualizing textual terms, and clustering short pieces of text such as Twitter messages. Compared to purely statistical methods such as latent semantic topic modeling or methods that use existing knowledgebases (e.g., WordNet, Freebase and Wikipedia), our approach brings significant improvements in short text understanding as reflected by the clustering accuracy.",
"title": ""
},
{
"docid": "472f1b7f3ebf1d8af950d9d348cafc98",
"text": "We analyze convergence of GANs through the lens of online learning and game theory, to understand what makes it hard to achieve consistent stable training in practice. We identify that the underlying game here can be ill-posed and poorly conditioned, and propose a simple regularization scheme based on local perturbations of the input data to address these issues. Currently, the methods that improve stability either impose additional computational costs or require the usage of specific architectures/modeling objectives. Further, we show that WGAN-GP, which is the state-of-the-art stable training procedure, is similar to LS-GAN, does not follow from KR-duality and can be too restrictive in general. In contrast, our proposed algorithm is fast, simple to implement and achieves competitive performance in a stable fashion across a variety of architectures and objective functions with minimal hyperparameter tuning. We show significant improvements over WGAN-GP across these conditions.",
"title": ""
},
{
"docid": "c632d3bfb27987e74cc69865627388bf",
"text": "Previous studies and surgeon interviews have shown that most surgeons prefer quality standard de nition (SD)TV 2D scopes to rst generation 3D endoscopes. The use of a telesurgical system has eased many of the design constraints on traditional endoscopes, enabling the design of a high quality SDTV 3D endoscope and an HDTV endoscopic system with outstanding resolution. The purpose of this study was to examine surgeon performance and preference given the choice between these. The study involved two perceptual tasks and four visual-motor tasks using a telesurgical system using the 2D HDTV endoscope and the SDTV endoscope in both 2D and 3D mode. The use of a telesurgical system enabled recording of all the subjects motions for later analysis. Contrary to experience with early 3D scopes and SDTV 2D scopes, this study showed that despite the superior resolution of the HDTV system surgeons performed better with and preferred the SDTV 3D scope.",
"title": ""
},
{
"docid": "4054713a00a9a2af6eb65f56433a943e",
"text": "The question why deep learning algorithms perform so well in practice has attracted increasing research interest. However, most of well-established approaches, such as hypothesis capacity, robustness or sparseness, have not provided complete explanations, due to the high complexity of the deep learning algorithms and their inherent randomness. In this work, we introduce a new approach – ensemble robustness – towards characterizing the generalization performance of generic deep learning algorithms. Ensemble robustness concerns robustness of the population of the hypotheses that may be output by a learning algorithm. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbation is bounded in average, or equivalently, the performance variance of the algorithm is small. Quantifying ensemble robustness of various deep learning algorithms may be difficult analytically. However, extensive simulations for seven common deep learning algorithms for different network architectures provide supporting evidence for our claims. Furthermore, our work explains the good performance of several published deep learning algorithms.",
"title": ""
},
{
"docid": "d06dc916942498014f9d00498c1d1d1f",
"text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦",
"title": ""
},
{
"docid": "988c161ceae388f5dbcdcc575a9fa465",
"text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.",
"title": ""
},
{
"docid": "2efb71ffb35bd05c7a124ffe8ad8e684",
"text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.",
"title": ""
},
{
"docid": "a89c53f4fbe47e7a5e49193f0786cd6d",
"text": "Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children.",
"title": ""
},
{
"docid": "87d15c47894210ad306948f32122a2c4",
"text": "We design and implement MobileInsight, a software tool that collects, analyzes and exploits runtime network information from operational cellular networks. MobileInsight runs on commercial off-the-shelf phones without extra hardware or additional support from operators. It exposes protocol messages on both control plane and (below IP) data plane from the 3G/4G chipset. It provides in-device protocol analysis and operation logic inference. It further offers a simple API, through which developers and researchers obtain access to low-level network information for their mobile applications. We have built three showcases to illustrate how MobileInsight is applied to cellular network research.",
"title": ""
},
{
"docid": "3716c221969ca93dac889820498d8dd4",
"text": "Affective Loop Experiences What Are They? p. 1 Fine Processing p. 13 Mass Interpersonal Persuasion: An Early View of a New Phenomenon p. 23 Social Network Systems Online Persuasion in Facebook and Mixi: A Cross-Cultural Comparison p. 35 Website Credibility, Active Trust and Behavioural Intent p. 47 Network Awareness, Social Context and Persuasion p. 58 Knowledge Management Persuasion in Knowledge-Based Recommendation p. 71 Persuasive Technology Design A Rhetorical Approach p. 83 Benevolence and Effectiveness: Persuasive Technology's Spillover Effects in Retail Settings p. 94",
"title": ""
},
{
"docid": "13ae30bc5bcb0714fe752fbe9c7e5de8",
"text": "The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.",
"title": ""
},
{
"docid": "f489708f15f3e5cdd15f669fb9979488",
"text": "Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. Inspired by this, we introduce strategic object oriented reinforcement learning (SOORL) to learn simple dynamics model through automatic model selection and perform efficient planning with strategic exploration. We compare different exploration strategies in a model-based setting in which exact planning is impossible. Additionally, we test our approach on perhaps the hardest Atari game Pitfall! and achieve significantly improved exploration and performance over prior methods.",
"title": ""
},
{
"docid": "54d223a2a00cbda71ddf3f1b29f1ebed",
"text": "Much of the data of scientific interest, particularly when independence of data is not assumed, can be represented in the form of information networks where data nodes are joined together to form edges corresponding to some kind of associations or relationships. Such information networks abound, like protein interactions in biology, web page hyperlink connections in information retrieval on the Web, cellphone call graphs in telecommunication, co-authorships in bibliometrics, crime event connections in criminology, etc. All these networks, also known as social networks, share a common property, the formation of connected groups of information nodes, called community structures. These groups are densely connected nodes with sparse connections outside the group. Finding these communities is an important task for the discovery of underlying structures in social networks, and has recently attracted much attention in data mining research. In this paper, we present Top Leaders, a new community mining approach that, simply put, regards a community as a set of followers congregating around a potential leader. Our algorithm starts by identifying promising leaders in a given network then iteratively assembles followers to their closest leaders to form communities, and subsequently finds new leaders in each group around which to gather followers again until convergence. Our intuitions are based on proven observations in social networks and the results are very promising. Experimental results on benchmark networks verify the feasibility and effectiveness of our new community mining approach.",
"title": ""
},
{
"docid": "e8d102a7b00f81cefc4b1db043a041f8",
"text": "Microelectrode measurements can be used to investigate both the intracellular pools of ions and membrane transport processes of single living cells. Microelectrodes can report these processes in the surface layers of root and leaf cells of intact plants. By careful manipulation of the plant, a minimum of disruption is produced and therefore the information obtained from these measurements most probably represents the 'in vivo' situation. Microelectrodes can be used to assay for the activity of particular transport systems in the plasma membrane of cells. Compartmental concentrations of inorganic metabolite ions have been measured by several different methods and the results obtained for the cytosol are compared. Ion-selective microelectrodes have been used to measure the activities of ions in the apoplast, cytosol and vacuole of single cells. New sensors for these microelectrodes are being produced which offer lower detection limits and the opportunity to measure other previously unmeasured ions. Measurements can be used to determine the intracellular steady-state activities or report the response of cells to environmental changes.",
"title": ""
}
] |
scidocsrr
|
1c06f38f55e56a8bab53d57d5f7fd8bf
|
Gated Graph Sequence Neural Networks
|
[
{
"docid": "8d83568ca0c89b1a6e344341bb92c2d0",
"text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"title": ""
}
] |
[
{
"docid": "5238ae08b15854af54274e1c2b118d54",
"text": "One-dimensional fractional anomalous sub-diffusion equations on an unbounded domain are considered in our work. Beginning with the derivation of the exact artificial boundary conditions, the original problem on an unbounded domain is converted into mainly solving an initial-boundary value problem on a finite computational domain. The main contribution of our work, as compared with the previous work, lies in the reduction of fractional differential equations on an unbounded domain by using artificial boundary conditions and construction of the corresponding finite difference scheme with the help of method of order reduction. The difficulty is the treatment of Neumann condition on the artificial boundary, which involves the time-fractional derivative operator. The stability and convergence of the scheme are proven using the discrete energy method. Two numerical examples clarify the effectiveness and accuracy of the proposed method. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d3431bc21cde7bd96fe4c70d6ea6657a",
"text": "Chip-multiprocessors are quickly gaining momentum in all segments of computing. However, the practical success of CMPs strongly depends on addressing the difficulty of multithreaded application development. To address this challenge, it is necessary to co-develop new CMP architecture with novel programming models. Currently, architecture research relies on software simulators which are too slow to facilitate interesting experiments with CMP software without using small datasets or significantly reducing the level of detail in the simulated models. An alternative to simulation is to exploit the rich capabilities of modern FPGAs to create FPGA-based platforms for novel CMP research. This paper presents ATLAS, the first prototype for CMPs with hardware support for Transactional Memory (TM), a technology aiming to simplify parallel programming. ATLAS uses the BEE2 multi-FPGA board to provide a system with 8 PowerPC cores that run at 100MHz and runs Linux. ATLAS provides significant benefits for CMP research such as 100x performance improvement over a software simulator and good visibility that helps with software tuning and architectural improvements. In addition to presenting and evaluating ATLAS, we share our observations about building a FPGA-based framework for CMP research. Specifically, we address issues such as overall performance, challenges of mapping ASIC-style CMP RTL on to FPGAs, software support, the selection criteria for the base processor, and the challenges of using pre-designed IP libraries.",
"title": ""
},
{
"docid": "4608c8ca2cf58ca9388c25bb590a71df",
"text": "Life expectancy in most countries has been increasing continually over the several few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.",
"title": ""
},
{
"docid": "e2d431708d34533f4390d17a21bc7373",
"text": "Credit Derivatives are continuing to enjoy major growth in the financial markets, aided and abetted by sophisticated product development and the expansion of product applications beyond price management to the strategic management of portfolio risk. As Blythe Masters, global head of credit derivatives marketing at J.P. Morgan in New York points out: \" In bypassing barriers between different classes, maturities, rating categories, debt seniority levels and so on, credit derivatives are creating enormous opportunities to exploit and profit from associated discontinuities in the pricing of credit risk \". With such intense and rapid product development Risk Publications is delighted to introduce the first Guide to Credit Derivatives, a joint project with J.P. Morgan, a pioneer in the use of credit derivatives, with contributions from the RiskMetrics Group, a leading provider of risk management research, data, software, and education. The guide will be of great value to risk managers addressing portfolio concentration risk, issuers seeking to minimise the cost of liquidity in the debt capital markets and investors pursuing assets that offer attractive relative value.",
"title": ""
},
{
"docid": "657614eba108bd1e58315299ac29ee7f",
"text": "In this research, an intelligent system is designed between the user and the database system which accepts natural language input and then converts it into an SQL query. The research focuses on incorporating complex queries along with simple queries irrespective of the database. The system accommodates aggregate functions, multiple conditions in WHERE clause, advanced clauses like ORDER BY, GROUP BY and HAVING. The system handles single sentence natural language inputs, which are with respect to selected database. The research currently concentrates on MySQL database system. The natural language statement goes through various stages of Natural Language Processing like morphological, lexical, syntactic and semantic analysis resulting in SQL query formation.",
"title": ""
},
{
"docid": "90469bbf7cf3216b2ab1ee8441fbce14",
"text": "This work presents the evolution of a solution for predictive maintenance to a Big Data environment. The proposed adaptation aims for predicting failures on wind turbines using a data-driven solution deployed in the cloud and which is composed by three main modules. (i) A predictive model generator which generates predictive models for each monitored wind turbine by means of Random Forest algorithm. (ii) A monitoring agent that makes predictions every 10 minutes about failures in wind turbines during the next hour. Finally, (iii) a dashboard where given predictions can be visualized. To implement the solution Apache Spark, Apache Kafka, Apache Mesos and HDFS have been used. Therefore, we have improved the previous work in terms of data process speed, scalability and automation. In addition, we have provided fault-tolerant functionality with a centralized access point from where the status of all the wind turbines of a company localized all over the world can be monitored, reducing O&M costs.",
"title": ""
},
{
"docid": "a3d32ccd0e461c3d47dbec0fb12398fa",
"text": "Ever increasing societal demands for uninterrupted work are causing unparalleled amounts of sleep deprivation among workers. Sleep deprivation has been linked to safety problems ranging from medical misdiagnosis to industrial and vehicular accidents. Microsleeps (very brief intrusions of sleep into wakefulness) are usually cited as the cause of the performance decrements during sleep deprivation. Changes in a more basic physiological phenomenon, attentional shift, were hypothesized to be additional factors in performance declines. The current study examined the effects of 36 hours of sleep deprivation on the electrodermal-orienting response (OR), a measure of attentional shift or capture. Subjects were 71 male undergraduate students, who were divided into sleep deprivation and control (non-sleep deprivation) groups. The expected negative effects of sleep deprivation on performance were noted in increased reaction times and increased variability in the sleep-deprived group on attention-demanding cognitive tasks. OR latency was found to be significantly delayed after sleep deprivation, OR amplitude was significantly decreased, and habituation of the OR was significantly faster during sleep deprivation. These findings indicate impaired attention, the first revealing slowed shift of attention to novel stimuli, the second indicating decreased attentional allocation to stimuli, and the third revealing more rapid loss of attention to repeated stimuli. These phenomena may be factors in the impaired cognitive performance seen during sleep deprivation.",
"title": ""
},
{
"docid": "864ab702d0b45235efe66cd9e3bc5e66",
"text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.",
"title": ""
},
{
"docid": "a32956703826761d16bba1a9665b215e",
"text": "Triangle meshes are widely used in representing surfaces in computer vision and computer graphics. Although 2D image processingbased edge detection techniques have been popular in many application areas, they are not well developed for surfaces represented by triangle meshes. This paper proposes a robust edge detection algorithm for triangle meshes and its applications to surface segmentation and adaptive surface smoothing. The proposed edge detection technique is based on eigen analysis of the surface normal vector field in a geodesic window. To compute the edge strength of a certain vertex, the neighboring vertices in a specified geodesic distance are involved. Edge information are used further to segment the surfaces with watershed algorithm and to achieve edgepreserved, adaptive surface smoothing. The proposed algorithm is novel in robustly detecting edges on triangle meshes against noise. The 3D watershed algorithm is an extension from previous work. Experimental results on surfaces reconstructed from multi-view real range images are presented.",
"title": ""
},
{
"docid": "b005d4a35c452d965b69d20a80c97d07",
"text": "User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) How best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) How well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) How do they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations.",
"title": ""
},
{
"docid": "aa818a7e3e8be9dd46b836e6e507130a",
"text": "In this paper, we overview some Semantic Web technologies and describe the Music Ontology: a formal framework for dealing with music-related information on the Semantic Web, including editorial, cultural and acoustic information. We detail how this ontology can act as a grounding for more domain-specific knowledge representation. In addition, we describe current projects involving the Music Ontology and interlinked repositories of musicrelated knowledge.",
"title": ""
},
{
"docid": "7148408c07e6caee0b8f7cb1ff95443b",
"text": "Kefir is a fermented milk drink produced by the actions of bacteria and yeasts contained in kefir grains, and is reported to have a unique taste and unique properties. During fermentation, peptides and exopolysaccharides are formed that have been shown to have bioactive properties. Moreover, in vitro and animal trials have shown kefir and its constituents to have anticarcinogenic, antimutagenic, antiviral and antifungal properties. Although kefir has been produced and consumed in Eastern Europe for a long period of time, few clinical trials are found in the scientific literature to support the health claims attributed to kefir. The large number of microorganisms in kefir, the variety of possible bioactive compounds that could be formed during fermentation, and the long list of reputed benefits of eating kefir make this fermented dairy product a complex",
"title": ""
},
{
"docid": "4d57b0dbc36c2eb058285b4a5b6c102c",
"text": "OBJECTIVE\nThis study was planned to investigate the efficacy of neuromuscular rehabilitation and Johnstone Pressure Splints in the patients who had ataxic multiple sclerosis.\n\n\nMETHODS\nTwenty-six outpatients with multiple sclerosis were the subjects of the study. The control group (n = 13) was given neuromuscular rehabilitation, whereas the study group (n = 13) was treated with Johnstone Pressure Splints in addition.\n\n\nRESULTS\nIn pre- and posttreatment data, significant differences were found in sensation, anterior balance, gait parameters, and Expanded Disability Status Scale (p < 0.05). An important difference was observed in walking-on-two-lines data within the groups (p < 0.05). There also was a statistically significant difference in pendular movements and dysdiadakokinesia (p < 0.05). When the posttreatment values were compared, there was no significant difference between sensation, anterior balance, gait parameters, equilibrium and nonequilibrium coordination tests, Expanded Disability Status Scale, cortical onset latency, and central conduction time of somatosensory evoked potentials and motor evoked potentials (p > 0.05). Comparison of values revealed an important difference in cortical onset-P37 peak amplitude of somatosensory evoked potentials (right limbs) in favor of the study group (p < 0.05).\n\n\nCONCLUSIONS\nAccording to our study, it was determined that physiotherapy approaches were effective to decrease the ataxia. We conclude that the combination of suitable physiotherapy techniques is effective multiple sclerosis rehabilitation.",
"title": ""
},
{
"docid": "3911ba1ad7da27fb07a7198215113610",
"text": "In this article, a method of feedback data acquisition and target recognition using Kinect sensor is proposed, in order to perform position tracking and control of Robolink® articulated arm. Robolink is a lightweight flexible joint robotic manipulator, the rotational joints of which are driven by step motors through Dyneema wires. The goal of the presented experimental work was to investigate the possibility to use Robolink in fruit picking systems and other similar works. At an early stage, Lidar sensors were also used for Robolink indoor behavior experimentation. Feedback data management and control law, based on the kinematics of this lightweight flexible robotic arm are also analyzed and presented from the efficiency point of view.",
"title": ""
},
{
"docid": "0e68fbcd564e43df2b4e1866ab88e833",
"text": "This paper considers the decision-making problem for a human-driven vehicle crossing a road intersection in the presence of other, potentially errant, drivers. Our approach relies on a novel threat assessment module, which combines an intention predictor based on support vector machines with an efficient threat assessor using rapidly-exploring random trees. This module warns the host driver with the computed threat level and the corresponding best “escape maneuver” through the intersection, if the threat is sufficiently large. Through experimental results with small autonomous and human-driven vehicles, we demonstrate that this threat assessment module can be used in real-time to minimize the risk of collision.",
"title": ""
},
{
"docid": "fa34cdffb421f2c514d5bacbc6776ae9",
"text": "A review on various CMOS voltage level shifters is presented in this paper. A voltage level-shifter shifts the level of input voltage to desired output voltage. Voltage Level Shifter circuits are compared with respect to output voltage level, power consumption and delay. Systems often require voltage level translation devices to allow interfacing between integrated circuit devices built from different voltage technologies. The choice of the proper voltage level translation device depends on many factors and will affect the performance and efficiency of the circuit application.",
"title": ""
},
{
"docid": "7c54cef80d345cdb10f56ca440f5fad9",
"text": "SIR, Arndt–Gottron scleromyxoedema is a rare fibromucinous disorder regarded as a variant of the lichen myxoedematosus. The diagnostic criteria are a generalized papular and sclerodermoid eruption, a microscopic triad of mucin deposition, fibroblast proliferation and fibrosis, a monoclonal gammopathy (mostly IgG-k paraproteinaemia) and the absence of a thyroid disorder. This disease initially presents with sclerosis of the skin and clusters of small lichenoid papules with a predilection for the face, neck and the forearm. Progressively, the skin lesions can become more widespread and the induration of skin can result in a scleroderma-like condition with sclerodactyly and microstomia, reduced mobility and disability. Systemic involvement is common, e.g. upper gastrointestinal dysmotility, proximal myopathy, joint contractures, neurological complications such as psychic disturbances and encephalopathy, obstructive ⁄restrictive lung disease, as well as renal and cardiovascular involvement. Numerous treatment options have been described in the literature. These include corticosteroids, retinoids, thalidomide, extracorporeal photopheresis (ECP), psoralen plus ultraviolet A radiation, ciclosporin, cyclophosphamide, melphalan or autologous stem cell transplantation. In September 1999, a 48-year-old white female first noticed an erythematous induration with a lichenoid papular eruption on her forehead. Three months later the lesions became more widespread including her face (Fig. 1a), neck, shoulders, forearms (Fig. 2a) and legs. When the patient first presented in our department in June 2000, she had problems opening her mouth fully as well as clenching both hands or moving her wrist. The histological examination of the skin biopsy was highly characteristic of Arndt–Gottron scleromyxoedema. Full blood count, blood morphology, bone marrow biopsy, bone scintigraphy and thyroid function tests were normal. Serum immunoelectrophoresis revealed an IgG-k paraproteinaemia. Urinary Bence-Jones proteins were negative. No systemic involvement was disclosed. We initiated ECP therapy in August 2000, initially at 2-week intervals (later monthly) on two succeeding days. When there was no improvement after 3 months, we also administered cyclophosphamide (Endoxana ; Baxter Healthcare Ltd, Newbury, U.K.) at a daily dose of 100 mg with mesna 400 mg (Uromitexan ; Baxter) prophylaxis. The response to this therapy was rather moderate. In February 2003 the patient developed a change of personality and loss of orientation and was admitted to hospital. The extensive neurological, radiological and microbiological diagnostics were unremarkable at that time. A few hours later the patient had seizures and was put on artificial ventilation in an intensive care unit. The patient was comatose for several days. A repeated magnetic resonance imaging scan was still normal, but the cerebrospinal fluid tap showed a dysfunction of the blood–cerebrospinal fluid barrier. A bilateral loss of somatosensory evoked potentials was noticeable. The neurological symptoms were classified as a ‘dermatoneuro’ syndrome, a rare extracutaneous manifestation of scleromyxoedema. After initiation of treatment with methylprednisolone (Urbason ; Aventis, Frankfurt, Germany) the neurological situation normalized in the following 2 weeks. No further medical treatment was necessary. 
In April 2003 therapy options were re-evaluated and the patient was started and maintained on a 7-day course of melphalan 7.5 mg daily (Alkeran ; GlaxoSmithKline, Uxbridge, U.K.) in combination with prednisolone 40 mg daily (Decortin H ; Merck, Darmstadt, Germany) every 6 weeks. This treat(a)",
"title": ""
},
{
"docid": "1886f5d95b1db7c222bc23770835e2b7",
"text": "Signature files and inverted files are well-known index structures. In this paper we undertake a direct comparison of the two for searching for partially-specified queries in a large lexicon stored in main memory. Using n-grams to index lexicon terms, a bit-sliced signature file can be compressed to a smaller size than an inverted file if each n-gram sets only one bit in the term signature. With a signature width less than half the number of unique n-grams in the lexicon, the signature file method is about as fast as the inverted file method, and significantly smaller. Greater flexibility in memory usage and faster index generation time make signature files appropriate for searching large lexicons or other collections in an environment where memory is at a premium.",
"title": ""
},
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
},
{
"docid": "0226f16f4900bab76cf7ef71c9b55eb5",
"text": "The ability to anticipate the future is essential when making real time critical decisions, provides valuable information to understand dynamic natural scenes, and can help unsupervised video representation learning. State-of-art video prediction is based on complex architectures that need to learn large numbers of parameters, are potentially hard to train, slow to run, and may produce blurry predictions. In this paper, we introduce DYAN, a novel network with very few parameters and easy to train, which produces accurate, high quality frame predictions, faster than previous approaches. DYAN owes its good qualities to its encoder and decoder, which are designed following concepts from systems identification theory and exploit the dynamics-based invariants of the data. Extensive experiments using several standard video datasets show that DYAN is superior generating frames and that it generalizes well across domains.",
"title": ""
}
] |
scidocsrr
|
79a8b4f2ab81b9bd36b436e347f80ed7
|
Consumer perception of interface quality, security, and loyalty in electronic commerce
|
[
{
"docid": "b44600830a6aacd0a1b7ec199cba5859",
"text": "Existing e-service quality scales mainly focus on goal-oriented e-shopping behavior excluding hedonic quality aspects. As a consequence, these scales do not fully cover all aspects of consumer's quality evaluation. In order to integrate both utilitarian and hedonic e-service quality elements, we apply a transaction process model to electronic service encounters. Based on this general framework capturing all stages of the electronic service delivery process, we develop a transaction process-based scale for measuring service quality (eTransQual). After conducting exploratory and confirmatory factor analysis, we identify five discriminant quality dimensions: functionality/design, enjoyment, process, reliability and responsiveness. All extracted dimensions of eTransQual show a significant positive impact on important outcome variables like perceived value and customer satisfaction. Moreover, enjoyment is a dominant factor in influencing both relationship duration and repurchase intention as major drivers of customer lifetime value. As a result, we present conceptual and empirical evidence for the need to integrate both utilitarian and hedonic e-service quality elements into one measurement scale. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "3da6fadaf2363545dfd0cea87fe2b5da",
"text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. © 2004 Wiley Periodicals, Inc. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have Psychology & Marketing, Vol. 21(10):799–822 (October 2004) Published online in Wiley InterScience (www.interscience.wiley.com) © 2004 Wiley Periodicals, Inc. DOI: 10.1002/mar.20030",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "18e2a2c5c213ae1e0e73f0fca3243d55",
"text": "In the past 20 years we have learned a great deal about GABAA receptor (GABAAR) subtypes, and which behaviors are regulated or which drug effects are mediated by each subtype. However, the question of where GABAARs involved in specific drug effects and behaviors are located in the brain remains largely unanswered. We review here recent studies taking a circuit pharmacology approach to investigate the functions of GABAAR subtypes in specific brain circuits controlling fear, anxiety, learning, memory, reward, addiction, and stress-related behaviors. The findings of these studies highlight the complexity of brain inhibitory systems and the importance of taking a subtype-, circuit-, and neuronal population-specific approach to develop future therapeutic strategies using cell type-specific drug delivery.",
"title": ""
},
{
"docid": "80a5eaec904b8412cebfe17e392e448a",
"text": "Distributional semantic models learn vector representations of words through the contexts they occur in. Although the choice of context (which often takes the form of a sliding window) has a direct influence on the resulting embeddings, the exact role of this model component is still not fully understood. This paper presents a systematic analysis of context windows based on a set of four distinct hyperparameters. We train continuous SkipGram models on two English-language corpora for various combinations of these hyper-parameters, and evaluate them on both lexical similarity and analogy tasks. Notable experimental results are the positive impact of cross-sentential contexts and the surprisingly good performance of right-context windows.",
"title": ""
},
{
"docid": "98d3dddfca32c442f6b7c0a6da57e690",
"text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.",
"title": ""
},
{
"docid": "8dba7b19c15cbb04965ac483b7660ec9",
"text": "Deep Belief Networks (DBN) have been successful in classification especially image recognition tasks. However, the performance of a DBN is often highly dependent on settings in particular the combination of runtime parameter values. In this work, we propose a hyper-heuristic based framework which can optimise DBNs independent from the problem domain. It is the first time hyper-heuristic entering this domain. The framework iteratively selects suitable heuristics based on a heuristic set, apply the heuristic to tune the DBN to better fit with the current search space. Under this framework the setting of DBN learning is adaptive. Three well-known image reconstruction benchmark sets were used for evaluating the performance of this new approach. Our experimental results show this hyper-heuristic approach can achieve high accuracy under different scenarios on diverse image sets. In addition state-of-the-art meta-heuristic methods for tuning DBN were introduced for comparison. The results illustrate that our hyper-heuristic approach can obtain better performance on almost all test cases.",
"title": ""
},
{
"docid": "5b630705e4f90e1e845ff81df079cf13",
"text": "feature extraction for text classification Göksel BİRİCİK∗, Banu DİRİ, Ahmet Coşkun SÖNMEZ Department of Computer Engineering, Yıldız Technical University, Esenler, İstanbul-TURKEY e-mails: {goksel,banu,acsonmez}@ce.yildiz.edu.tr Received: 03.02.2011 Abstract Feature selection and extraction are frequently used solutions to overcome the curse of dimensionality in text classification problems. We introduce an extraction method that summarizes the features of the document samples, where the new features aggregate information about how much evidence there is in a document, for each class. We project the high dimensional features of documents onto a new feature space having dimensions equal to the number of classes in order to form the abstract features. We test our method on 7 different text classification algorithms, with different classifier design approaches. We examine performances of the classifiers applied on standard text categorization test collections and show the enhancements achieved by applying our extraction method. We compare the classification performance results of our method with popular and well-known feature selection and feature extraction schemes. Results show that our summarizing abstract feature extraction method encouragingly enhances classification performances on most of the classifiers when compared with other methods.Feature selection and extraction are frequently used solutions to overcome the curse of dimensionality in text classification problems. We introduce an extraction method that summarizes the features of the document samples, where the new features aggregate information about how much evidence there is in a document, for each class. We project the high dimensional features of documents onto a new feature space having dimensions equal to the number of classes in order to form the abstract features. We test our method on 7 different text classification algorithms, with different classifier design approaches. We examine performances of the classifiers applied on standard text categorization test collections and show the enhancements achieved by applying our extraction method. We compare the classification performance results of our method with popular and well-known feature selection and feature extraction schemes. Results show that our summarizing abstract feature extraction method encouragingly enhances classification performances on most of the classifiers when compared with other methods.",
"title": ""
},
{
"docid": "28d75588fdb4ff45929da124b001e8cc",
"text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch",
"title": ""
},
{
"docid": "1cdd599b49d9122077a480a75391aae8",
"text": "Two aspects of children's early gender development-the spontaneous production of gender labels and gender-typed play-were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children's gender labeling as based on mothers' biweekly telephone interviews regarding their children's language from 9 through 21 months. Videotapes of children's play both alone and with mother during home visits at 17 and 21 months were independently analyzed for play with gender-stereotyped and gender-neutral toys. Finally, the relation between gender labeling and gender-typed play was examined. Children transitioned to using gender labels at approximately 19 months, on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in gender-typed play, suggesting that knowledge of gender categories might influence gender typing before the age of 2.",
"title": ""
},
{
"docid": "0506a7f5dddf874487c90025dff0bc7d",
"text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.",
"title": ""
},
{
"docid": "34992b86a8ac88c5f5bbca770954ae61",
"text": "Entity search over text corpora is not geared for relationship queries where answers are tuples of related entities and where a query often requires joining cues from multiple documents. With large knowledge graphs, structured querying on their relational facts is an alternative, but often suffers from poor recall because of mismatches between user queries and the knowledge graph or because of weakly populated relations.\n This paper presents the TriniT search engine for querying and ranking on extended knowledge graphs that combine relational facts with textual web contents. Our query language is designed on the paradigm of SPO triple patterns, but is more expressive, supporting textual phrases for each of the SPO arguments. We present a model for automatic query relaxation to compensate for mismatches between the data and a user's query. Query answers -- tuples of entities -- are ranked by a statistical language model. We present experiments with different benchmarks, including complex relationship queries, over a combination of the Yago knowledge graph and the entity-annotated ClueWeb'09 corpus.",
"title": ""
},
{
"docid": "fbbc7080f9c235c3f696f6fb78714771",
"text": "Powered exoskeletons can provide motion enhancement for both healthy and physically challenged people. Compared with lower limb exoskeletons, upper limb exoskeletons are required to have multiple degrees-of-freedom and can still produce sufficient augmentation force. Designs using serial mechanisms usually result in complicated and bulky exoskeletons that prevent themselves from being wearable. This paper presents a new exoskeleton aimed to achieve compactness and wearability. We consider a shoulder exoskeleton that consists of two spherical mechanisms with two slider crank mechanisms. The actuators can be made stationary and attached side-by-side, close to a human body. Thus better inertia properties can be obtained while maintaining lightweight. The dimensions of the exoskeleton are synthesized to achieve maximum output force. Through illustrations of a prototype, the exoskeleton is shown to be wearable and can provide adequate motion enhancement of a human's upper limb.",
"title": ""
},
{
"docid": "79685eeb67edbb3fbb6e6340fac420c3",
"text": "Fatma Özcan IBM Almaden Research Center San Jose, CA fozcan@us.ibm.com Nesime Tatbul Intel Labs and MIT Cambridge, MA tatbul@csail.mit.edu Daniel J. Abadi Yale University New Haven, CT dna@cs.yale.edu Marcel Kornacker Cloudera San Francisco, CA marcel@cloudera.com C Mohan IBM Almaden Research Center San Jose, CA cmohan@us.ibm.com Karthik Ramasamy Twitter, Inc. San Francisco, CA karthik@twitter.com Janet Wiener Facebook, Inc. Menlo Park, CA jlw@fb.com",
"title": ""
},
{
"docid": "e42805b57fa2f8f95d03fea8af2e8560",
"text": "Models are used in a variety of fields, including land change science, to better understand the dynamics of systems, to develop hypotheses that can be tested empirically, and to make predictions and/or evaluate scenarios for use in assessment activities. Modeling is an important component of each of the three foci outlined in the science plan of the Land-use and -cover change (LUCC) project (Turner et al. 1995) of the International Geosphere-Biosphere Program (IGBP) and the International Human Dimensions Program (IHDP). In Focus 1, on comparative land-use dynamics, models are used to help improve our understanding of the dynamics of land-use that arise from human decision-making at all levels, households to nations. These models are supported by surveys and interviews of decision makers. Focus 2 emphasizes development of empirical diagnostic models based on aerial and satellite observations of spatial and temporal land-cover dynamics. Finally, Focus 3 focuses specifically on the development of models of land-use and -cover change (LUCC) that can be used for prediction and scenario generation in the context of integrative assessments of global change.",
"title": ""
},
{
"docid": "a525ba232412bcab7885c54ae7932fa3",
"text": "Deep recurrent neural networks have been successfully applied to knowledge tracing, namely, deep knowledge tracing (DKT), which aims to automatically trace students’ knowledge states by mining their exercise performance data. Two main issues exist in the current DKT models: First, the complexity of the DKT models increases the tension of psychological interpretation. Second, the input of existing DKT models is only the exercise tags representing via one-hot encoding. The correlation between the hidden knowledge components and students’ responses to the exercises heavily relies on training the DKT models. The existing rich and informative features are excluded in the training, which may yield sub-optimal performance. To utilize the information embedded in these features, researchers have proposed a manual method to pre-process the features, i.e., discretizing them based on the inner characteristics of individual features. However, the proposed method requires many feature engineering efforts and is infeasible when the selected features are huge. To tackle the above issues, we design an automatic system to embed the heterogeneous features implicitly and effectively into the original DKT model. More specifically, we apply tree-based classifiers to predict whether the student can correctly answer the exercise given the heterogeneous features, an effective way to capture how the student deviates from others in the exercise. The predicted response and the true response are then encoded into a 4-bit one-hot encoding and concatenated with the original one-hot encoding features on the exercise tags to train a long short-term memory (LSTM) model, which can output the probability that a student will answer the exercise correctly on the corresponding exercise. We conduct a thorough evaluation on two educational datasets and demonstrate the merits and observations of our proposal.",
"title": ""
},
{
"docid": "1fd87c65968630b6388985a41b7890ce",
"text": "Cyber Defense Exercises have received much attention in recent years, and are increasingly becoming the cornerstone for ensuring readiness in this new domain. Crossed Swords is an exercise directed at training Red Team members for responsive cyber defense. However, prior iterations have revealed the need for automated and transparent real-time feedback systems to help participants improve their techniques and understand technical challenges. Feedback was too slow and players did not understand the visibility of their actions. We developed a novel and modular open-source framework to address this problem, dubbed Frankenstack. We used this framework during Crossed Swords 2017 execution and evaluated its effectiveness by interviewing participants and conducting an online survey. Due to the novelty of Red Team-centric exercises, very little academic research exists on providing real-time feedback during such exercises. Thus, this paper serves as a first foray into a novel research field.",
"title": ""
},
{
"docid": "783e003838f327c9cabe128b965dfe4d",
"text": "To assess original research addressing the effect of the application of compression clothing on sport performance and recovery after exercise, a computer-based literature research was performed in July 2011 using the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science. Studies examining the effect of compression clothing on endurance, strength and power, motor control, and physiological, psychological, and biomechanical parameters during or after exercise were included, and means and measures of variability of the outcome measures were recorded to estimate the effect size (Hedges g) and associated 95% confidence intervals for comparisons of experimental (compression) and control trials (noncompression). The characteristics of the compression clothing, participants, and study design were also extracted. The original research from peer-reviewed journals was examined using the Physiotherapy Evidence Database (PEDro) Scale. Results indicated small effect sizes for the application of compression clothing during exercise for short-duration sprints (10-60 m), vertical-jump height, extending time to exhaustion (such as running at VO2max or during incremental tests), and time-trial performance (3-60 min). When compression clothing was applied for recovery purposes after exercise, small to moderate effect sizes were observed in recovery of maximal strength and power, especially vertical-jump exercise; reductions in muscle swelling and perceived muscle pain; blood lactate removal; and increases in body temperature. These results suggest that the application of compression clothing may assist athletic performance and recovery in given situations with consideration of the effects magnitude and practical relevance.",
"title": ""
},
{
"docid": "b466803c9a9be5d38171ece8d207365e",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
{
"docid": "bc6cbf7da118c01d74914d58a71157ac",
"text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.",
"title": ""
},
{
"docid": "05dc76d17fea57d22de982f9590e386b",
"text": "Hierarchical multi-label classification assigns a document to multiple hierarchical classes. In this paper we focus on hierarchical multi-label classification of social text streams. Concept drift, complicated relations among classes, and the limited length of documents in social text streams make this a challenging problem. Our approach includes three core ingredients: short document expansion, time-aware topic tracking, and chunk-based structural learning. We extend each short document in social text streams to a more comprehensive representation via state-of-the-art entity linking and sentence ranking strategies. From documents extended in this manner, we infer dynamic probabilistic distributions over topics by dividing topics into dynamic \"global\" topics and \"local\" topics. For the third and final phase we propose a chunk-based structural optimization strategy to classify each document into multiple classes. Extensive experiments conducted on a large real-world dataset show the effectiveness of our proposed method for hierarchical multi-label classification of social text streams.",
"title": ""
},
{
"docid": "e33dd9c497488747f93cfcc1aa6fee36",
"text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.",
"title": ""
},
{
"docid": "2f7b1f2422526d99e75dce7d38665774",
"text": "Conventional Open Information Extraction (Open IE) systems are usually built on hand-crafted patterns from other NLP tools such as syntactic parsing, yet they face problems of error propagation. In this paper, we propose a neural Open IE approach with an encoder-decoder framework. Distinct from existing methods, the neural Open IE approach learns highly confident arguments and relation tuples bootstrapped from a state-of-the-art Open IE system. An empirical study on a large benchmark dataset shows that the neural Open IE system significantly outperforms several baselines, while maintaining comparable computational efficiency.",
"title": ""
}
] |
scidocsrr
|
2a74f9acdbdc09a26ccf8a575f9eb691
|
Evaluating and analyzing the performance of RPL in contiki
|
[
{
"docid": "4b9695da76b4ab77139549a4b444dae7",
"text": "Wireless Sensor Network (WSN) is one of the key technologies of 21st century, while it is a very active and challenging research area. It seems that in the next coming year, thanks to 6LoWPAN, these wireless micro-sensors will be embedded in everywhere, because 6LoWPAN enables P2P connection between wireless nodes over IPv6. Nowadays different implementations of 6LoWPAN stacks are available so it is interesting to evaluate their performance in term of memory footprint and compliant with the RFC4919 and RFC4944. In this paper, we present a survey on the state-of-art of the current implementation of 6LoWPAN stacks such as uIP/Contiki, SICSlowpan, 6lowpancli, B6LoWPAN, BLIP, NanoStack and Jennic's stack. The key features of all these 6LoWPAN stacks will be established. Finally, we discuss the evolution of the current implementations of 6LoWPAN stacks.",
"title": ""
},
{
"docid": "a231d6254a136a40625728d7e14d7844",
"text": "This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract This document describes the frame format for transmission of IPv6 packets and the method of forming IPv6 link-local addresses and statelessly autoconfigured addresses on IEEE 802.15.4 networks. Additional specifications include a simple header compression scheme using shared context and provisions for packet delivery in IEEE 802.15.4 meshes.",
"title": ""
},
{
"docid": "fdd998012aa9b76ba9fe4477796ddebb",
"text": "Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power efficient wake-up mechanism with a set of timing constraints to allow device to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99% of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.",
"title": ""
}
] |
[
{
"docid": "4a572df21f3a8ebe3437204471a1fd10",
"text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.",
"title": ""
},
{
"docid": "882f463d187854967709c95ecd1d2fc1",
"text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. We utilize different resolutions of feature maps in the network to detect object instances of various sizes. Specifically, we divide the anchor candidates into three clusters based on the scale size and place them on feature maps of distinct strides to detect small, medium and large objects, respectively. Deeper feature maps contain region-level semantics which can help shallow counterparts to identify small objects. Therefore we design a zoom-in sub-network to increase the resolution of high level features via a deconvolution operation. The high-level features with high resolution are then combined and merged with low-level features to detect objects. Furthermore, we devise a recursive training pipeline to consecutively regress region proposals at the training stage in order to match the iterative regression at the testing stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET and MS COCO datasets, where our algorithm performs better than the state-of-the-arts in various evaluation metrics. It also increases average precision by around 2% in the detection system.",
"title": ""
},
{
"docid": "6dc4cefb15977ba4b4f33f7ce792196a",
"text": "Fuel cells convert chemical energy directly into electrical energy with high efficiency and low emission of pollutants. However, before fuel-cell technology can gain a significant share of the electrical power market, important issues have to be addressed. These issues include optimal choice of fuel, and the development of alternative materials in the fuel-cell stack. Present fuel-cell prototypes often use materials selected more than 25 years ago. Commercialization aspects, including cost and durability, have revealed inadequacies in some of these materials. Here we summarize recent progress in the search and development of innovative alternative materials.",
"title": ""
},
{
"docid": "704611db1aea020103b093a2156cd94d",
"text": "With the growing number of wearable devices and applications, there is an increasing need for a flexible body channel communication (BCC) system that supports both scalable data rate and low power operation. In this paper, a highly flexible frequency-selective digital transmission (FSDT) transmitter that supports both data scalability and low power operation with the aid of two novel implementation methods is presented. In an FSDT system, data rate is limited by the number of Walsh spreading codes available for use in the optimal body channel band of 40-80 MHz. The first method overcomes this limitation by applying multi-level baseband coding scheme to a carrierless FSDT system to enhance the bandwidth efficiency and to support a data rate of 60 Mb/s within a 40-MHz bandwidth. The proposed multi-level coded FSDT system achieves six times higher data rate as compared to other BCC systems. The second novel implementation method lies in the use of harmonic frequencies of a Walsh encoded FSDT system that allows the BCC system to operate in the optimal channel bandwidth between 40-80 MHz with half the clock frequency. Halving the clock frequency results in a power consumption reduction of 32%. The transmitter was fabricated in a 65-nm CMOS process. It occupies a core area of 0.24 × 0.3 mm 2. When operating under a 60-Mb/s data-rate mode, the transmitter consumes 1.85 mW and it consumes only 1.26 mW when operating under a 5-Mb/s data-rate mode.",
"title": ""
},
{
"docid": "c95894477d7279deb7ddbb365030c34e",
"text": "Among mammals living in social groups, individuals form communication networks where they signal their identity and social status, facilitating social interaction. In spite of its importance for understanding of mammalian societies, the coding of individual-related information in the vocal signals of non-primate mammals has been relatively neglected. The present study focuses on the spotted hyena Crocuta crocuta, a social carnivore known for its complex female-dominated society. We investigate if and how the well-known hyena's laugh, also known as the giggle call, encodes information about the emitter. By analyzing acoustic structure in both temporal and frequency domains, we show that the hyena's laugh can encode information about age, individual identity and dominant/subordinate status, providing cues to receivers that could enable assessment of the social position of an emitting individual. The range of messages encoded in the hyena's laugh is likely to play a role during social interactions. This call, together with other vocalizations and other sensory channels, should ensure an array of communication signals that support the complex social system of the spotted hyena. Experimental studies are now needed to decipher precisely the communication network of this species.",
"title": ""
},
{
"docid": "e54a0387984553346cf718a6fbe72452",
"text": "Learning distributed representations for relation instances is a central technique in downstream NLP applications. In order to address semantic modeling of relational patterns, this paper constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns on the existing dataset (Zeichner et al., 2012). In addition, we conduct a comparative study of different encoders including additive composition, RNN, LSTM, and GRU for composing distributed representations of relational patterns. We also present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. Experiments show that the new dataset does not only enable detailed analyses of the different encoders, but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task.",
"title": ""
},
{
"docid": "49e91d22adb0cdeb014b8330e31f226d",
"text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.",
"title": ""
},
{
"docid": "5d15ba47aaa29f388328824fa592addc",
"text": "Breast cancer continues to be a significant public health problem in the world. The diagnosing mammography method is the most effective technology for early detection of the breast cancer. However, in some cases, it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications on the mammograms. This paper describes a new method for mammographic image enhancement and denoising based on wavelet transform and homomorphic filtering. The mammograms are acquired from the Faculty of Medicine of the University of Akdeniz and the University of Istanbul in Turkey. Firstly wavelet transform of the mammograms is obtained and the approximation coefficients are filtered by homomorphic filter. Then the detail coefficients of the wavelet associated with noise and edges are modeled by Gaussian and Laplacian variables, respectively. The considered coefficients are compressed and enhanced using these variables with a shrinkage function. Finally using a proposed adaptive thresholding the fine details of the mammograms are retained and the noise is suppressed. The preliminary results of our work indicate that this method provides much more visibility for the suspicious regions.",
"title": ""
},
{
"docid": "e34815efa68cb1b7a269e436c838253d",
"text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).",
"title": ""
},
{
"docid": "22c3f3b7658f93030601ab22e5028d1f",
"text": "This paper presents a new 3D shape representation and classification methodology developed for use in craniofacial dysmorphology studies. The methodology computes low-level features at each point of a 3D mesh representation, aggregates the features into histograms over mesh neighborhoods, learns the characteristics of salient point histograms for each particular application, and represents the points in a 2D spatial map based on a longitude–latitude transformation. Experimental results on the medical classification tasks show that our methodology achieves higher classification accuracy compared to medical experts and existing state-of-the-art 3D descriptors. Additional experimental results highlight the strength and advantage of the flexible framework that allows the methodology to generalize from specific medical classification tasks to general 3D object classification tasks. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "033d7d924481a9429c03bb4bcc7b12fc",
"text": "BACKGROUND\nThis study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection.\n\n\nMETHODS\n42 students volunteered to participate to the study about HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed as a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed by using Poincaré Plot, Approximate Entropy, Correlation dimension, Detrended Fluctuation Analysis, Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test and for development of a classifier we adopted the Linear Discriminant Analysis (LDA).\n\n\nRESULTS\nAlmost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, a sensitivity and a specificity rate of 90%, 86%, and 95% respectively.\n\n\nCONCLUSIONS\nThe results of the current study suggest that nonlinear HRV analysis using short term ECG recording could be effective in automatically detecting real-life stress condition, such as a university examination.",
"title": ""
},
{
"docid": "94cf1976c10d632cfce12ce3f32be4cc",
"text": "In today’s economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, investigative support, provider lock-in and disaster recovery. We focus on risk and control analysis in relation to a sample of Swiss companies with regard to their prospective adoption of public cloud services. We observe a sufficient degree of risk awareness with a focus on those risks that are relevant to the IT function to be migrated to the cloud. Moreover, the recommendations as to the adoption of cloud services depend on the company’s size with larger and more technologically advanced companies being better prepared for the cloud. As an exploratory first step, the results of this study would allow us to design and implement broader research into cloud computing risk management in Switzerland.",
"title": ""
},
{
"docid": "c1ddefd126c6d338c4cd9238e9067435",
"text": "Tensor networks are efficient representations of high-dimensional tensors which have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing such networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize models for classifying images. For the MNIST data set we obtain less than 1% test set classification error. We discuss how the tensor network form imparts additional structure to the learned model and suggest a possible generative interpretation.",
"title": ""
},
{
"docid": "f97093a848329227f363a8a073a6334a",
"text": "With the increasing in mobile application systems and a high competition between companies, that led to increase in the number of mobile application projects. Mobile software development is a group of process for creating software for mobile devices with limited resources like small screen, low-power. The development of mobile applications is a big challenging because of rapidly changing business requirements and technical constraints for mobile systems. So, developers faced the challenge of a dynamic environment and the Changing of mobile application requirements. Moreover, Mobile applications should adapt appropriate software development methods that act in response efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers ,Agile methodologies was found to be most suitable for mobile development projects as they are short time, require flexibility, reduces waste and time to market. Finally, in this research we are looking for a suitable process model that conforms to the requirement of mobile application, we are going to investigate agile development methods to find a way, making the development of mobile application easy and compatible with mobile device features.",
"title": ""
},
{
"docid": "debea8166d89cd6e43d1b6537658de96",
"text": "The emergence of SDNs promises to dramatically simplify network management and enable innovation through network programmability. Despite all the hype surrounding SDNs, exploiting its full potential is demanding. Security is still the key concern and is an equally striking challenge that reduces the growth of SDNs. Moreover, the deployment of novel entities and the introduction of several architectural components of SDNs pose new security threats and vulnerabilities. Besides, the landscape of digital threats and cyber-attacks is evolving tremendously, considering SDNs as a potential target to have even more devastating effects than using simple networks. Security is not considered as part of the initial SDN design; therefore, it must be raised on the agenda. This article discusses the state-of-the-art security solutions proposed to secure SDNs. We classify the security solutions in the literature by presenting a thematic taxonomy based on SDN layers/interfaces, security measures, simulation environments, and security objectives. Moreover, the article points out the possible attacks and threat vectors targeting different layers/interfaces of SDNs. The potential requirements and their key enablers for securing SDNs are also identified and presented. Also, the article gives great guidance for secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that may be deemed appropriate to be tackled by researchers and professionals in the future.",
"title": ""
},
{
"docid": "0ec0af632612fbbc9b4dba1aa8843590",
"text": "The diversity in web object types and their resource requirements contributes to the unpredictability of web service provisioning. In this paper, an eÆcient admission control algorithm, PACERS, is proposed to provide di erent levels of services based on the server workload characteristics. Service quality is ensured by periodical allocation of system resources based on the estimation of request rate and service requirements of prioritized tasks. Admission of lower priority tasks is restricted during high load periods to prevent denial-of-services to high priority tasks. A doublequeue structure is implemented to reduce the e ects of estimation inaccuracy and to utilize the spare capacity of the server, thus increasing the system throughput. Response delays of the high priority tasks are bounded by the length of the prediction period. Theoretical analysis and experimental study show that the PACERS algorithm provides desirable throughput and bounded response delay to the prioritized tasks, without any signi cant impact on the aggregate throughput of the system under various workload.",
"title": ""
},
{
"docid": "f30d5e78d169868484eca015d946bd88",
"text": "In Hong Kong and Macao, horse racing is the most famous gambling with a long history. This study proposes a novel approach to predict the horse racing results in Hong Kong. A three-years-long race records dataset obtained from Hong Kong Jockey Club was used for training a support-vector-machine-based committee machine. Bet suggestions could be made to gamblers by studying previous data though machine learning. In experiment, there are 2691 races and 33532 horse records obtained. Experiments focus on accuracy and return rate were conducted separately through constructing a committee machine. Experimental results showed that the accuracy and return rate achieve 70.86% and 800,000% respectively.",
"title": ""
},
{
"docid": "7eed5e11e47807a3ff0af21461e88385",
"text": "We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). Each kernel learns a location of specialization along with its weights through standard backpropagation. A differentiable attention mechanism requiring no additional supervision is used to optimize the ROIs. Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. By restricting kernel ROIs, we reduce the number of sliding convolutional operations performed throughout the network in its forward pass, speeding up both training and inference. We evaluate our proposed architecture on both synthetic and natural tasks across multiple domains. TKNs obtain significant improvements over baselines, requiring less computation (around an order of magnitude) while achieving superior performance.",
"title": ""
},
{
"docid": "3dd755e5041b2b61ef63f65c7695db27",
"text": "The class imbalance problem is encountered in a large number of practical applications of machine learning and data mining, for example, information retrieval and filtering, and the detection of credit card fraud. It has been widely realized that this imbalance raises issues that are either nonexistent or less severe compared to balanced class cases and often results in a classifier's suboptimal performance. This is even more true when the imbalanced data are also high dimensional. In such cases, feature selection methods are critical to achieve optimal performance. In this paper, we propose a new feature selection method, Feature Assessment by Sliding Thresholds (FAST), which is based on the area under a ROC curve generated by moving the decision boundary of a single feature classifier with thresholds placed using an even-bin distribution. FAST is compared to two commonly-used feature selection methods, correlation coefficient and RELevance In Estimating Features (RELIEF), for imbalanced data classification. The experimental results obtained on text mining, mass spectrometry, and microarray data sets showed that the proposed method outperformed both RELIEF and correlation methods on skewed data sets and was comparable on balanced data sets; when small number of features is preferred, the classification performance of the proposed method was significantly improved compared to correlation and RELIEF-based methods.",
"title": ""
},
{
"docid": "a6e71e4be58c51b580fcf08e9d1a100a",
"text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. In order to be able to process text streams within event driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis were described using eventdriven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event driven text analysis systems was derived that support the design and implementation of such systems. Subsequently, a prototype was built using the theoretic foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding window based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advise which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.",
"title": ""
}
] |
scidocsrr
|
4a60fd48fb66eff7e155c0da3aafaa7d
|
A tractable numerical strategy for robust MILP and application to energy management
|
[
{
"docid": "84fa9ef68619e8237d6852a21cef5ae5",
"text": "We consider least-squares problems where the coefficient matrices A, b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of solution and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial-time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation.",
"title": ""
},
{
"docid": "61e07fb9454e7ec2e21e779a70b001c7",
"text": "Optimal solutions of Linear Programming problems may become severely infeasible if the nominal data is slightly perturbed. We demonstrate this phenomenon by studying 90 LPs from the well-known NETLIB collection. We then apply the Robust Optimization methodology (Ben-Tal and Nemirovski [1–3]; El Ghaoui et al. [5,6]) to produce “robust” solutions of the above LPs which are in a sense immuned against uncertainty. Surprisingly, for the NETLIB problems these robust solutions nearly lose nothing in optimality.",
"title": ""
},
{
"docid": "f292b8666eb78e4d881777fee35123f7",
"text": "Abstract. We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0 − 1 discrete optimization problem on n variables, then we solve the robust counterpart by solving at most n + 1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0 − 1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP -hard α-approximable 0 − 1 discrete optimization problem, remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.",
"title": ""
}
] |
[
{
"docid": "585445a760077e18a3e35d6916265514",
"text": "This paper offers a review of the literature on labour turnover in organizations. Initially, the importance of the subject area is established, as analyses of turnover are outlined and critiqued.This leads toadiscussionof thevariousways inwhich turnover and its consequences are measured. The potentially critical impact of turnover behaviour on organizational effectiveness is presented as justification for the need to model turnover, as a precursor to prediction and prevention. Key models from the literature of labour turnover are presented and critiqued.",
"title": ""
},
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
},
{
"docid": "f9f2903053946fba6133bb0f266acf42",
"text": "The biomedical sciences have experienced an explosion of data which promises to overwhelm many current practitioners. Without easy access to data science training resources, biomedical researchers may find themselves unable to wrangle their own datasets. In 2014, to address the challenges posed such a data onslaught, the National Institutes of Health (NIH) launched the Big Data to Knowledge (BD2K) initiative. To this end, the BD2K Training Coordinating Center (TCC; bigdatau.org) was funded to facilitate both in-person and online learning, and open up the concepts of data science to the widest possible audience. Here, we describe the activities of the BD2K TCC and its focus on the construction of the Educational Resource Discovery Index (ERuDIte), which identifies, collects, describes, and organizes online data science materials from BD2K awardees, open online courses, and videos from scientific lectures and tutorials. ERuDIte now indexes over 9,500 resources. Given the richness of online training materials and the constant evolution of biomedical data science, computational methods applying information retrieval, natural language processing, and machine learning techniques are required - in effect, using data science to inform training in data science. In so doing, the TCC seeks to democratize novel insights and discoveries brought forth via large-scale data science training.",
"title": ""
},
{
"docid": "03625364ccde0155f2c061b47e3a00b8",
"text": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.’s system (Pantel et al., 2007).",
"title": ""
},
{
"docid": "d09dddd8a678370375c30dd14b3f2482",
"text": "Deep learning on graphs and in particular, graph convolutional neural networks, have recently attracted significant attention in the machine learning community. Many of such techniques explore the analogy between the graph Laplacian eigenvectors and the classical Fourier basis, allowing to formulate the convolution as a multiplication in the spectral domain. One of the key drawback of spectral CNNs is their explicit assumption of an undirected graph, leading to a symmetric Laplacian matrix with orthogonal eigendecomposition. In this work we propose MotifNet, a graph CNN capable of dealing with directed graphs by exploiting local graph motifs. We present experimental evidence showing the advantage of our approach on real data.",
"title": ""
},
{
"docid": "e9959661af6e90ab26604d35385f32d1",
"text": "This paper presents an enhancement transient capless low dropout voltage regulator (LDO). To eliminate the external capacitor, the miller effect is implemented through the use of a current amplifier. The proposed regulator LDO provides a load current of 50 mA with a dropout voltage of 200 mV, consuming 14μA quiescent current at light loads, and the regulated output voltage is 1.6 V with an input voltage range from 1.2 to 1.8 V. The proposed system is designed in 0.18 μm CMOS technology. A folded cascode amplifier with high transconductance and high power efficiency is proposed to improve the transient response of the LDO. In addition, multiloop feedback strategy employs a direct dynamic biasing technique to provide a high speed path during the load transient responses. The simulation results presented in this paper will be compared with other results of SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
},
{
"docid": "4100daf390502bf3e6fe5aa3c313afb8",
"text": "Visual information retrieval (VIR) is an active and vibrant research area, which attempts at providing means for organizing, indexing, annotating, and retrieving visual information (images and videos) form large, unstructured repositories. The goal of VIR is to retrieve the highest number of relevant matches to a given query (often expressed as an example image and/or a series of keywords). In its early years (1995-2000) the research efforts were dominated by content-based approaches contributed primarily by the image and video processing community. During the past decade, it was widely recognized that the challenges imposed by the semantic gap (the lack of coincidence between an image's visual contents and its semantic interpretation) required a clever use of textual metadata (in addition to information extracted from the image's pixel contents) to make image and video retrieval solutions efficient and effective. The need to bridge (or at least narrow) the semantic gap has been one of the driving forces behind current VIR research. Additionally, other related research problems and market opportunities have started to emerge, offering a broad range of exciting problems for computer scientists and engineers to work on. In this tutorial, we present an overview of visual information retrieval (VIR) concepts, techniques, algorithms, and applications. Several topics are supported by examples written in Java, using Lucene (an open-source Java-based indexing and search implementation) and LIRE (Lucene Image REtrieval), an open-source Java-based library for content-based image retrieval (CBIR) written by Mathias Lux.\n After motivating the topic, we briefly review the fundamentals of information retrieval, present the most relevant and effective visual descriptors currently used in VIR, the most common indexing approaches for visual descriptors, the most prominent machine learning techniques used in connection with contemporary VIR solutions, as well as the challenges associated with building real-world, large scale VIR solutions, including a brief overview of publicly available datasets used in worldwide challenges, contests, and benchmarks. Throughout the tutorial, we integrate examples using LIRE, whose main features and design principles are also discussed. Finally, we conclude the tutorial with suggestions for deepening the knowledge in the topic, including a brief discussion of the most relevant advances, open challenges, and promising opportunities in VIR and related areas.\n The tutorial is primarily targeted at experienced Information Retrieval researchers and practitioners interested in extending their knowledge of document-based IR to equivalent concepts, techniques, and challenges in VIR. The acquired knowledge should allow participants to derive insightful conclusions and promising avenues for further investigation.",
"title": ""
},
{
"docid": "e0a2031394922edec46eaac60c473358",
"text": "In-wheel-motor drive electric vehicle (EV) is an innovative configuration, in which each wheel is driven individually by an electric motor. It is possible to use an electronic differential (ED) instead of the heavy mechanical differential because of the fast response time of the motor. A new ED control approach for a two-in-wheel-motor drive EV is devised based on the fuzzy logic control method. The fuzzy logic method employs to estimate the slip rate of each wheel considering the complex and nonlinear of the system. Then, the ED system distributes torque and power to each motor according to requirements. The effectiveness and validation of the proposed control method are evaluated in the Matlab/Simulink environment. Simulation results show that the new ED control system can keep the slip rate within the optimized range, ensuring the stability of the vehicle either in a straight or a curve lane.",
"title": ""
},
{
"docid": "d2305c7218a9e2bb52c7b9828bb8cdb4",
"text": "The World Wide Web, and online social networks in particular, have increased connectivity between people such that information can spread to millions of people in a matter of minutes. This form of online collective contagion has provided many benefits to society, such as providing reassurance and emergency management in the immediate aftermath of natural disasters. However, it also poses a potential risk to vulnerable Web users who receive this information and could subsequently come to harm. One example of this would be the spread of suicidal ideation in online social networks, about which concerns have been raised. In this paper we report the results of a number of machine classifiers built with the aim of classifying text relating to suicide on Twitter. The classifier distinguishes between the more worrying content, such as suicidal ideation, and other suicide-related topics such as reporting of a suicide, memorial, campaigning and support. It also aims to identify flippant references to suicide. We built a set of baseline classifiers using lexical, structural, emotive and psychological features extracted from Twitter posts. We then improved on the baseline classifiers by building an ensemble classifier using the Rotation Forest algorithm and a Maximum Probability voting classification decision method, based on the outcome of base classifiers. This achieved an F-measure of 0.728 overall (for 7 classes, including suicidal ideation) and 0.69 for the suicidal ideation class. We summarise the results by reflecting on the most significant predictive principle components of the suicidal ideation class to provide insight into the language used on Twitter to express suicidal ideation.",
"title": ""
},
{
"docid": "47d673d7b917f3948274f1e32a847a35",
"text": "Real-time lane detection and tracking is one of the most reliable approaches to prevent road accidents by alarming the driver of the excessive lane changes. This paper addresses the problem of correct lane detection and tracking of the current lane of a vehicle in real-time. We propose a solution that is computationally efficient and performs better than previous approaches. The proposed algorithm is based on detecting straight lines from the captured road image, marking a region of interest, filtering road marks and detecting the current lane by using the information gathered. This information is obtained by analyzing the geometric shape of the lane boundaries and the convergence point of the lane markers. To provide a feasible solution, the only sensing modality on which the algorithm depends on is the camera of an off-the-shelf mobile device. The proposed algorithm has a higher average accuracy of 96.87% when tested on the Caltech Lanes Dataset as opposed to the state-of-the-art technology for lane detection. The algorithm operates on three frames per second on a 2.26 GHz quad-core processor of a mobile device with an image resolution of 640×480 pixels. It is tested and verified under various visibility and road conditions.",
"title": ""
},
{
"docid": "6ebce4adb3693070cac01614078d68fc",
"text": "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.",
"title": ""
},
{
"docid": "b6f32f675e1a9209aba6f361ecdd9a37",
"text": "Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data. In this paper, we show that augmenting training data with sentences containing artificially-introduced grammatical errors can make the system more robust to such errors. In combination with an automatic grammar error correction system, we can recover 1.9 BLEU out of 3.1 BLEU lost due to grammatical errors. We also present a set of Spanish translations of the JFLEG grammar error correction corpus, which allows for testing NMT robustness to real grammatical errors.",
"title": ""
},
{
"docid": "a0a9785ee7688a601e678b4b8d40cb91",
"text": "We present a light-weight machine learning tool for NLP research. The package supports operations on both discrete and dense vectors, facilitating implementation of linear models as well as neural models. It provides several basic layers which mainly aims for single-layer linear and non-linear transformations. By using these layers, we can conveniently implement linear models and simple neural models. Besides, this package also integrates several complex layers by composing those basic layers, such as RNN, Attention Pooling, LSTM and gated RNN. Those complex layers can be used to implement deep neural models directly.",
"title": ""
},
{
"docid": "207d3e95d3f04cafa417478ed9133fcc",
"text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. Information on urban growth, land use and land cover change study is very useful to local government and urban planners for the betterment of future plans of sustainable development of the city. 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "99c87b7b01f1bb42b987104b5cb4341f",
"text": "The development of future software for proposed new computer architectures is expected to make significant demands on compiler technologies. Significant rewriting of applications will be likely to support the use of new hardware features and preserve the current software investment. Existing compilers provide little support for the broad range of research efforts addressing exascale challenges, such as parallelism, locality, resiliency, power efficiency, etc. The economics of how new machines will change an existing code base of software, that is too expensive to manually rewrite, may well drive automated mechanisms to transform existing software to take advantage of future machine features. This approach will lessen the cost and delay of the move to new, and possibly radically different, future architectures. Source-to-source compilers provides a pragmatic vehicle to support research, development, and deployment of novel compiler technologies by compiler experts or even advanced application developers. Within a source-to-source approach the input source code is read by the compiler, an internal representation (IR) is constructed, the IR is the basis of analysis that is used to guide transformations, the transformations occur on the IR, the IR is used to regenerate new source code, which is then compiled by a backend compiler. Our source-to-source compiler, ROSE, is a project to support the requirements of DOE. Work on ROSE has focused on the development of a community based project to define source-to-source compilation for a broad range of languages especially targeted at DOE applications (addressing robustness and large scale codes as required for DOE applications). Novel research areas are most easily supported when they can leverage significant tool chains that interact and use source code while allowing the hardware vendor’s own compiler for low level optimizations. In fact, high level optimization are rarely feasible for existing low level compilers for common languages such as C, C++, and Fortran. ROSE addresses the economics of how compiler research can be moved closer to the audience with significant technical performance problems and for whom the hardware is likely to be changing significantly in the next decade. Within ROSE it is less the goal to solve all problems than to permit domain experts to better solve their own problems. This talk will focus on the design and motivation for ROSE as an open community source-to-source compiler infrastructure to support performance optimization, tools for analysis, verification and software assurance, and general cus[Copyright notice will appear here once ’preprint’ option is removed.] tom analysis and transformations needs directly on software using the languages common within DOE High Performance Computing.",
"title": ""
},
{
"docid": "255d3c9f0a3f72eeae80ae3500f85116",
"text": "We have developed an adaptive real-time road detection application based on neural networks for autonomous driving. By taking advantage of the unique structure in road images, the network training can be processed while the system is running. The algorithm employs color features derived from color histograms. We have focused on the automatic adaptation of the system, which has reduced manual road annotations by human.",
"title": ""
},
{
"docid": "ae468573cd37e4f3bf923d76bc9f0779",
"text": "This paper integrates recent work on Path Integral (PI) and Kullback Leibler (KL) divergence stochastic optimal control theory with earlier work on risk sensitivity and the fundamental dualities between free energy and relative entropy. We derive the path integral optimal control framework and its iterative version based on the aforemetioned dualities. The resulting formulation of iterative path integral control is valid for general feedback policies and in contrast to previous work, it does not rely on pre-specified policy parameterizations. The derivation is based on successive applications of Girsanov's theorem and the use of Radon-Nikodým derivative as applied to diffusion processes due to the change of measure in the stochastic dynamics. We compare the PI control derived based on Dynamic Programming with PI based on the duality between free energy and relative entropy. Moreover we extend our analysis on the applicability of the relationship between free energy and relative entropy to optimal control of markov jump diffusions processes. Furthermore, we present the links between KL stochastic optimal control and the aforementioned dualities and discuss its generalizability.",
"title": ""
},
{
"docid": "7e7ba0025d19a0eb73c22ceb1eaddcee",
"text": "This is a landmark book. For anyone interested in language, in dictionaries and thesauri, or natural language processing, the introduction, Chapters 14, and Chapter 16 are must reading. (Select other chapters according to your special interests; see the chapter-by-chapter review). These chapters provide a thorough introduction to the preeminent electronic lexical database of today in terms of accessibility and usage in a wide range of applications. But what does that have to do with digital libraries? Natural language processing is essential for dealing efficiently with the large quantities of text now available online: fact extraction and summarization, automated indexing and text categorization, and machine translation. Another essential function is helping the user with query formulation through synonym relationships between words and hierarchical and other relationships between concepts. WordNet supports both of these functions and thus deserves careful study by the digital library community.",
"title": ""
},
{
"docid": "708ff6ba9b6e593b9cb693ec65916767",
"text": "The emergence of antibiotic resistance mechanisms among bacterial pathogens increases the demand for novel treatment strategies. Lately, the contribution of non-coding RNAs to antibiotic resistance and their potential value as drug targets became evident. RNA attenuator elements in mRNA leader regions couple expression of resistance genes to the presence of the cognate antibiotic. Trans-encoded small RNAs (sRNAs) modulate antibiotic tolerance by base-pairing with mRNAs encoding functions important for resistance such as metabolic enzymes, drug efflux pumps, or transport proteins. Bacteria respond with extensive changes of their sRNA repertoire to antibiotics. Each antibiotic generates a unique sRNA profile possibly causing downstream effects that may help to overcome the antibiotic challenge. In consequence, regulatory RNAs including sRNAs and their protein interaction partners such as Hfq may prove useful as targets for antimicrobial chemotherapy. Indeed, several compounds have been developed that kill bacteria by mimicking ligands for riboswitches controlling essential genes, demonstrating that regulatory RNA elements are druggable targets. Drugs acting on sRNAs are considered for combined therapies to treat infections. In this review, we address how regulatory RNAs respond to and establish resistance to antibiotics in bacteria. Approaches to target RNAs involved in intrinsic antibiotic resistance or virulence for chemotherapy will be discussed.",
"title": ""
}
] |
scidocsrr
|
ebdf8d7a2ab97155e9f6d276884942fc
|
AOD-Net: All-in-One Dehazing Network
|
[
{
"docid": "b2c265eb287b95bf87ecf38a5a4aa97b",
"text": "Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible.\n In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information.\n An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances.",
"title": ""
},
{
"docid": "e0096ccfc6d627faffcd676eaebbb532",
"text": "The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multiscale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.",
"title": ""
}
] |
[
{
"docid": "55b3fe6f2b93fd958d0857b485927bc9",
"text": "In this paper, in order to satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy during high-speed, high-acceleration tracking motions of a 3-degree-of-freedom (3-DOF) planar parallel manipulator, we propose a new control approach, termed convex synchronized (C-S) control. This control strategy is based on the so-called convex combination method, in which the synchronized control method is adopted. Through the adoption of a set of n synchronized controllers, each of which is tuned to satisfy at least one of a set of n closed-loop performance specifications, the resultant set of n closed-loop transfer functions are combined in a convex manner, from which a C-S controller is solved algebraically. Significantly, the resultant C-S controller simultaneously satisfies all n closed-loop performance specifications. Since each synchronized controller is only required to satisfy at least one of the n closed-loop performance specifications, the convex combination method is more efficient than trial-and-error methods, where the gains of a single controller are tuned to satisfy all n closed-loop performance specifications simultaneously. Furthermore, during the design of each synchronized controller, a feedback signal, termed the synchronization error, is employed. Different from the traditional tracking errors, this synchronization error represents the degree of coordination of the active joints in the parallel manipulator based on the manipulator kinematics. As a result, the trajectory tracking accuracy of each active joint and that of the manipulator end-effector is improved. Thus, possessing both the advantages of the convex combination method and synchronized control, the proposed C-S control method can satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy. In addition, unavoidable dynamic modeling errors are addressed through the introduction of a robust performance specification, which ensures that all performance specifications are satisfied despite allowable variations in dynamic parameters, or modeling errors. Experiments conducted on a 3-DOF P-R-R-type planar parallel manipulator demonstrate the aforementioned claims.",
"title": ""
},
{
"docid": "102ed07783d46a8ebadcad4b30ccb3c8",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "6ea55a91df6f65ff9a52a793d09fadeb",
"text": "Many applications of Reservoir Computing (and other signal processing techniques) have to deal with information processing of signals with multiple time-scales. Classical Reservoir Computing approaches can only cope with multiple frequencies to a limited degree. In this work we investigate reservoirs build of band-pass filter neurons which can be made sensitive to a specified frequency band. We demonstrate that many currently difficult tasks for reservoirs can be handled much better by a band-pass filter reservoir.",
"title": ""
},
{
"docid": "65dd0e6e143624c644043507cf9465a7",
"text": "Let G \" be a non-directed graph having n vertices, without parallel edges and slings. Let the vertices of Gn be denoted by F 1 ,. . ., Pn. Let v(P j) denote the valency of the point P i and put (0. 1) V(G,) = max v(Pj). 1ninn Let E(G.) denote the number of edges of Gn. Let H d (n, k) denote the set of all graphs Gn for which V (G n) = k and the diameter D (Gn) of which is-d, In the present paper we shall investigate the quantity (0 .2) Thus we want to determine the minimal number N such that there exists a graph having n vertices, N edges and diameter-d and the maximum of the valencies of the vertices of the graph is equal to k. To help the understanding of the problem let us consider the following interpretation. Let be given in a country n airports ; suppose we want to plan a network of direct flights between these airports so that the maximal number of airports to which a given airport can be connected by a direct flight should be equal to k (i .e. the maximum of the capacities of the airports is prescribed), further it should be possible to fly from every airport to any other by changing the plane at most d-1 times ; what is the minimal number of flights by which such a plan can be realized? For instance, if n = 7, k = 3, d= 2 we have F2 (7, 3) = 9 and the extremal graph is shown by Fig. 1. The problem of determining Fd (n, k) has been proposed and discussed recently by two of the authors (see [1]). In § 1 we give a short summary of the results of the paper [1], while in § 2 and 3 we give some new results which go beyond those of [1]. Incidentally we solve a long-standing problem about the maximal number of edges of a graph not containing a cycle of length 4. In § 4 we mention some unsolved problems. Let us mention that our problem can be formulated also in terms of 0-1 matrices as follows : Let M=(a il) be a symmetrical n by n zero-one matrix such 2",
"title": ""
},
{
"docid": "dc86db1c31883b31faa8159cc8ac116e",
"text": "Document images captured by a digital camera often suffer from serious geometric distortions. In this paper, we propose an active method to correct geometric distortions in a camera-captured document image. Unlike many passive rectification methods that rely on text-lines or features extracted from images, our method uses two structured beams illuminating upon the document page to recover two spatial curves. A developable surface is then interpolated to the curves by finding the correspondence between them. The developable surface is finally flattened onto a plane by solving a system of ordinary differential equations. Our method is a content independent approach and can restore a corrected document image of high accuracy with undistorted contents. Experimental results on a variety of real-captured document images demonstrate the effectiveness and efficiency of the proposed method.",
"title": ""
},
{
"docid": "6c1b18d0873266f99a210910354b836d",
"text": "Ethereum has emerged as a dynamic platform for exchanging cryptocurrency tokens. While token crowdsales cannot simultaneously guarantee buyers both certainty of valuation and certainty of participation, we show that if each token buyer specifies a desired purchase quantity at each valuation then everyone can successfully participate. Our implementation introduces smart contract techniques which recruit outside participants in order to circumvent computational complexity barriers. 1 A crowdsale dilemma This year has witnessed the remarkable rise of token crowdsales. Token incentives enable new community structures by employing novel combinations of currency rewards, software use rights, protocol governance, and traditional equity. Excluding Bitcoin, the total market cap of the token market surged over 60 billion USD in June 20171. Most tokens originate on the Ethereum network, and, at times, the network has struggled to keep up with purchase demands. On several occasions, single crowdsales have consumed the network’s entire bandwidth for consecutive hours. Token distributions can take many forms. Bitcoin, for example, continues to distribute tokens through a competitive, computational process known as mining. In this exposition, we shall concern ourselves exclusively https://coinmarketcap.com/charts/",
"title": ""
},
{
"docid": "faf76771bbb1f2a84148703d2bde9d25",
"text": "In this paper we describe the analysis of using Q-learning to acquire overtaking and blocking skills in simulated car racing games. Overtaking and blocking are more complicated racing skills compared to driving alone, and past work on this topic has only touched overtaking in very limited scenarios. Our work demonstrates that a driving AI agent can learn overtaking and blocking skills via machine learning, and the acquired skills are applicable when facing different opponent types and track characteristics, even on actual built-in tracks in TORCS.",
"title": ""
},
{
"docid": "f821aa7d68474665ba638dda2719925c",
"text": "This is a surprisingly entertaining and informative book and should be of interest to all who employ the services of specialists for the presentation of graphics, as well as the producers of such visual aids. It makes a strong case for properly designed and interesting graphics of a statistical nature, where content and integrity rather than artistic appeal should prevail. The initial chapter, “Graphical Excellence,” establishes basic ground rules which should be observed in the design and execution of statistical graphics, to facilitate the viewer’s understanding of complex data sets. The examples provided cover a broad spectrum of subject matter and treatment. Several of these were produced over a century ago, and are excellent examples in terms of ease of comprehension of complex statistical data. For the reader not familiar with the history of this topic, the graphics provided include some of the work of C. J. Minard, France, published in the mid-1800’s. These are quite impressive, treating subjects involving rather complex statistics with a quality of design and clarity that is rarely found today. I believe the inclusion of additional good examples of a more contemporary nature might possibly improve the value of this chapter, if a subsequent edition is ever considered. Two chapters are dedicated to the subject of graphical inte,gity, and there are lessons here for everyone, including the viewerhser of data graphics. While, unfortunately, there will always be a market for “Lying Graphics,” the employment of visual and statistical tricks for intentional false impressions, the author’s position is that this practice is detrimental and has inhibited acceptance of data graphics for applications where their full value lies. Another, and more widespread fault, which the author singles out, is the practice of leaving the basic graphic design to those with artistic backgrounds whose priorities are biased in favor of data beautification rather than statistical integrity. The remaining chapters deal with the theory, design, and execution of data graphics, and the treatment continues to be interesting, with both good and bad examples provided. Some of the specific issues addressed are the necessity for simplicity and clarity of presentation, the elimination of “chart junk,” maintaining the proper data-to-ink ratio, and the value of the tabular versus graphic presentations when limited data are to be shown. It appears that one of the author’s major goals is to increase the respectability of visual displays of statistical data, thereby broadening their acceptance and utilization for applications which are virtually unlimited. This book provides the rationale and the tools for the effective treatment and display of complex multivariate data, for those responsible for concepts as well as execution of statistical graphics. *",
"title": ""
},
{
"docid": "ccae12bd8e917c9898253472e369ae12",
"text": "Living in the middle of a global communication boom with vast usage of social media, the business environment has become more complicated. So, it is more difficult for marketers to create and increase brand awareness as they have to be able to coordinate messages and efforts across all the existing media to capture customers. Therefore, marketers have to consider these communication tools on branding process in the current competitive market-space. The purpose of this study is to evaluate the factors affecting on brand awareness through social media in Malaysia. Data for this study was obtained from 391 students of Universiti Putra Malaysia. The results indicate customer engagement, brand exposure, and electronic-word-of-mouth have positive correlation with brand awareness in the context of social media and the most effective factor is customer engagement. The study recommends that brands will be profited from social media in order to create and enhance brand awareness and the benefits will be mostly increased by using this media’s interactivity features to tie customers more closely to a brand.",
"title": ""
},
{
"docid": "1032d005c71dd6aa94a48575b6c0447c",
"text": "The major purpose of this paper is to develop a Web-based E-learning Platform for physical education. The Platform provides sports related courseware which includes physical motions, exercise rules and first-aid treatment. The courseware is represented using digital multimedia materials which include video, 2D animation and 3D virtual reality. Courseware within digital multimedia materials not only can increase the learning efficient but also inspires students’ strong interest in learning, especially in the area of Physical Education. The design concept of our project is based on ADDIE model with the five basic phases of analysis, design, development, implementation, and evaluation. Via the usage of this Web-based E-learning platform, user can learn the relative knowledge about sports at anytime and in everyplace. We hope to let players perform efficient self learning for sports skills, indirectly foster mutual help, cooperation, nice norms of law-abiding via the learning of exercise rules, and become skilled at accurate recreation knowledge and first-aid expertise. Moreover, coaches can use the system as a teaching facility to mitigate loading on teaching.",
"title": ""
},
{
"docid": "0dfd46719752d933c966b5e91006bc19",
"text": "A fall is an abnormal activity that occurs rarely, so it is hard to collect real data for falls. It is, therefore, difficult to use supervised learning methods to automatically detect falls. Another challenge in using machine learning methods to automatically detect falls is the choice of engineered features. In this paper, we propose to use an ensemble of autoencoders to extract features from different channels of wearable sensor data trained only on normal activities. We show that the traditional approach of choosing a threshold as the maximum of the reconstruction error on the training normal data is not the right way to identify unseen falls. We propose two methods for automatic tightening of reconstruction error from only the normal activities for better identification of unseen falls. We present our results on two activity recognition datasets and show the efficacy of our proposed method against traditional autoencoder models and two standard one-class classification methods.",
"title": ""
},
{
"docid": "5d04dd7d174cc1b1517035d26785c70f",
"text": "Folksonomies have become a powerful tool to describe, discover, search, and navigate online resources (e.g., pictures, videos, blogs) on the Social Web. Unlike taxonomies and ontologies, which impose a hierarchical categorisation on content, folksonomies directly allow end users to freely create and choose the categories (in this case, tags) that best describe a piece of information. However, the freedom afforded to users comes at a cost: as tags are defined informally, the retrieval of information becomes more challenging. Different solutions have been proposed to help users discover content in this highly dynamic setting. However, they have proved to be effective only for users who have already heavily used the system (active users) and who are interested in popular items (i.e., items tagged by many other users). In this thesis we explore principles to help both active users and more importantly new or inactive users (cold starters) to find content they are interested in even when this content falls into the long tail of medium-to-low popularity items (cold start items). We investigate the tagging behaviour of users on content and show how the similarities between users and tags can be used to produce better recommendations. We then analyse how users create new content on social tagging websites and show how preferences of only a small portion of active users (leaders), responsible for the vast majority of the tagged content, can be used to improve the recommender system’s scalability. We also investigate the growth of the number of users, items and tags in the system over time. We then show how this information can be used to decide whether the benefits of an update of the data structures modelling the system outweigh the corresponding cost. In this work we formalize the ideas introduced above and we describe their implementation. To demonstrate the improvements of our proposal in recommendation efficacy and efficiency, we report the results of an extensive evaluation conducted on three different social tagging websites: CiteULike, Bibsonomy and MovieLens. Our results demonstrate that our approach achieves higher accuracy than state-of-the-art systems for cold start users and for users searching for cold start items. Moreover, while accuracy of our technique is comparable to other techniques for active users, the computational cost that it requires is much smaller. In other words our approach is more scalable and thus more suitable for large and quickly growing settings.",
"title": ""
},
{
"docid": "e13bed064c1a6bf5d4045a9904cf0ccb",
"text": "Serotonin (5-hydroxytryptamine, 5-HT) contributes in multifarious ways to the regulation of brain function, spanning key aspects such as the sleep-wake cycle, appetite, mood and mental health. The 5-HT receptors comprise seven receptor families (5-HT1-7) that are further subdivided into 14 receptor subtypes. The role of the 5-HT receptor in the modulation of neuronal excitability has been well documented. Recently, however, it has become apparent that the 5-HT4 receptor may contribute significantly to cognition and regulates less ostensible aspects of brain function: it engages in metaplastic regulation of synaptic responsiveness in key brain structures such as the hippocampus, thereby specifically promoting persistent forms of synaptic plasticity, and influences the direction of change in synaptic strength in selected hippocampal subfields. This highly specific neuromodulatory control by the 5-HT4 receptor may in turn explain the reported role for this receptor in hippocampus-dependent cognition. In this review article, we describe the role of the 5-HT4 receptor in hippocampal function, and describe how this receptor plays a unique and highly specialised role in synaptic information storage and cognition.",
"title": ""
},
{
"docid": "fd58564a4a6fc087eecde8fcff36cd31",
"text": "An uniaxial bulk-micromachined piezoelectric MEMS accelerometer intended for high bandwidth application is fabricated and characterized. A circular seismic mass (radius = 1200 ¿m) is suspended by a 20 ¿m thick annular silicon membrane (radius = 1800 ¿m). A 24 ¿m PZT screen printed thick film is used as the sensing material on top of the silicon membrane. Accelerations in the out of plane direction induce a force on the seismic mass bending the membrane and a potential difference is measured in the out of plane direction of the stressed PZT. A resonance frequency of 23.50 kHz, a charge sensitivity of 0.23 pC/g and a voltage sensitivity of 0.24 mV/g are measured.",
"title": ""
},
{
"docid": "c2a43e38b988fd53bcc5b15329197bce",
"text": "The semantic analysis of documents is a domain of intense research at present. The works in this domain can take several directions and touch several levels of granularity. In the present work we are exactly interested in the thematic analysis of the textual documents. In our approach, we suggest studying the variation of the theme relevance within a text to identify the major theme and all the minor themes evoked in the text. This allows us at the second level of analysis to identify the relations of thematic associations in a textual corpus. Through the identification and the analysis of these association relations we suggest generating thematic paths allowing users, within the frame work of information search system, to explore the corpus according to their themes of interest and to discover new knowledge by navigating in the thematic association relations.",
"title": ""
},
{
"docid": "b1906b5510a15df533fc184f893483e7",
"text": "Building upon cloud, IoT and smart sensors technologies we design and develop an IoT as a Service (iTaaS) framework, that transforms a user’s mobile device (e.g. a smart phone) to an IoT gateway which allows for fast and efficient data streams transmission to the cloud. We develop a two-fold solution, based on micro-services for the IoT (users’ smart devices) and the cloud side (back-end services). iTaaS includes configurations for (a) the IoT side to support data collection from IoT devices to a gateway on a real time basis and, (b) the cloud back-end side to support data sharing, storage and processing. iTaaS provides the technology foreground to enable immediate application deployments in the domain of interest. An obvious and promising implementation of this technology is e-Health and remote health monitoring. As a proof of concept we implement a real time remote patient monitoring system that integrates the proposed framework and uses Bluetooth Low Energy (BLE) pulse oximeter and heart rate monitoring sensing devices. The experimental analysis shows fast data collection, as (for our experimental setup) data is transmitted from the IoT side (i.e. the gateway) to the cloud in less than 130 ms. We also stress the back-end system with high user concurrency (e.g. with 40 users per second) and high data streams (e.g. 240 data records per second) and we show that the requests are executed at around 1 s, a number that signifies a satisfactory performance by considering the number of requests, the network latency and the relatively small size of the Virtual Machines implementing services on the cloud (2 GB RAM, 1 CPU and 20 GB hard disk size). © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "900c1249cb3d877f57d16b2550f7db80",
"text": "In this paper, a monopulse slot array antenna working at Ka-band based on dual-layer substrate integrated waveguide (SIW) is presented. The 16×16 slot array antenna with feeding network and the monopulse sum-difference comparator is designed on separated layers which takes the advantage of compact size and high aperture efficiency. The honeycomb SIW (HCSIW) is adopted to design the novel right-angle corner and 90 degree 3-dB coupler. The simulated bandwidth (S11 of sum port < −10 dB) of the antenna is about 9.43%. The maximum gain of sum pattern at 35 GHz is around 26.1 dB with sidelobe suppression of −27 dB in both E-plane and H-plane. This proposed antenna presents a good candidate for the directional-finding systems which are restricted by aperture size.",
"title": ""
},
{
"docid": "aa0d6d4fb36c2a1d18dac0930e89179e",
"text": "The interest in biomass is increasing in the light of the growing concern about global warming and the resulting climate change. The emission of the greenhouse gas CO2 can be reduced when 'green' biomass-derived transportation fuels are used. One of the most promising routes to produce green fuels is the combination of biomass gasification (BG) and Fischer-Tropsch (FT) synthesis, wherein biomass is gasified and after cleaning the biosyngas is used for FT synthesis to produce long-chain hydrocarbons that are converted into ‘green diesel’. To demonstrate this route, a small FT unit based on Shell technology was operated for in total 650 hours on biosyngas produced by gasification of willow. In the investigated system, tars were removed in a high-temperature tar cracker and other impurities, like NH3 and H2S were removed via wet scrubbing followed by active-carbon and ZnO filters. The experimental work and the supporting system analysis afforded important new insights on the desired gas cleaning and the optimal line-up for biomass gasification processes with a maximised conversion to FT liquids. Two approaches were considered: a front-end approach with reference to the (small) scale of existing CFB gasifiers (1-100 M Wth) and a back-end approach with reference to the desired (large) scale for FT synthesis (500-1000 MWth). In general, the sum of H2 and CO in the raw biosyngas is an important parameter, whereas the H2/CO ratio is less relevant. BTX (i.e . benzene, toluene, and xylenes) are the design guideline for the gas cleaning and with this the tar issue is de-facto solved (as tars are easier to remove than BTX). To achieve high yields of FT products the presence of a tar cracker in the system is required. Oxygen gasification allows a further increase in yield of FT products as a N2-free gas is required for off-gas recycling. The scale of the BG-FT installation determines the line-up of the gas cleaning and the integrated process. It is expected that the future of BG-FT systems will be large plants with pressurised oxygen blown gasifiers and maximised Fischer-Tropsch synthesis.",
"title": ""
},
{
"docid": "ff7c790af7eaaea4bf3a354d21fd9189",
"text": "Among the large number of contributions concerning the localization techniques for wireless sensor networks (WSNs), there is still no simple, energy and cost efficient solution suitable in outdoor scenarios. In this paper, a technique based on antenna arrays and angle-ofarrival (AoA) measurements is carefully discussed. While the AoA algorithms are rarely considered for WSNs due to the large dimensions of directional antennas, some system configurations are investigated that can be easily incorporated in pocket-size wireless devices. A heuristic weighting function that enables decreasing the location errors is introduced. Also, the detailed performance analysis of the presented system is provided. The localization accuracy is validated through realistic Monte-Carlo simulations that take into account the specificity of propagation conditions in WSNs as well as the radio noise effects. Finally, trade-offs between the accuracy, localization time and the number of anchors in a network are addressed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a6d550a64dc633e50ee2b21255344e7b",
"text": "Sentiment classification is a much-researched field that identifies positive or negative emotions in a large number of texts. Most existing studies focus on document-based approaches and documents are represented as bag-of word. Therefore, this feature representation fails to obtain the relation or associative information between words and it can't distinguish different opinions of a sentiment word with different targets. In this paper, we present a dependency tree-based sentence-level sentiment classification approach. In contrast to a document, a sentence just contains little information and a small set of features which can be used for the sentiment classification. So we not only capture flat features (bag-of-word), but also extract structured features from the dependency tree of a sentence. We propose a method to add more information to the dependency tree and provide an algorithm to prune dependency tree to reduce the noisy, and then introduce a convolution tree kernel-based approach to the sentence-level sentiment classification. The experimental results show that our dependency tree-based approach achieved significant improvement, particularly for implicit sentiment classification.",
"title": ""
}
] |
scidocsrr
|
cce73ff6b2aed88e7cc7cccdd2a4e8cb
|
Integrated Anomaly Detection for Cyber Security of the Substations
|
[
{
"docid": "480f940bf5a2226b659048d9840582d9",
"text": "Vulnerability assessment is a requirement of NERC's cybersecurity standards for electric power systems. The purpose is to study the impact of a cyber attack on supervisory control and data acquisition (SCADA) systems. Compliance of the requirement to meet the standard has become increasingly challenging as the system becomes more dispersed in wide areas. Interdependencies between computer communication system and the physical infrastructure also become more complex as information technologies are further integrated into devices and networks. This paper proposes a vulnerability assessment framework to systematically evaluate the vulnerabilities of SCADA systems at three levels: system, scenarios, and access points. The proposed method is based on cyber systems embedded with the firewall and password models, the primary mode of protection in the power industry today. The impact of a potential electronic intrusion is evaluated by its potential loss of load in the power system. This capability is enabled by integration of a logic-based simulation method and a module for the power flow computation. The IEEE 30-bus system is used to evaluate the impact of attacks launched from outside or from within the substation networks. Countermeasures are identified for improvement of the cybersecurity.",
"title": ""
}
] |
[
{
"docid": "f7fc47986046f9d02f9b89f244341123",
"text": "Incorporating the body dynamics of compliant robots into their controller architectures can drastically reduce the complexity of locomotion control. An extreme version of this embodied control principle was demonstrated in highly compliant tensegrity robots, for which stable gait generation was achieved by using only optimized linear feedback from the robot's sensors to its actuators. The morphology of quadrupedal robots has previously been used for sensing and for control of a compliant spine, but never for gait generation. In this paper, we successfully apply embodied control to the compliant, quadrupedal Oncilla robot. As initial experiments indicated that mere linear feedback does not suffice, we explore the minimal requirements for robust gait generation in terms of memory and nonlinear complexity. Our results show that a memoryless feedback controller can generate a stable trot by learning the desired nonlinear relation between the input and the output signals. We believe this method can provide a very useful tool for transferring knowledge from open loop to closed loop control on compliant robots.",
"title": ""
},
{
"docid": "eda1de3b80af48130b6c90858d62e4fd",
"text": "Social media analytics (SMA) is a rapidly emerging capability that provides organisations with the ability to analyse and interpret large amounts of online content to determine the attitudes and behaviours of people. The adoption and impact of SMA by businesses is still largely unexplored. In this paper we develop a framework, based on organisational motivation theory and the resourcebased view that explains how SMA can bring benefits to organisations. The framework includes three key concepts: organisational motivations, SMA capabilities and benefits. The framework is developed from a synthesis of relevant literature and an analysis of 40 success stories published by SMA vendors. The framework provides a ranked catalogue of clearly defined motivations, SMA capabilities and benefits. It provides researchers with a theoretically grounded base for understanding how SMA impacts organisations, and provides a useful starting point for future empirical research. For practitioners, the framework provides a systematic means of understanding how SMA might be used to bring benefits.",
"title": ""
},
{
"docid": "42bd08ed5a65d2b16e6a94708e88f0ed",
"text": "Designers of distributed embedded systems face many challenges in determining the tradeoffs when defining a system architecture or retargeting an existing design. Communication synthesis, the automatic generation of the necessary software and hardware for system components to exchange data, is required to more effectively explore the design space and automate very error prone tasks. The paper examines the problem of mapping a high level specification to an arbitrary architecture that uses specific, common bus protocols for interprocessor communication. The communication model presented allows for easy retargeting to different bus topologies, protocols, and illustrates that global considerations are required to achieve a correct implementation. An algorithm is presented that partitions multihop communication timing constraints to effectively utilize the bus bandwidth along a message path. The communication synthesis tool is integrated with a system co-simulator to provide performance data for a given mapping.",
"title": ""
},
{
"docid": "174fb8b7cb0f45bed49a50ce5ad19c88",
"text": "De-noising and extraction of the weak signature are crucial to fault prognostics in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. r 2005 Elsevier Ltd. All rights reserved. see front matter r 2005 Elsevier Ltd. All rights reserved. jsv.2005.03.007 ding author. Tel.: +1 414 229 3106; fax: +1 414 229 3107. resses: haiqiu@uwm.edu (H. Qiu), jaylee@uwm.edu (J. Lee), jinglin@mail.ioc.ac.cn (J. Lin).",
"title": ""
},
{
"docid": "1b04911f677767284063133908ab4bb1",
"text": "An increasing number of companies are beginning to deploy services/applications in the cloud computing environment. Enhancing the reliability of cloud service has become a critical and challenging research problem. In the cloud computing environment, all resources are commercialized. Therefore, a reliability enhancement approach should not consume too much resource. However, existing approaches cannot achieve the optimal effect because of checkpoint image-sharing neglect, and checkpoint image inaccessibility caused by node crashing. To address this problem, we propose a cloud service reliability enhancement approach for minimizing network and storage resource usage in a cloud data center. In our proposed approach, the identical parts of all virtual machines that provide the same service are checkpointed once as the service checkpoint image, which can be shared by those virtual machines to reduce the storage resource consumption. Then, the remaining checkpoint images only save the modified page. To persistently store the checkpoint image, the checkpoint image storage problem is modeled as an optimization problem. Finally, we present an efficient heuristic algorithm to solve the problem. The algorithm exploits the data center network architecture characteristics and the node failure predicator to minimize network resource usage. To verify the effectiveness of the proposed approach, we extend the renowned cloud simulator Cloudsim and conduct experiments on it. Experimental results based on the extended Cloudsim show that the proposed approach not only guarantees cloud service reliability, but also consumes fewer network and storage resources than other approaches.",
"title": ""
},
{
"docid": "1fb13cda340d685289f1863bb2bfd62b",
"text": "1 Assistant Professor, Department of Prosthodontics, Ibn-e-Siena Hospital and Research Institute, Multan Medical and Dental College, Multan, Pakistan 2 Assistant Professor, Department of Prosthodontics, College of Dentistry, King Saud University, Riyadh, Saudi Arabia 3 Head Department of Prosthodontics, Armed Forces Institute of Dentistry, Rawalpindi, Pakistan For Correspondence: Dr Salman Ahmad, House No 10, Street No 2, Gulshan Sakhi Sultan Colony, Surej Miani Road, Multan, Pakistan. Email: drsalman21@gmail.com. Cell: 0300–8732017 INTRODUCTION",
"title": ""
},
{
"docid": "ccf7390abc2924e4d2136a2b82639115",
"text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.",
"title": ""
},
{
"docid": "2b1a9bc5ae7e9e6c2d2d008e2a2384b5",
"text": "Network information distribution is a fundamental service for any anonymization network. Even though anonymization and information distribution about the network are two orthogonal issues, the design of the distribution service has a direct impact on the anonymization. Requiring each node to know about all other nodes in the network (as in Tor and AN.ON -- the most popular anonymization networks) limits scalability and offers a playground for intersection attacks. The distributed designs existing so far fail to meet security requirements and have therefore not been accepted in real networks.\n In this paper, we combine probabilistic analysis and simulation to explore DHT-based approaches for distributing network information in anonymization networks. Based on our findings we introduce NISAN, a novel approach that tries to scalably overcome known security problems. It allows for selecting nodes uniformly at random from the full set of all available peers, while each of the nodes has only limited knowledge about the network. We show that our scheme has properties similar to a centralized directory in terms of preventing malicious nodes from biasing the path selection. This is done, however, without requiring to trust any third party. At the same time our approach provides high scalability and adequate performance. Additionally, we analyze different design choices and come up with diverse proposals depending on the attacker model. The proposed combination of security, scalability, and simplicity, to the best of our knowledge, is not available in any other existing network information distribution system.",
"title": ""
},
{
"docid": "34992b86a8ac88c5f5bbca770954ae61",
"text": "Entity search over text corpora is not geared for relationship queries where answers are tuples of related entities and where a query often requires joining cues from multiple documents. With large knowledge graphs, structured querying on their relational facts is an alternative, but often suffers from poor recall because of mismatches between user queries and the knowledge graph or because of weakly populated relations.\n This paper presents the TriniT search engine for querying and ranking on extended knowledge graphs that combine relational facts with textual web contents. Our query language is designed on the paradigm of SPO triple patterns, but is more expressive, supporting textual phrases for each of the SPO arguments. We present a model for automatic query relaxation to compensate for mismatches between the data and a user's query. Query answers -- tuples of entities -- are ranked by a statistical language model. We present experiments with different benchmarks, including complex relationship queries, over a combination of the Yago knowledge graph and the entity-annotated ClueWeb'09 corpus.",
"title": ""
},
{
"docid": "8d071dbd68902f3bac18e61caa0828dd",
"text": "This paper demonstrates that it is possible to construct the Stochastic flash ADC using standard digital cells. In order to minimize the analog circuit requirements which cost high, it is appropriate to begin the architecture with highly digital. The proposed Stochastic flash ADC uses a random comparator offset to set the trip points. Since the comparator are no longer sized for small offset, they can be shrunk down into digital cells. Using comparators that are implemented as digital cells produces a large variation of comparator offset. Typically, this is considered a disadvantage, but in our case, this large standard deviation of offset is used to set the input signal range. By designing an ADC that is made up entirely of digital cells, it is natural candidate for a synthesizable ADC. The analog comparator which is used in this ADC is constructed from standard digital NAND gates connected with SR latch to minimize the memory effects. A Wallace tree adder is used to sum the total number of comparator output, since the order of comparator output is random. Thus, all the components including the comparator and Wallace tree adder can be implemented using standard digital cells. [1] INTRODUCTION As CMOS designs are scaled to smaller technology nodes, many benefits arise, as well as challenges. There are benefits in speed and power due to decreased capacitance and lower supply voltage, yet reduction in intrinsic device gain and lower supply voltage make it difficult to migrate previous analog designs to smaller scaled processes. Moreover, as scaling trends continue, the analog portion of a mixed-signal system tends to consume proportionally more power and area and have a higher design cost than the digital counterpart. This tends to increase the overall design cost of the mixed-signal design. Automatically synthesized digital circuits get all the benefits of scaling, but analog circuits get these benefits at a large cost. The most essential component of ADC is the comparator, which translates from the analog world to digital world. Since comparator defines the boundary between analog and digital realms, the flash ADC architecture will be considered, as it places the comparator as close to the analog input signal. Flash ADCs use a reference ladder to generate the comparator trip points that correspond to each digital code. Typically the references are either generated by a resistor ladder or some form of analog interpolation, but the effect is the same: a …",
"title": ""
},
{
"docid": "897fb39d295defc4b6e495236a2c74b1",
"text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.",
"title": ""
},
{
"docid": "528fccc8044c8d7e3dfce674af4aae8e",
"text": "One of the basic tasks for autonomous flight with aerial vehicles (drones) is the detection of obstacles within its flight environment. As the technology develops and becomes more robust, drones will become part of the toolkit to aid maintenance repair and operation (MRO) and ground personnel at airports. Currently laser technology is the primary means for obstacle detection as it provides high resolution and long range. The high intensity laser beam can result in temporary blindness for pilots when the beam targets the windscreen of aircraft on the ground or on final approach within the vicinity of the airport. An alternative is ultrasonic sensor technology, but this suffers from poor angular resolution. In this paper we present a solution using time-of-flight (TOF) data from ultrasonic sensors. This system uses a single commercial 40 kHz combined transmitter/ receiver which returns the distance to the nearest obstacle in its field of view, +/30 degrees given the speed of sound in air at ambient temperature. Two sonar receivers located either side of the transmitter / receiver are mounted on a horizontal rotating shaft. Rotation of this shaft allows for separate sonar observations at regular intervals which cover the field of view of the transmitter / receiver. To reduce the sampling frequency an envelope detector is used prior to the analogue-digital-conversion for each of the sonar channels. A scalar Kalman filter for each channel reduces the effects of signal noise by providing real time filtering (Drongelen, 2017a). Four signal metrics are used to determine the location of the obstacle in the sensors field of view: 1. Maximum (Peak) frequency 2. Cross correlation of raw data and PSD 3. Power Spectral Density 4. Energy Spectral Density Results obtained in an actual indoor environment are presented to support the validity of the proposed algorithm. © 2017 The Authors. Published by Elsevier Ltd. Peer-review under responsibility of the scientific committee of the INAIR 2017. * Corresponding author. Tel.: +353 59 9175459. E-mail address: gibbsg@itcarlow.ie. Available online at www.sciencedirect.com",
"title": ""
},
{
"docid": "d455f379442de99caaccc312737546df",
"text": "Research suggests that rumination increases anger and aggression. Mindfulness, or present-focused and intentional awareness, may counteract rumination. Using structural equation modeling, we examined the relations between mindfulness, rumination, and aggression. In a pair of studies, we found a pattern of correlations consistent with rumination partially mediating a causal link between mindfulness and hostility, anger, and verbal aggression. The pattern was not consistent with rumination mediating the association between mindfulness and physical aggression. Although it is impossible with the current nonexperimental data to test causal mediation, these correlations support the idea that mindfulness could reduce rumination, which in turn could reduce aggression. These results suggest that longitudinal work and experimental manipulations mindfulness would be worthwhile approaches for further study of rumination and aggression. We discuss possible implications of these results.",
"title": ""
},
{
"docid": "8e5147258f806ebc25d9a60b1fc3f8ee",
"text": "Existing inefficient traffic light cycle control causes numerous problems, such as long delay and waste of energy. To improve efficiency, taking real-time traffic information as an input and dynamically adjusting the traffic light duration accordingly is a must. Existing works either split the traffic signal into equal duration or only leverage limited traffic information. In this paper, we study how to decide the traffic signal duration based on the collected data from different sensors. We propose a deep reinforcement learning model to control the traffic light cycle. In the model, we quantify the complex traffic scenario as states by collecting traffic data and dividing the whole intersection into small grids. The duration changes of a traffic light are the actions, which are modeled as a high-dimension Markov decision process. The reward is the cumulative waiting time difference between two cycles. To solve the model, a convolutional neural network is employed to map states to rewards. The proposed model incorporates multiple optimization elements to improve the performance, such as dueling network, target network, double Q-learning network, and prioritized experience replay. We evaluate our model via simulation on a Simulation of Urban MObility simulator. Simulation results show the efficiency of our model in controlling traffic lights.",
"title": ""
},
{
"docid": "b7670e3d63c14da9f4b2e0ee616d3f13",
"text": "In this paper, we present a Character-Aware Neural Network (Char-Net) for recognizing distorted scene text. Our CharNet is composed of a word-level encoder, a character-level encoder, and a LSTM-based decoder. Unlike previous work which employed a global spatial transformer network to rectify the entire distorted text image, we take an approach of detecting and rectifying individual characters. To this end, we introduce a novel hierarchical attention mechanism (HAM) which consists of a recurrent RoIWarp layer and a characterlevel attention layer. The recurrent RoIWarp layer sequentially extracts a feature region corresponding to a character from the feature map produced by the word-level encoder, and feeds it to the character-level encoder which removes the distortion of the character through a simple spatial transformer and further encodes the character region. The character-level attention layer then attends to the most relevant features of the feature map produced by the characterlevel encoder and composes a context vector, which is finally fed to the LSTM-based decoder for decoding. This approach of adopting a simple local transformation to model the distortion of individual characters not only results in an improved efficiency, but can also handle different types of distortion that are hard, if not impossible, to be modelled by a single global transformation. Experiments have been conducted on six public benchmark datasets. Our results show that CharNet can achieve state-of-the-art performance on all the benchmarks, especially on the IC-IST which contains scene text with large distortion. Code will be made available.",
"title": ""
},
{
"docid": "ded8b8390c3f74473feb35d6af45ec00",
"text": "Overwhelming evidence supports the importance of sleep for memory consolidation. Medical students are often deprived of sufficient sleep due to large amounts of clinical duties and university load, we therefore investigated how study and sleep habits influence university performance. We performed a questionnaire-based study with 31 medical students of the University of Munich (second and third clinical semesters; surgery and internal medicine). The students kept a diary (in 30-min bins) on their daily schedules (times when they studied by themselves, attended classes, slept, worked on their thesis, or worked to earn money). The project design involved three 2-wk periods (A: during the semester; B: directly before the exam period--pre-exam; C: during the subsequent semester break). Besides the diaries, students completed once questionnaires about their sleep quality (Pittsburgh Sleep Quality Index [PSQI]), their chronotype (Munich Chronotype Questionnaire [MCTQ]), and their academic history (previous grades, including the previously achieved preclinical board exam [PBE]). Analysis revealed significant correlations between the actual sleep behavior during the semester (MS(diary); mid-sleep point averaged from the sleep diaries) during the pre-exam period and the achieved grade (p = 0.002) as well as between the grades of the currently taken exam and the PBE (p = 0.002). A regression analysis with MS(diary) pre-exam and PBE as predictors in a model explained 42.7% of the variance of the exam grade (effect size 0.745). Interestingly, MS(diary)--especially during the pre-exam period-was the strongest predictor for the currently achieved grade, along with the preclinical board exam as a covariate, whereas the chronotype did not significantly influence the exam grade.",
"title": ""
},
{
"docid": "920d47a0f133dc41ba695c97bdd91ed8",
"text": "Sleep and wakefulness are regulated to occur at appropriate times that are in accordance with our internal and external environments. Avoiding danger and finding food, which are life-essential activities that are regulated by emotion, reward and energy balance, require vigilance and therefore, by definition, wakefulness. The orexin (hypocretin) system regulates sleep and wakefulness through interactions with systems that regulate emotion, reward and energy homeostasis.",
"title": ""
},
{
"docid": "16b5c5d176f2c9292d9c9238769bab31",
"text": "We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.",
"title": ""
},
{
"docid": "7c9d35fb9cec2affbe451aed78541cef",
"text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Up to a very recent period, almost all individuals had the experience of this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility to medical imaging, the clinical applications now have better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as dataset and deep neural network as technique. This technique is based on stacked sparse auto-encoder and a softmax classifier. Those techniques, sparse auto-encoder and softmax, are used to train a deep neural network. The novelty here is to apply deep neural network to diagnosis of dental caries. This approach was tested on a real dataset and has demonstrated a good performance of detection. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.",
"title": ""
},
{
"docid": "f25b62f8c9f361cb5c7b3f47614b02ad",
"text": "This study focuses on CS minor students' decisions to drop out from the CS1 course. The high level of drop out percentage has been a problem at Helsinki University of Technology for many years. This course has yearly enrolment of 500-600 students and the drop out percentage has varied from 30-50 percents.Since we did not have clear picture of drop out reasons we conducted a qualitative interview research in which 18 dropouts from the CS1 course were interviewed. The reasons of drop out were categorized and, in addition, each case was investigated individually. This procedure enabled us to both list the reasons and to reveal the cumulative nature of drop out reasons.The results indicate that several reasons affect students' decision to quit the CS1 course. The most frequent reasons were the lack of time and the lack of motivation. However, both of these reasons were in turn affected by factors, such as the perceived difficulty of the course, general difficulties with time managing and planning studies, or the decision to prefer something else. Furthermore, low comfort level and plagiarism played a role in drop out. In addition, drop out reasons cumulated.This study shows that the complexity and large variety of factors involved in students' decision to drop the course. This indicates that simple actions to improve teaching or organization on a CS1 course to reduce drop out may be ineffective. Efficient intervention to the problem apparently requires a combination of many different actions that take into consideration the versatile nature of reasons involved in drop out.",
"title": ""
}
] |
scidocsrr
|
f6047a528d25c4d52d310ffcc641c731
|
An approach for detection and family classification of malware based on behavioral analysis
|
[
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "f1ce50e0b787c1d10af44252b3a7e656",
"text": "This paper proposes a scalable approach for distinguishing malicious files from clean files by investigating the behavioural features using logs of various API calls. We also propose, as an alternative to the traditional method of manually identifying malware files, an automated classification system using runtime features of malware files. For both projects, we use an automated tool running in a virtual environment to extract API call features from executables and apply pattern recognition algorithms and statistical methods to differentiate between files. Our experimental results, based on a dataset of 1368 malware and 456 cleanware files, provide an accuracy of over 97% in distinguishing malware from cleanware. Our techniques provide a similar accuracy for classifying malware into families. In both cases, our results outperform comparable previously published techniques.",
"title": ""
},
{
"docid": "f5d769be1305755fe0753d1e22cbf5c9",
"text": "The number of malware is increasing rapidly and a lot of malware use stealth techniques such as encryption to evade pattern matching detection by anti-virus software. To resolve the problem, behavior based detection method which focuses on malicious behaviors of malware have been researched. Although they can detect unknown and encrypted malware, they suffer a serious problem of false positives against benign programs. For example, creating files and executing them are common behaviors performed by malware, however, they are also likely performed by benign programs thus it causes false positives. In this paper, we propose a malware detection method based on evaluation of suspicious process behaviors on Windows OS. To avoid false positives, our proposal focuses on not only malware specific behaviors but also normal behavior that malware would usually not do. Moreover, we implement a prototype of our proposal to effectively analyze behaviors of programs. Our evaluation experiments using our malware and benign program datasets show that our malware detection rate is about 60% and it does not cause any false positives. Furthermore, we compare our proposal with completely behavior-based anti-virus software. Our results show that our proposal puts few burdens on users and reduces false positives.",
"title": ""
}
] |
[
{
"docid": "f2a677515866e995ff8e0e90561d7cbc",
"text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.",
"title": ""
},
{
"docid": "d73af831462af9ea510fb9a00c152ab6",
"text": "Cloud computing is a new paradigm for using ICT services— only when needed and for as long as needed, and paying only for service actually consumed. Benchmarking the increasingly many cloud services is crucial for market growth and perceived fairness, and for service design and tuning. In this work, we propose a generic architecture for benchmarking cloud services. Motivated by recent demand for data-intensive ICT services, and in particular by processing of large graphs, we adapt the generic architecture to Graphalytics, a benchmark for distributed and GPU-based graph analytics platforms. Graphalytics focuses on the dependence of performance on the input dataset, on the analytics algorithm, and on the provisioned infrastructure. The benchmark provides components for platform configuration, deployment, and monitoring, and has been tested for a variety of platforms. We also propose a new challenge for the process of benchmarking data-intensive services, namely the inclusion of the data-processing algorithm in the system under test; this increases significantly the relevance of benchmarking results, albeit, at the cost of increased benchmarking duration.",
"title": ""
},
{
"docid": "efbaec32e42bdb9f12341d6be588a985",
"text": "Bacterial quorum sensing (QS) is a density dependent communication system that regulates the expression of certain genes including production of virulence factors in many pathogens. Bioactive plant extract/compounds inhibiting QS regulated gene expression may be a potential candidate as antipathogenic drug. In this study anti-QS activity of peppermint (Mentha piperita) oil was first tested using the Chromobacterium violaceum CVO26 biosensor. Further, the findings of the present investigation revealed that peppermint oil (PMO) at sub-Minimum Inhibitory Concentrations (sub-MICs) strongly interfered with acyl homoserine lactone (AHL) regulated virulence factors and biofilm formation in Pseudomonas aeruginosa and Aeromonas hydrophila. The result of molecular docking analysis attributed the QS inhibitory activity exhibited by PMO to menthol. Assessment of ability of menthol to interfere with QS systems of various Gram-negative pathogens comprising diverse AHL molecules revealed that it reduced the AHL dependent production of violacein, virulence factors, and biofilm formation indicating broad-spectrum anti-QS activity. Using two Escherichia coli biosensors, MG4/pKDT17 and pEAL08-2, we also confirmed that menthol inhibited both the las and pqs QS systems. Further, findings of the in vivo studies with menthol on nematode model Caenorhabditis elegans showed significantly enhanced survival of the nematode. Our data identified menthol as a novel broad spectrum QS inhibitor.",
"title": ""
},
{
"docid": "5cb970d7a207865ed0048fd20ce5fff2",
"text": "Effective evaluation is necessary in order to ensure systems adequately meet the requirements and information processing needs of the users and scope of the system. Technology acceptance model is one of the most popular and effective models for evaluation. A number of studies have proposed evaluation frameworks to aid in evaluation work. The end users for evaluation the acceptance of new technology or system have a lack of knowledge to examine and evaluate some features in the new technology/system. This will give a fake evaluation results of the new technology acceptance. This paper proposes a novel evaluation model to evaluate user acceptance of software and system technology by modifying the dimensions of the Technology Acceptance Model (TAM) and added additional success dimension for expert users. The proposed model has been validated by an empirical study based on a questionnaire. The results indicated that the expert users have a strong significant influence to help in evaluation and pay attention to some features that end users have lack of knowledge to evaluate it.",
"title": ""
},
{
"docid": "eabb50988aeb711995ff35833a47770d",
"text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.",
"title": ""
},
{
"docid": "7057a9c1cedafe1fca48b886afac20d3",
"text": "In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.",
"title": ""
},
{
"docid": "e723f76f4c9b264cbf4361b72c7cbf10",
"text": "With the constant growth in Information and Communication Technology (ICT) in the last 50 years or so, electronic communication has become part of the present day system of living. Equally, smileys or emoticons were innovated in 1982, and today the genre has attained a substantial patronage in various aspects of computer-mediated communication (CMC). Ever since written forms of electronic communication lack the face-to-face (F2F) situation attributes, emoticons are seen as socio-emotional suppliers to the CMC. This article reviews scholarly research in that field in order to compile variety of investigations on the application of emoticons in some facets of CMC, i.e. Facebook, Instant Messaging (IM), and Short Messaging Service (SMS). Key findings of the review show that emoticons do not just serve as paralanguage elements rather they are compared to word morphemes with distinctive significative functions. In other words, they are morpheme-like units and could be derivational, inflectional, or abbreviations but not unbound. The findings also indicate that emoticons could be conventionalized as well as being paralinguistic elements, therefore, they should be approached as contributory to conversation itself not mere compensatory to language.",
"title": ""
},
{
"docid": "9809596697119fb50978470aaec837d6",
"text": "Tuning of PID controller parameters is one of the usual tasks of the control engineers due to the wide applications of this class of controllers in industry. In this paper the Iterative Feedback Tuning (IFT) method is applied to tune the PID parameters. The main advantage of this method is that there is no need to the model of the system, so that is useful in many processes which there is no obvious model of the system. In many cases this feature can be so useful in tuning the controller parameters. The IFT is applied here to tune the PID parameters. Speed control of DC motor was employed to demonstrate the effectiveness of the method. The results is compared with other tuning methods and represented the good performance of the designed controller. As it is shown, the step response of the system controlled by PID tuned with IFT has more robustness and performs well.",
"title": ""
},
{
"docid": "9b2291ef3e605d85b6d0dba326aa10ef",
"text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. Our results indicate that the multi-objective approach identifies the exact target solution more often that the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.",
"title": ""
},
{
"docid": "379138e53ed204ff46b657185ff86368",
"text": "Human pose-estimation in a multi-person image involves detection of various body parts and grouping them into individual person clusters. While the former task is challenging due to mutual occlusions, the combinatorial complexity of the latter task is very high. We propose a greedy part assignment algorithm that exploits the inherent structure of the human body to lower the complexity of the graphical model, compared to any of the prior published works. This is accomplished by (i) reducing the number of part-candidates using the estimated number of people in the image, (ii) doing a greedy sequential assignment of partclasses, following the kinematic chain from head to ankle (iii) doing a greedy assignment of parts in each part-class set, to person-clusters (iv) limiting the candidate person clusters to the most proximal clusters using human anthropometric data and (v) using only a specific subset of pre-assigned parts for establishing pairwise structural constraints. We show that, these steps sparsify the bodyparts relationship graph and reduces the algorithm's complexity to be linear in the number of candidates of any single part-class. We also propose a method for spawning person-clusters from any unassigned significant body part to make the algorithm robust to occlusions. We show that, our proposed part-assignment algorithm, despite using a sub-optimal pre-trained DNN model, achieves state of the art results on both MPII and WAF pose datasets, demonstrating the robustness of our approach.",
"title": ""
},
{
"docid": "3a0275d7834a6fb1359bb7d3bef14e97",
"text": "With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve quality of service (QoS) in IoT networks is becoming a challenging problem. Currently most interaction between the IoT devices and the supporting back-end servers is done through large scale cloud data centers. However, with the exponential growth of IoT devices and the amount of data they produce, communication between \"things\" and cloud will be costly, inefficient, and in some cases infeasible. Fog computing serves as solution for this as it provides computation, storage, and networking resource for IoT, closer to things and users. One of the promising advantages of fog is reducing service delay for end user applications, whereas cloud provides extensive computation and storage capacity with a higher latency. Thus it is necessary to understand the interplay between fog computing and cloud, and to evaluate the effect of fog computing on the IoT service delay and QoS. In this paper we will introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.",
"title": ""
},
{
"docid": "ce13d49ba27d33db28fd5aaf991b2214",
"text": "The performance of a standard model predictive controller (MPC) is directly related to its predictive model. If there are unmodeled periodic disturbances in the actual system, MPC will be difficult to suppress the disturbances, thus causing fluctuations of system output. To solve this problem, this paper proposes an improved MPC named predictive-integral-resonant control (PIRC). Compared with the standard MPC, the proposed PIRC could enhance the suppression ability for disturbances by embedding the internal model composing of the integral and resonant loop. Furthermore, this paper applies the proposed PIRC to PMSM drives, and proposes the PMSM control strategy based on the cascaded PIRC, which could suppress periodic disturbances caused by the dead time effects, current sampling errors, and so on. The experimental results show that the PIRC can suppress periodic disturbances in the drive system, thus ensuring good current and speed performance. Meanwhile, the PIRC could maintain the excellent dynamic performance as the standard MPC.",
"title": ""
},
{
"docid": "e48da0cf3a09b0fd80f0c2c01427a931",
"text": "Timely analysis of information in cybersecurity necessitates automated information extraction from unstructured text. Unfortunately, state-of-the-art extraction methods require training data, which is unavailable in the cyber-security domain. To avoid the arduous task of handlabeling data, we develop a very precise method to automatically label text from several data sources by leveraging article-specific structured data and provide public access to corpus annotated with cyber-security entities. We then prototype a maximum entropy model that processes this corpus of auto-labeled text to label new sentences and present results showing the Collins Perceptron outperforms the MLE with LBFGS and OWL-QN optimization for parameter fitting. The main contribution of this paper is an automated technique for creating a training corpus from text related to a database. As a multitude of domains can benefit from automated extraction of domain-specific concepts for which no labeled data is available, we hope our solution is widely applicable.",
"title": ""
},
{
"docid": "3f418dd3a1374a7928e2428aefe4fe29",
"text": "The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach for tackling this problem is commonly known as pruning and it consists of training a larger than necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "15fa73633d6ec7539afc91bb1f45098f",
"text": "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.",
"title": ""
},
{
"docid": "45f120b05b3c48cd95d5dd55031987cb",
"text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.",
"title": ""
},
{
"docid": "b1d00c44127956ab703204490de0acd7",
"text": "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.",
"title": ""
},
{
"docid": "30799ad2796b9715fb70be87438edf64",
"text": "This study investigated the impact of introducing the Klein-Bell ADL Scale into a rehabilitation medicine service. A pretest and a posttest questionnaire of rehabilitation team members and a pretest and a posttest audit of occupational therapy documentation were completed. Results of the questionnaire suggested that the ADL scale influenced rehabilitation team members' observations in the combined area of occupational therapy involvement in self-care, improvement in the identification of treatment goals and plans, and communication between team members. Results of the audit suggested that the thoroughness and quantification of occupational therapy documentation improved. The clinical implications of these findings recommend the use of the Klein-Bell ADL Scale in rehabilitation services for improving occupational therapy documentation and for enhancing rehabilitation team effectiveness.",
"title": ""
},
{
"docid": "ca7deb4d72ceb8325861724722345a61",
"text": "a r t i c l e i n f o Synthesizing prior research, this paper designs a relatively comprehensive and holistic characterization of business analytics – one that serves as a foundation on which researchers, practitioners, and educators can base their studies of business analytics. As such, it serves as an initial ontology for business analytics as a field of study. The foundation has three main parts dealing with the whence and whither of business analytics: identification of dimensions along which business analytics possibilities can be examined, derivation of a six-class taxonomy that covers business analytics perspectives in the literature, and design of an inclusive framework for the field of business analytics. In addition to unifying the literature, a major contribution of the designed framework is that it can stimulate thinking about the nature, roles, and future of business analytics initiatives. We show how this is done by deducing a host of unresolved issues for consideration by researchers, practitioners, and educators. We find that business analytics involves issues quite aside from data management, number crunching, technology use, systematic reasoning, and so forth. According to a study by Gartner, the technology category of \" analyt-ics and business intelligence \" is the top priority of chief information officers, and comprises a $12.2B market [1]. It is seen as a higher priority than such categories as mobile technology, cloud computing, and collaboration technology. Further, Gartner finds that the top technology priority of chief financial officers is analytics [2]. Similarly, in studies involving interviews with thousands of chief information officers, worldwide, IBM asked, \" which visionary plan do you have to increase competitiveness over the next 3 to 5 years? \" In both 2011 and 2009, 83% of respondents identify \" Business Intelligence and Analytics \" as their number-one approach for achieving greater competitiveness. Among all types of plans, this is the top percentage for both years. To put this in perspective, consider 2011 results, in which business intelligence and analytics exceeds such other competitiveness plans as mobility solutions (ranked 2nd at74%), cloud computing (ranked 4th at 60%), and social networking (ranked 8th at 55%) [3]. IDC reports that the business analytics software market grew by 13.8% during 2011 to $32B, and predicts it to be at $50.7B in revenue by 2016 [4,5]. It appears that a driver for this growth is the perception or realization that such investments yield value. Across a …",
"title": ""
},
{
"docid": "b11c04a5aacac0d369c636b1fad47570",
"text": "Draft of textbook chapter on neural machine translation. a comprehensive treatment of the topic, ranging from introduction to neural networks, computation graphs, description of the currently dominant attentional sequence-to-sequence model, recent refinements, alternative architectures and challenges. Written as chapter for the textbook Statistical Machine Translation. Used in the JHU Fall 2017 class on machine translation.",
"title": ""
}
] |
scidocsrr
|
d595ef58f73a030560796a1205340d7c
|
Network of Information (NetInf) - An information-centric networking architecture
|
[
{
"docid": "1b3afef7a857d436635a3de056559e1f",
"text": "This paper presents Haggle, an architecture for mobile devices that enables seamless network connectivity and application functionality in dynamic mobile environments. Current applications must contain significant network binding and protocol logic, which makes them inflexible to the dynamic networking environments facing mobile devices. Haggle allows separating application logic from transport bindings so that applications can be communication agnostic. Internally, the Haggle framework provides a mechanism for late-binding interfaces, names, protocols, and resources for network communication. This separation allows applications to easily utilize multiple communication modes and methods across infrastructure and infrastructure-less environments. We provide a prototype implementation of the Haggle framework and evaluate it by demonstrating support for two existing legacy applications, email and web browsing. Haggle makes it possible for these applications to seamlessly utilize mobile networking opportunities both with and without infrastructure.",
"title": ""
}
] |
[
{
"docid": "5d7372b7b66ff94203d492dc92c792b1",
"text": "Trust in others’ honesty is a key component of the long-term performance of firms, industries, and even whole countries. However, in recent years, numerous scandals involving fraud have undermined confidence in the financial industry. Contemporary commentators have attributed these scandals to the financial sector’s business culture, but no scientific evidence supports this claim. Here we show that employees of a large, international bank behave, on average, honestly in a control condition. However, when their professional identity as bank employees is rendered salient, a significant proportion of them become dishonest. This effect is specific to bank employees because control experiments with employees from other industries and with students show that they do not become more dishonest when their professional identity or bank-related items are rendered salient. Our results thus suggest that the prevailing business culture in the banking industry weakens and undermines the honesty norm, implying that measures to re-establish an honest culture are very important.",
"title": ""
},
{
"docid": "114d6c97f19bc29152ecda8fa2447f63",
"text": "The game of Bridge provides a number of research areas to AI researchers due to the many components that constitute the game. Bidding provides the subtle challenge of potential outcome maximization while learning through information gathering, but constrained to a limited rule set. Declarer play can be accomplished through planning and inference. Both the bidding and the play can also be accomplished through Monte Carlo analysis using a perfect information solver. Double-dummy play is a perfect information search, but over an enormous state-space, and thus requires α-β pruning, transposition tables and other tree-minimization techniques. As such, researchers have made much progress in each of these sub-fields over the years, particularly double-dummy play, but are yet to produce a consistent expert level player.",
"title": ""
},
{
"docid": "f82ca8db3c8183839e4a91f1fd6b45a9",
"text": "Recently, we developed a series of cytotoxic peptide conjugates containing 14-O-glutaryl esters of doxorubicin (DOX) or 2-pyrrolino-DOX (AN-201). Serum carboxylesterase enzymes (CE) can partially hydrolyze these conjugates in the circulation, releasing the cytotoxic radical, before the targeting is complete. CE activity in serum of nude mice is about 10 times higher than in human serum. Thus, we found that the t(1/2) of AN-152, an analog of luteinizing hormone-releasing hormone (LH-RH) containing DOX, at 0.3 mg/ml is 19. 49 +/- 0.74 min in mouse serum and 126.06 +/- 3.03 min in human serum in vitro. The addition of a CE inhibitor, diisopropyl fluorophosphate (DFP), to mouse serum in vitro significantly (P < 0. 01) prolongs the t(1/2) of AN-152 to 69.63 +/- 4.44 min. When DFP is used in vivo, 400 nmol/kg cytotoxic somatostatin analog AN-238 containing AN-201 is well tolerated by mice, whereas all animals die after the same dose without DFP. In contrast, DFP has no effect on the tolerance of AN-201. A better tolerance to AN-238 after DFP treatment is due to the selective uptake of AN-238 by somatostatin receptor-positive tissues. Our results demonstrate that the suppression of the CE activity in nude mice greatly decreases the toxicity of cytotoxic hybrids containing 2-pyrrolino-DOX 14-O-hemiglutarate and brings this animal model closer to the conditions that exist in humans. The use of DFP together with these peptide conjugates in nude mice permits a better understanding of their mechanism of action and improves the clinical predictability of the oncological and toxicological results.",
"title": ""
},
{
"docid": "bd5ffb0b8945077e46ef28e0e068a0a7",
"text": "Creating a collection of metadata records from disparate and diverse sources often results in uneven, unreliable and variable quality subject metadata. Having uniform, consistent and enriched subject metadata allows users to more easily discover material, browse the collection, and limit keyword search results by subject. We demonstrate how statistical topic models are useful for subject metadata enrichment. We describe some of the challenges of metadata enrichment on a huge scale (10 million metadata records from 700 repositories in the OAIster Digital Library) when the metadata is highly heterogeneous (metadata about images and text, and both cultural heritage material and scientific literature). We show how to improve the quality of the enriched metadata, using both manual and statistical modeling techniques. Finally, we discuss some of the challenges of the production environment, and demonstrate the value of the enriched metadata in a prototype portal.",
"title": ""
},
{
"docid": "0591acdb82c352362de74d6daef10539",
"text": "In this paper we report on our ongoing studies around the application of Augmented Reality methods to support the order picking process of logistics applications. Order picking is the gathering of goods out of a prepared range of items following some customer orders. We named the visual support of this order picking process using Head-mounted Displays “Pick-by-Vision”. This work presents the case study of bringing our previously developed Pickby-Vision system from the lab to an experimental factory hall to evaluate it under more realistic conditions. This includes the execution of two user studies. In the first one we compared our Pickby-Vision system with and without tracking to picking using a paper list to check picking performance and quality in general. In a second test we had subjects using the Pick-by-Vision system continuously for two hours to gain in-depth insight into the longer use of our system, checking user strain besides the general performance. Furthermore, we report on the general obstacles of trying to use HMD-based AR in an industrial setup and discuss our observations of user behaviour.",
"title": ""
},
{
"docid": "4fa68f011f7cb1b4874dd4b10070be17",
"text": "This paper demonstrates the development of ontology for satellite databases. First, I create a computational ontology for the Union of Concerned Scientists (UCS) Satellite Database (UCSSD for short), called the UCS Satellite Ontology (or UCSSO). Second, in developing UCSSO I show that The Space Situational Awareness Ontology (SSAO)-—an existing space domain reference ontology—-and related ontology work by the author (Rovetto 2015, 2016) can be used either (i) with a database-specific local ontology such as UCSSO, or (ii) in its stead. In case (i), local ontologies such as UCSSO can reuse SSAO terms, perform term mappings, or extend it. In case (ii), the author_s orbital space ontology work, such as the SSAO, is usable by the UCSSD and organizations with other space object catalogs, as a reference ontology suite providing a common semantically-rich domain model. The SSAO, UCSSO, and the broader Orbital Space Environment Domain Ontology project is online at https://purl.org/space-ontology and GitHub. This ontology effort aims, in part, to provide accurate formal representations of the domain for various applications. Ontology engineering has the potential to facilitate the sharing and integration of satellite data from federated databases and sensors for safer spaceflight.",
"title": ""
},
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "13ffc17fe344471e96ada190493354d8",
"text": "The role of inflammation in the pathogenesis of type 2 diabetes and associated complications is now well established. Several conditions that are driven by inflammatory processes are also associated with diabetes, including rheumatoid arthritis, gout, psoriasis and Crohn's disease, and various anti-inflammatory drugs have been approved or are in late stages of development for the treatment of these conditions. This Review discusses the rationale for the use of some of these anti-inflammatory treatments in patients with diabetes and what we could expect from their use. Future immunomodulatory treatments may not target a specific disease, but could instead act on a dysfunctional pathway that causes several conditions associated with the metabolic syndrome.",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "787142b32caa7bc7e359e8f8dc2ae0d3",
"text": "Enthesitis is increasingly recognized as an important component in psoriatic arthritis (PsA). Improved imaging techniques have expanded our understanding of the role of enthesitis in PsA and provided methods for earlier detection and assessment. Increased knowledge about the extent of tendon and ligament involvement has led to the theory that enthesitis may be the primary event in PsA. Given the historical difficulties in detecting and measuring enthesitis, its inclusion as an endpoint in PsA trials has been limited. Current trial data suggest that tumour necrosis factor inhibitors can successfully treat PsA-related enthesitis, which may have implications for the long-term prognosis of PsA. In this article, we review methods for detecting and assessing enthesitis, current thinking regarding the role of enthesitis in the pathogenesis of PsA, and trial evidence for the treatment of PsA and, therefore, enthesitis.",
"title": ""
},
{
"docid": "1ebf2152d5624261951bebd68c306d5e",
"text": "A dual active bridge (DAB) is a zero-voltage switching (ZVS) high-power isolated dc-dc converter. The development of a 15-kV SiC insulated-gate bipolar transistor switching device has enabled a noncascaded medium voltage (MV) isolated dc-dc DAB converter. It offers simple control compared to a cascaded topology. However, a compact-size high frequency (HF) DAB transformer has significant parasitic capacitances for such voltage. Under high voltage and high dV/dT switching, the parasitics cause electromagnetic interference and switching loss. They also pose additional challenges for ZVS. The device capacitance and slowing of dV/dT play a major role in deadtime selection. Both the deadtime and transformer parasitics affect the ZVS operation of the DAB. Thus, for the MV-DAB design, the switching characteristics of the devices and MV HF transformer parasitics have to be closely coupled. For the ZVS mode, the current vector needs to be between converter voltage vectors with a certain phase angle defined by deadtime, parasitics, and desired converter duty ratio. This paper addresses the practical design challenges for an MV-DAB application.",
"title": ""
},
{
"docid": "26f957036ead7173f93ec16a57097a50",
"text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.",
"title": ""
},
{
"docid": "f2f43e7087d3506a848849b64b062954",
"text": "We present an Adaptive User Interface (AUI) for online courses in higher education as a method for solving the challenges posed by the different knowledge levels in a heterogeneous group of students. The scenario described in this paper is an online beginners' course in Mathematics which is extended by an adaptive course layout to better fit the needs of every individual student. The course offers an entry-level test to check each student's prior knowledge and skills. The results are used to automatically determine which parts of the course are relevant for the student and which ones can be hidden, based on parameters set by the course teachers. Initial results are promising; the new adaptive learning platform in mathematics is leading to higher student satisfaction and better performance.",
"title": ""
},
{
"docid": "0f8183f5781e26208da631978d0f610b",
"text": "Historically, games have been played between human opponents. However, with the advent of the computer came the notion that one might play with or against a computational surrogate. Dating back to the 1950s with early efforts in computer chess, approaches to game artificial intelligence (AI) have been designed around adversarial, or zero-sum, games. The goal of intelligent game-playing agents in these cases is to maximize their payoff. Simply put, they are designed to win the game. Central to the vast majority of techniques in AI is the notion of optimality, implying that the best performing techniques seek to find the solution to a problem that will result in the highest (or lowest) possible evaluation of some mathematical function. In adversarial games, this function typically evaluates to symmetric values such as +1 when the game is won and -1 when the game is lost. That is, winning or losing the game is an outcome or an end. While there may be a long sequence of actions that actually determine who wins or loses the game, for all intents and purposes, it is a single, terminal event that is evaluated and “maximized.” In recent years, similar approaches have been applied to newer game genres: real-time strategy, first person shooters, role-playing games, and other games in which the player is immersed in a virtual world. Despite the relative complexities of these environments compared to chess, the fundamental goals of the AI agents remain the same: to win the game. There is another perspective on game AI often advocated by developers of modern games: AI is a tool for increasing engagement and enjoyability. With this perspective in mind, game developers often take steps to “dumb down” the AI game playing agents by limiting their computational resources (Liden, 2003) or making suboptimal moves (West, 2008) such as holding back an attack until the player is ready or “rubber banding” to force strategic drawbacks if the AI ever gets the upper hand. The gameplaying agent is adversarial but is intentionally designed in an ad hoc manner to be non-competitive to make the player feel powerful.",
"title": ""
},
{
"docid": "8ae257994c6f412ceb843fcb98a67043",
"text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help finding better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experiment results show that the IDM model learns better topics than state-of-the-art topic models.",
"title": ""
},
{
"docid": "351c3696e70f93f221d2e3bb6ed6825c",
"text": "To meet the increasing requirements of HCI researchers who are looking into using liquid-based materials (e.g., hydrogels) to create novel interfaces, we present a design strategy for HCI researchers to build and customize a liquid-based smart material printing platform with off-the-shelf or easy-to-machine parts. For the hardware, we suggest a magnetic assembly-based modular design. These modularized parts can be easily and precisely reconfigured with off-the-shelf or easy-to-machine parts that can meet different processing requirements such as mechanical mixing, chemical reaction, light activation, and solution vaporization. In addition, xPrint supports an open-source, highly customizable software design and simulation platform, which is applicable for simulating and facilitating smart material constructions. Furthermore, compared to inkjet or pneumatic syringe-based printing systems, xPrint has a large range of printable materials from synthesized polymers to natural micro-organism-living cells with a printing resolution from 10μm up to 5mm (droplet size). In this paper, we will introduce the system design in detail and three use cases to demonstrate the material variability and the customizability for users with different demands (e.g., designers, scientific researchers, or artists).",
"title": ""
},
{
"docid": "ffc9a5b907f67e1cedd8f9ab0b45b869",
"text": "In this brief, we study the design of a feedback and feedforward controller to compensate for creep, hysteresis, and vibration effects in an experimental piezoactuator system. First, we linearize the nonlinear dynamics of the piezoactuator by accounting for the hysteresis (as well as creep) using high-gain feedback control. Next, we model the linear vibrational dynamics and then invert the model to find a feedforward input to account vibration - this process is significantly easier than considering the complete nonlinear dynamics (which combines hysteresis and vibration effects). Afterwards, the feedforward input is augmented to the feedback-linearized system to achieve high-precision highspeed positioning. We apply the method to a piezoscanner used in an experimental atomic force microscope to demonstrate the method's effectiveness and we show significant reduction of both the maximum and root-mean-square tracking error. For example, high-gain feedback control compensates for hysteresis and creep effects, and in our case, it reduces the maximum error (compared to the uncompensated case) by over 90%. Then, at relatively high scan rates, the performance of the feedback controlled system can be improved by over 75% (i.e., reduction of maximum error) when the inversion-based feedforward input is integrated with the high-gain feedback controlled system.",
"title": ""
},
{
"docid": "9dfaf1984bbe52394e115509c340be4d",
"text": "Internet of Things (IoT) can be thought of as the next big step in internet technology. It is enabled by the latest developments in communication technologies and internet protocols. This paper surveys IoT in respect of layer architecture, enabling technologies, related protocols and challenges.",
"title": ""
},
{
"docid": "325b97e73ea0a50d2413757e95628163",
"text": "Due to the recent advancement in procedural generation techniques, games are presenting players with ever growing cities and terrains to explore. However most sandbox-style games situated in cities, do not allow players to wander into buildings. In past research, space planning techniques have already been utilized to generate suitable layouts for both building floor plans and room layouts. We introduce a novel rule-based layout solving approach, especially suited for use in conjunction with procedural generation methods. We show how this solving approach can be used for procedural generation by providing the solver with a userdefined plan. In this plan, users can specify objects to be placed as instances of classes, which in turn contain rules about how instances should be placed. This approach gives us the opportunity to use our generic solver in different procedural generation scenarios. In this paper, we will illustrate mainly with interior generation examples.",
"title": ""
},
{
"docid": "517abd2ff0ed007c5011059d055e19e1",
"text": "Long Short-Term Memory (LSTM) is a particular type of recurrent neural network (RNN) that can model long term temporal dynamics. Recently it has been shown that LSTM-RNNs can achieve higher recognition accuracy than deep feed-forword neural networks (DNNs) in acoustic modelling. However, speaker adaption for LSTM-RNN based acoustic models has not been well investigated. In this paper, we study the LSTM-RNN speaker-aware training that incorporates the speaker information during model training to normalise the speaker variability. We first present several speaker-aware training architectures, and then empirically evaluate three types of speaker representation: I-vectors, bottleneck speaker vectors and speaking rate. Furthermore, to factorize the variability in the acoustic signals caused by speakers and phonemes respectively, we investigate the speaker-aware and phone-aware joint training under the framework of multi-task learning. In AMI meeting speech transcription task, speaker-aware training of LSTM-RNNs reduces word error rates by 6.5% relative to a very strong LSTM-RNN baseline, which uses FMLLR features.",
"title": ""
}
] |
scidocsrr
|
959d31b6136a86af2a19a7e380fe83cd
|
Concrete Dropout
|
[
{
"docid": "6952a28e63c231c1bfb43391a21e80fd",
"text": "Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.",
"title": ""
},
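The thesis abstract above centers on dropout as approximate Bayesian inference. As a rough illustration of how that idea is usually exercised in practice, the sketch below keeps dropout active at prediction time and averages several stochastic forward passes to obtain a predictive mean and a spread. It assumes PyTorch; the network shape, dropout rate, and number of samples are illustrative choices, not taken from the thesis.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """Small regression net with dropout after each hidden layer (sizes are illustrative)."""
    def __init__(self, d_in=10, d_hidden=64, d_out=1, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, x):
        return self.net(x)

def mc_predict(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout on at test time and average stochastic passes."""
    model.train()  # train mode leaves the dropout layers active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # predictive mean and an uncertainty proxy

if __name__ == "__main__":
    model = MCDropoutNet()          # untrained here; in practice fit it first
    x = torch.randn(5, 10)          # dummy inputs
    mean, std = mc_predict(model, x)
    print(mean.shape, std.shape)    # torch.Size([5, 1]) torch.Size([5, 1])
```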
{
"docid": "4818e47ceaec70457701649832fb90c4",
"text": "Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let &thgr; be the fraction of jobs routed to the first I/O device, so that 1 - &thgr; is the fraction routed to the second. Suppose that α = α(&thgr;) is the steady-sate amount of time that a job spends in the system. Given that &thgr; is a decision variable, a designer might wish to minimize α(&thgr;) over &thgr;. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.\n In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we will present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood-ratio-gradient estimators for both time-homogeneous and non-time homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes the performance measure that defines α(&thgr;) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.",
"title": ""
}
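To make the likelihood ratio (score-function) idea from the abstract above concrete, here is a minimal NumPy sketch on a toy problem the article does not discuss: estimating d/dθ E[f(X)] for X ~ Exponential(rate θ) with f(x) = x², where the exact gradient -4/θ³ is available for checking. The distribution, objective, and sample size are assumptions made purely for illustration.

```python
import numpy as np

def lr_gradient(theta, n=200_000, seed=0):
    """Likelihood ratio estimate of d/dtheta E[f(X)], X ~ Exponential(rate=theta), f(x) = x**2."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(scale=1.0 / theta, size=n)  # samples from p(.; theta)
    f = x ** 2                                      # performance measure
    score = 1.0 / theta - x                         # d/dtheta log p(x; theta) for the exponential
    return float(np.mean(f * score))                # E[f(X) * score] approximates the gradient

theta = 2.0
print("likelihood ratio estimate:", lr_gradient(theta))  # close to -0.5
print("exact gradient           :", -4.0 / theta ** 3)   # d/dtheta (2 / theta**2) = -0.5
```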
] |
[
{
"docid": "e6922a113d619784bd902c06863b5eeb",
"text": "Brake Analysis and NVH (Noise, Vibration and Harshness) Optimization have become critically important areas of application in the Automotive Industry. Brake Noise and Vibration costs approximately $1Billion/year in warranty work in Detroit alone. NVH optimization is now increasingly being used to predict the vehicle tactile and acoustic responses in relation to the established targets for design considerations. Structural optimization coupled with frequency response analysis is instrumental in driving the design process so that the design targets are met in a timely fashion. Usual design targets include minimization of vehicle weight, the adjustment of fundamental eigenmodes and the minimization of acoustic pressure or vibration at selected vehicle locations. Both, Brake Analysis and NVH Optimization are computationally expensive analyses involving eigenvalue calculations. From a computational sense and the viewpoint of MSC.Nastran, brake analysis exercises the CEAD (Complex Eigenvalue Analysis Dmap) module, while NVH optimization invokes the DSADJ (Design Sensitivity using ADJoint method DMAP) module. In this paper, two automotive applications are presented to demonstrate the performance improvements of the CEAD and DSADJ modules on NEC vector-parallel supercomputers. Dramatic improvements in the DSADJ module resulting in approx. 8-9 fold performance improvement as compared to MSC.Nastran V70 were observed for NVH optimization. Also, brake simulations and experiences at General Motors will be presented. This analysis method has been successfully applied to 4 different programs at GM and the simulation results were consistent with laboratory experiments on test vehicles.",
"title": ""
},
{
"docid": "f7aac91b892013cfdc1302890cb7a263",
"text": "We study the problem of learning a generalizable action policy for an intelligent agent to actively approach an object of interest in indoor environment solely from its visual inputs. While scene-driven or recognition-driven visual navigation has been widely studied, prior efforts suffer severely from the limited generalization capability. In this paper, we first argue the object searching task is environment dependent while the approaching ability is general. To learn a generalizable approaching policy, we present a novel solution dubbed as GAPLE which adopts two channels of visual features: depth and semantic segmentation, as the inputs to the policy learning module. The empirical studies conducted on the House3D dataset as well as on a physical platform in a real world scenario validate our hypothesis, and we further provide indepth qualitative analysis.",
"title": ""
},
{
"docid": "6059cfa690c2de0a8c883aa741000f3a",
"text": "We study how a viewer can control a television set remotely by hand gestures. We address two fundamental issues of gesture{based human{computer interaction: (1) How can one communicate a rich set of commands without extensive user training and memorization of gestures? (2) How can the computer recognize the commands in a complicated visual environment? Our solution to these problems exploits the visual feedback of the television display. The user uses only one gesture: the open hand, facing the camera. He controls the television by moving his hand. On the display, a hand icon appears which follows the user's hand. The user can then move his own hand to adjust various graphical controls with the hand icon. The open hand presents a characteristic image which the computer can detect and track. We perform a normalized correlation of a template hand to the image to analyze the user's hand. A local orientation representation is used to achieve some robustness to lighting variations. We made a prototype of this system using a computer workstation and a television. The graphical overlays appear on the computer screen, although they could be mixed with the video to appear on the television. The computer controls the television set through serial port commands to an electronically controlled remote control. We describe knowledge we gained from building the prototype.",
"title": ""
},
{
"docid": "0b51889817aca2afd7c1c754aa47f7de",
"text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women. This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.",
"title": ""
},
{
"docid": "72feacaf7e0a860e72afec1b14b5c7e7",
"text": "In recent years, deep learning has been used extensively in a wide range of fields. In deep learning, Convolutional Neural Networks are found to give the most accurate results in solving real world problems. In this paper, we give a comprehensive summary of the applications of CNN in computer vision and natural language processing. We delineate how CNN is used in computer vision, mainly in face recognition, scene labelling, image classification, action recognition, human pose estimation and document analysis. Further, we describe how CNN is used in the field of speech recognition and text classification for natural language processing. We compare CNN with other methods to solve the same problem and explain why CNN is better than other methods. Keywords— Deep Learning, Convolutional Neural Networks, Computer Vision, Natural Language",
"title": ""
},
{
"docid": "a6ce059863bc504242dff00025791b01",
"text": "We examined allelic polymorphisms of the serotonin transporter (5-HTT) gene and antidepressant response to 6 weeks' treatment with the selective serotonin reuptake inhibitor (SSRI) drugs fluoxetine or paroxetine. We genotyped 120 patients and 252 normal controls, using polymerase chain reaction of genomic DNA with primers flanking the second intron and promoter regions of the 5-HTT gene. Diagnosis of depression was not associated with 5-HTT polymorphisms. Patients homozygous l/l in intron 2 or homozygous s/s in the promoter region showed better responses than all others (p < 0.0001, p = 0.0074, respectively). Lack of the l/l allele form in intron 2 most powerfully predicted non-response (83.3%). Response to SSRI drugs is related to allelic variation in the 5-HTT gene in depressed Korean patients.",
"title": ""
},
{
"docid": "6b2da7b4cd57371c9eaac129184df942",
"text": "Time series data are common in a variety of fields ranging from economics to medicine and manufacturing. As a result, time series analysis and modeling has become an active research area in statistics and data mining. In this paper, we focus on a type of change we call contextual time series change (CTC) and propose a novel two-stage algorithm to address it. In contrast to traditional change detection methods, which consider each time series separately, CTC is defined as a change relative to the behavior of a group of related time series. As a result, our proposed method is able to identify novel types of changes not found by other algorithms. We demonstrate the unique capabilities of our approach with several case studies on real-world datasets from the financial and Earth science domains.",
"title": ""
},
{
"docid": "53fcf4f5285b7a93d99d2c222dfe21dd",
"text": "OBJECTIVES\nTo determine whether the use of a near-infrared light venipuncture aid (VeinViewer; Luminetx Corporation, Memphis, Tenn) would improve the rate of successful first-attempt placement of intravenous (IV) catheters in a high-volume pediatric emergency department (ED).\n\n\nMETHODS\nPatients younger than 20 years with standard clinical indications for IV access were randomized to have IV placement by ED nurses (in 3 groups stratified by 5-year blocks of nursing experience) using traditional methods (standard group) or with the aid of the near-infrared light source (device group). If a vein could not be cannulated after 3 attempts, patients crossed over from one study arm to the other, and study nurses attempted placement with the alternative technique. The primary end point was first-attempt success rate for IV catheter placement. After completion of patient enrollment, a questionnaire was completed by study nurses as a qualitative assessment of the device.\n\n\nRESULTS\nA total of 123 patients (median age, 3 years) were included in the study: 62 in the standard group and 61 in the device group. There was no significant difference in first-attempt success rate between the standard (79.0%, 95% confidence interval [CI], 66.8%-88.3%) and device (72.1%, 95% CI, 59.2%-82.9%) groups. Of the 19 study nurses, 14 completed the questionnaire of whom 70% expressed neutral or unfavorable assessments of the device in nondehydrated patients without chronic underlying medical conditions and 90% found the device a helpful tool for patients in whom IV access was difficult.\n\n\nCONCLUSIONS\nFirst-attempt success rate for IV placement was nonsignificantly higher without than with the assistance of a near-infrared light device in a high-volume pediatric ED. Nurses placing IVs did report several benefits to use of the device with specific patient groups, and future research should be conducted to demonstrate the role of the device in these patients.",
"title": ""
},
{
"docid": "ef81266ae8c2023ea35dca8384db3803",
"text": "Linked Open Data has been recognized as a useful source of background knowledge for building content-based recommender systems. Vast amount of RDF data, covering multiple domains, has been published in freely accessible datasets. In this paper, we present an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs used for building content-based recommender system. We generate sequences by leveraging local information from graph sub-structures and learn latent numerical representations of entities in RDF graphs. Our evaluation on two datasets in the domain of movies and books shows that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be effectively used in content-based recommender systems.",
"title": ""
},
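A rough sketch of the idea summarized above, not the authors' code: graph walks are serialized into token sequences and fed to a word2vec-style model so that entities end up with latent vectors. The toy graph, walk parameters, and hyperparameters are assumptions, and the gensim 4.x Word2Vec signature (vector_size, epochs) is assumed to be available.

```python
import random
from gensim.models import Word2Vec

# Tiny stand-in for an RDF graph: subject -> list of (predicate, object) edges.
graph = {
    "dbr:The_Matrix": [("dbo:director", "dbr:Wachowskis"), ("dbo:genre", "dbr:SciFi")],
    "dbr:Inception":  [("dbo:director", "dbr:Nolan"), ("dbo:genre", "dbr:SciFi")],
    "dbr:Wachowskis": [("dbo:knownFor", "dbr:The_Matrix")],
    "dbr:Nolan":      [("dbo:knownFor", "dbr:Inception")],
    "dbr:SciFi":      [],
}

def random_walks(graph, walks_per_node=20, depth=4, seed=1):
    """Serialize local graph sub-structures into 'sentences' of entity and predicate tokens."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(depth):
                edges = graph.get(node, [])
                if not edges:
                    break
                pred, obj = rng.choice(edges)
                walk += [pred, obj]
                node = obj
            walks.append(walk)
    return walks

model = Word2Vec(random_walks(graph), vector_size=32, window=4, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("dbr:The_Matrix", topn=2))  # entities nearby in the learned space
```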
{
"docid": "91d59b5e08c711e25d83785c198d9ae1",
"text": "The increase in the wireless users has led to the spectrum shortage problem. Federal Communication Commission (FCC) showed that licensed spectrum bands are underutilized, specially TV bands. The IEEE 802.22 standard was proposed to exploit these white spaces in the (TV) frequency spectrum. Cognitive Radio allows unlicensed users to use licensed bands while safeguarding the priority of licensed users. Cognitive Radio is composed of two types of users, licensed users also known as Primary Users(PUs) and unlicensed users also known as Secondary Users(SUs).SUs use the resources when spectrum allocated to PU is vacant, as soon as PU become active, the SU has to leave the channel for PU. Hence the opportunistic access is provided by CR to SUs whenever the channel is vacant. Cognitive Users sense the spectrum continuously and share this sensing information to other SUs, during this spectrum sensing, the network is vulnerable to so many attacks. One of these attacks is Primary User Emulation Attack (PUEA), in which the malicious secondary users can mimic the characteristics of primary users thereby causing legitimate SUs to erroneously identify the attacker as a primary user, and to gain access to wireless channels. PUEA is of two types: Selfish and Malicious attacker. A selfish attacker aims in stealing Bandwidth form legitimate SUs for its own transmissions while malicious attacker mimic the characteristics of PU.",
"title": ""
},
{
"docid": "320bd26aa73ca080de8ba1da70809ee3",
"text": "Attention-based sequence-to-sequence model has proved successful in Neural Machine Translation (NMT). However, the attention without consideration of decoding history, which includes the past information in the decoder and the attention mechanism, often causes much repetition. To address this problem, we propose the decoding-history-based Adaptive Control of Attention (ACA) for the NMT model. ACA learns to control the attention by keeping track of the decoding history and the current information with a memory vector, so that the model can take the translated contents and the current information into consideration. Experiments on Chinese-English translation and the EnglishVietnamese translation have demonstrated that our model significantly outperforms the strong baselines. The analysis shows that our model is capable of generating translation with less repetition and higher accuracy. The code will be available at https://github.com/lancopku",
"title": ""
},
{
"docid": "ba4260598a634bcfdfb7423182c4c8b6",
"text": "A wide range of computational methods and tools for data analysis are available. In this study we took advantage of those available technological advancements to develop prediction models for the prediction of a Type-2 Diabetic Patient. We aim to investigate how the diabetes incidents are affected by patients’ characteristics and measurements. Efficient predictive modeling is required for medical researchers and practitioners. This study proposes Hybrid Prediction Model (HPM) which uses Simple K-means clustering algorithm aimed at validating chosen class label of given data (incorrectly classified instances are removed, i.e. pattern extracted from original data) and subsequently applying the classification algorithm to the result set. C4.5 algorithm is used to build the final classifier model by using the k-fold cross-validation method. The Pima Indians diabetes data was obtained from the University of California at Irvine (UCI) machine learning repository datasets. A wide range of different classification methods have been applied previously by various researchers in order to find the best performing algorithm on this dataset. The accuracies achieved have been in the range of 59.4–84.05%. However the proposed HPM obtained a classification accuracy of 92.38%. In order to evaluate the performance of the proposed method, sensitivity and specificity performance measures that are used commonly in medical classification studies were used. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
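As a hedged illustration of the two-stage scheme described above, the sketch below clusters the data, drops instances whose labels disagree with their cluster's majority class, and then evaluates a decision tree with k-fold cross-validation. It assumes scikit-learn, substitutes a synthetic dataset for the Pima data, and uses DecisionTreeClassifier (CART) as a stand-in for C4.5, so it should be read as an approximation of the reported pipeline rather than a reproduction of it.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data standing in for the Pima Indians set.
X, y = make_classification(n_samples=768, n_features=8, random_state=0)

# Stage 1: cluster, give each cluster its majority class, and drop instances whose own
# label disagrees with that majority (treated here as noisy / mislabeled patterns).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
keep = np.ones(len(y), dtype=bool)
for c in np.unique(clusters):
    in_cluster = clusters == c
    majority = np.bincount(y[in_cluster]).argmax()
    keep[in_cluster & (y != majority)] = False

# Stage 2: build the decision-tree classifier on the filtered data with k-fold cross-validation.
tree = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(tree, X[keep], y[keep], cv=10)
print(f"kept {keep.sum()}/{len(y)} instances, mean CV accuracy {scores.mean():.3f}")
```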
{
"docid": "18969bed489bb9fa7196634a8086449e",
"text": "A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.",
"title": ""
},
{
"docid": "1b656c70d5ccd8fffc78242a07f650fd",
"text": "Semantic image parsing, which refers to the process of decomposing images into semantic regions and constructing the structure representation of the input, has recently aroused widespread interest in the field of computer vision. The recent application of deep representation learning has driven this field into a new stage of development. In this paper, we summarize three aspects of the progress of research on semantic image parsing, i.e., category-level semantic segmentation, instance-level semantic segmentation, and beyond segmentation. Specifically, we first review the general frameworks for each task and introduce the relevant variants. The advantages and limitations of each method are also discussed. Moreover, we present a comprehensive comparison of different benchmark datasets and evaluation metrics. Finally, we explore the future trends and challenges of semantic image parsing.",
"title": ""
},
{
"docid": "1a41bd991241ed1751beda2362465a0d",
"text": "Over the last decade, Convolutional Neural Networks (CNN) saw a tremendous surge in performance. However, understanding what a network has learned still proves to be a challenging task. To remedy this unsatisfactory situation, a number of groups have recently proposed different methods to visualize the learned models. In this work we suggest a general taxonomy to classify and compare these methods, subdividing the literature into three main categories and providing researchers with a terminology to base their works on. Furthermore, we introduce the FeatureVis library for MatConvNet: an extendable, easy to use open source library for visualizing CNNs. It contains implementations from each of the three main classes of visualization methods and serves as a useful tool for an enhanced understanding of the features learned by intermediate layers, as well as for the analysis of why a network might fail for certain examples.",
"title": ""
},
{
"docid": "2e20202f0d5e0ab315f9471f3c0e9877",
"text": "A sudden comprehension that solves a problem, reinterprets a situation, explains a joke, or resolves an ambiguous percept is called an insight (i.e., the ‘‘Aha! moment’’). Psychologists have studied insight using behavioral methods for nearly a century. Recently, the tools of cognitive neuroscience have been applied to this phenomenon. A series of studies have used electroencephalography (EEG) and functionalmagnetic resonance imaging (fMRI) to study the neural correlates of the ‘‘Aha! moment’’ and its antecedents. Although the experience of insight is sudden and can seem disconnected from the immediately preceding thought, these studies show that insight is the culmination of a series of brain states and processes operating at different time scales. Elucidation of these precursors suggests interventional opportunities for the facilitation of insight. KEYWORDS—Aha!moment; creativity; EEG; fMRI; insight; neuroimaging; problem solving Insight is a sudden comprehension—colloquially called the ‘‘Aha! moment’’—that can result in a new interpretation of a situation and that can point to the solution to a problem (Sternberg &Davidson, 1995). Insights are often the result of the reorganization or restructuring of the elements of a situation or problem, though an insight may occur in the absence of any preexisting interpretation. For several reasons, insight is an important phenomenon. First, it is a form of cognition that occurs in a number of domains. For example, aside from yielding the solution to a problem, insight can also yield the understanding of a joke or metaphor, the identification of an object in an ambiguous or blurry picture, or a realization about oneself. Second, insight contrasts with the deliberate, conscious search strategies that have been the focus of most research on problem solving (Ericsson & Simon, 1993); instead, insights occur when a solution is computed unconsciously and later emerges into awareness suddenly (Bowden & Jung-Beeman, 2003a; Smith & Kounios, 1996). Third, because insight involves a conceptual reorganization that results in a new, nonobvious interpretation, it is often identified as a form of creativity (Friedman & Förster, 2005). Fourth, insights can result in important innovations. Understanding the mechanisms that make insights possible may lead to methods for facilitating innovation. AN APPROACH TO STUDYING INSIGHT In our studies, we have used electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to examine processes that would be difficult to detect using behavioral measurements alone. EEG has the benefit of high temporal resolution; fMRI complements EEGby affording the high spatial resolution necessary for precise localization of brain activity. We used a type of problem called compound remote associates (Bowden & Jung-Beeman, 2003b) that affords two advantages. When a participant solves one of these problems, he or she can typically do so within 10 seconds; much longer time is often needed to solve classic insight problems (Fleck & Weisberg, 2004). This relatively short solution time allowed us to produce the large number of trials necessary for EEG and fMRI. In addition, compound-remote-associates problems can be solved either with or without insight, enabling researchers to compare insight and analytic solving without changing the type of problem. 
In our experiments, compound remote associates that were solved by insight and by analytic processing were sorted according to participants’ trial-by-trial judgments of how the solution entered awareness—suddenly for insight, incrementally for analytic processing. Each compound-remote-associates problem consists of three words (e.g., crab, pine, sauce). Participants are instructed to think of a single word that can form a compound or familiar twoword phrase with each of the three problemwords (e.g., apple can join with crab, pine, and sauce to form pineapple, crabapple, and applesauce). As soon as participants think of the solution word, Address correspondence to John Kounios, Department of Psychology, DrexelUniversity, 245N. 15 Street,Mail Stop 626, Philadelphia, PA 19102-1192, e-mail: john.kounios@gmail.com; or Mark Beeman, Department of Psychology, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208-2710, e-mail: mjungbee@northwestern.edu. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 210 Volume 18—Number 4 Copyright r 2009 Association for Psychological Science they press a button as quickly as possible. Participants are instructed to respond immediately and not take any time to verify this solution. They are then prompted to verbalize the solution and then to press a button to indicate whether that solution had popped into awareness suddenly (insight) or whether the solution had resulted from a more methodical hypothesis-testing approach. An example of a methodical strategy for solving the problem would be to start with crab and generate associates of this word, such as cake.Crabcake is an acceptable compound, as is applecake. But pinecake and cakepine are both unacceptable, leading to the rejection of cake as a potential solution. One might then try grass.Crabgrass is acceptable, but neither pinegrass nor applegrass works—and so on. Participants in our studies immediately and intuitively understood the distinction between sudden insight and methodical solving. NEURAL CORRELATES OF THE ‘‘AHA! MOMENT’’ Our first neuroimaging study included separate EEG and fMRI experiments that examined brain activity during a time interval beginning shortly before the derivation of the solution (JungBeeman et al., 2004). Brain activity corresponding to analytic G am m a Po w er 1.7e-10",
"title": ""
},
{
"docid": "a274e05ba07259455d0e1fef57f2c613",
"text": "Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover images. The Least Significant Bit (LSB) steganography that replaces the least significant bits of the host medium is a widely used technique with low computational complexity and high insertion capacity. Although it has good perceptual transparency, it is vulnerable to steganalysis which is based on histogram analysis. In all the existing schemes detection of a secret message in a cover image can be easily detected from the histogram analysis and statistical analysis. Therefore developing new LSB steganography algorithms against statistical and histogram analysis is the prime requirement.",
"title": ""
},
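For reference, the baseline scheme whose detectability the abstract above discusses can be written in a few lines. This is a generic LSB embed/extract sketch in NumPy, not any particular paper's algorithm; the grayscale cover image and the message-length handling are illustrative assumptions.

```python
import numpy as np

def embed_lsb(cover, message: bytes):
    """Overwrite the least significant bits of an 8-bit cover image with the message bits."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    stego = cover.flatten()  # flatten() returns a copy, so the cover stays untouched
    if bits.size > stego.size:
        raise ValueError("message too long for this cover image")
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    """Read back n_bytes worth of least significant bits."""
    bits = (stego.flatten()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"hello")
print(extract_lsb(stego, 5))  # b'hello'
```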
{
"docid": "84ba070a14da00c37de479e62e78f126",
"text": "The EEG (Electroencephalogram) signal indicates the electrical activity of the brain. They are highly random in nature and may contain useful information about the brain state. However, it is very difficult to get useful information from these signals directly in the time domain just by observing them. They are basically non-linear and nonstationary in nature. Hence, important features can be extracted for the diagnosis of different diseases using advanced signal processing techniques. In this paper the effect of different events on the EEG signal, and different signal processing methods used to extract the hidden information from the signal are discussed in detail. Linear, Frequency domain, time - frequency and non-linear techniques like correlation dimension (CD), largest Lyapunov exponent (LLE), Hurst exponent (H), different entropies, fractal dimension(FD), Higher Order Spectra (HOS), phase space plots and recurrence plots are discussed in detail using a typical normal EEG signal.",
"title": ""
},
{
"docid": "ecad03ca039000bdefe2ef70d5b65ec1",
"text": "BACKGROUND\nThe effectiveness of complex interventions, as well as their success in reaching relevant populations, is critically influenced by their implementation in a given context. Current conceptual frameworks often fail to address context and implementation in an integrated way and, where addressed, they tend to focus on organisational context and are mostly concerned with specific health fields. Our objective was to develop a framework to facilitate the structured and comprehensive conceptualisation and assessment of context and implementation of complex interventions.\n\n\nMETHODS\nThe Context and Implementation of Complex Interventions (CICI) framework was developed in an iterative manner and underwent extensive application. An initial framework based on a scoping review was tested in rapid assessments, revealing inconsistencies with respect to the underlying concepts. Thus, pragmatic utility concept analysis was undertaken to advance the concepts of context and implementation. Based on these findings, the framework was revised and applied in several systematic reviews, one health technology assessment (HTA) and one applicability assessment of very different complex interventions. Lessons learnt from these applications and from peer review were incorporated, resulting in the CICI framework.\n\n\nRESULTS\nThe CICI framework comprises three dimensions-context, implementation and setting-which interact with one another and with the intervention dimension. Context comprises seven domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, ethical, legal, political); implementation consists of five domains (i.e., implementation theory, process, strategies, agents and outcomes); setting refers to the specific physical location, in which the intervention is put into practise. The intervention and the way it is implemented in a given setting and context can occur on a micro, meso and macro level. Tools to operationalise the framework comprise a checklist, data extraction tools for qualitative and quantitative reviews and a consultation guide for applicability assessments.\n\n\nCONCLUSIONS\nThe CICI framework addresses and graphically presents context, implementation and setting in an integrated way. It aims at simplifying and structuring complexity in order to advance our understanding of whether and how interventions work. The framework can be applied in systematic reviews and HTA as well as primary research and facilitate communication among teams of researchers and with various stakeholders.",
"title": ""
},
{
"docid": "546296aecaee9963ee7495c9fbf76fd4",
"text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.",
"title": ""
}
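A simplified sketch of the sentence-scoring idea described above, with the genetic-algorithm feature weighting omitted: each sentence is scored by its cosine similarity to the other sentences and to the title, and the top-ranked sentences are extracted. TF-IDF vectors via scikit-learn stand in for the paper's characteristic word vectors; the example document is invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(title, sentences, n_keep=2):
    """Score sentences by cosine similarity to the other sentences and the title; keep the top ones."""
    vec = TfidfVectorizer().fit(sentences + [title])
    S = vec.transform(sentences)
    t = vec.transform([title])
    sent_vs_sent = cosine_similarity(S)              # pairwise sentence similarities
    sent_vs_title = cosine_similarity(S, t).ravel()  # similarity of each sentence to the title
    np.fill_diagonal(sent_vs_sent, 0.0)              # ignore self-similarity
    scores = sent_vs_sent.mean(axis=1) + sent_vs_title
    top = np.argsort(scores)[::-1][:n_keep]
    return [sentences[i] for i in sorted(top)]       # preserve original sentence order

doc = [
    "The new battery doubles the phone's screen-on time.",
    "Reviewers praised the battery life in extended tests.",
    "The phone ships in three colors.",
]
print(summarize("Phone battery life", doc))
```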
] |
scidocsrr
|
f970e045521e41af22bcb2716fe7a745
|
Real-time 6-DOF monocular visual SLAM in a large-scale environment
|
[
{
"docid": "182cc1785fdd5b5d33d3253873c97683",
"text": "The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of the camera in the world reference frame from three 2D-3D point correspondences. This problem is known to provide up to four solutions that can then be disambiguated using a fourth point. All existing solutions attempt to first solve for the position of the points in the camera reference frame, and then compute the position and orientation of the camera in the world frame, which alignes the two point sets. In contrast, in this paper we propose a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame. This is made possible by introducing intermediate camera and world reference frames, and expressing their relative position and orientation using only two parameters. The projection of a world point into the parametrized camera pose then leads to two conditions and finally a quartic equation for finding up to four solutions for the parameter pair. A subsequent backsubstitution directly leads to the corresponding camera poses with respect to the world reference frame. We show that the proposed algorithm offers accuracy and precision comparable to a popular, standard, state-of-the-art approach but at much lower computational cost (15 times faster). Furthermore, it provides improved numerical stability and is less affected by degenerate configurations of the selected world points. The superior computational efficiency is particularly suitable for any RANSAC-outlier-rejection step, which is always recommended before applying PnP or non-linear optimization of the final solution.",
"title": ""
}
] |
[
{
"docid": "432ea666011ccf3b2fd0cb1d9eb1baa9",
"text": "A fully developed nomology for the study of games requires the development of explanatory theoretical constructs associated with validating observational techniques. Drawing from cognition sciences, a framework is proposed based upon the integration of schema theory with attention theory. Cognitive task analysis provides a foundation for preliminary schema descriptions, which can then be elaborated according to more detailed models of cognitive and attentional processes. The resulting theory provides a rich explanatory framework for the cognitive processes underlying game play, as well as detailed hypotheses for the hierarchical structure of pleasures and rewards motivating players. Game engagement is accounted for as a process of schema selection or development, while immersion is explained in terms of schema execution. This framework is being developed not only to explain the substructures of game play, but also to provide schema models that may inform game design processes and provide detailed criteria for the design of patterns of game features for entertainment, pedagogical and therapeutic purposes.",
"title": ""
},
{
"docid": "f43b34ca1bbb85851672ff55a60f0785",
"text": "In this paper, we propose an optimized mutual authentication scheme which can keep most password authentication benefits, meanwhile improve the security property by using encryption primitives. Our proposed scheme not only offers webmasters a reasonable secure client authentication, but also offers good user experience. Security analysis demonstrates that the proposed authentication scheme can achieve the security requirements, and also resist the diverse possible attacks.",
"title": ""
},
{
"docid": "a6690e9d1e0682d7bbfdb5f4397c9b4d",
"text": "_______________ Task-based learning is a popular topic in ELT/EFL circles nowadays. It is accepted by its proponents as a flourishing method that may replace Communicative Language Learning. However, it can also be seen as an adventure just because there are almost no experimental studies to tackle questions concerning applicability of Task-based Learning. In this paper we try to find out whether or not task-based writing activities have a positive effect upon reading comprehension in English as a foreign language. An experimental study was conducted in order to scrutinize implications of Task-based Learning. Two groups of 28 students were chosen through random cluster sampling. Both groups were given a pre-test and a post-test. The pre-test and post-test mean scores of the experimental group, which got treatment through task-based writing activities, were compared with those of the control group, which was taught English through traditional methods. The effect of the treatment upon reading comprehension was analyzed through two-way ANOVA. The results provide a theoretical justification for the claims of the proponents of Task-based Learning. Theoretical Background Researchers have been discussing and asserting that the Communicative Language Teaching, a method which has a worldwide use nowadays, has some important drawbacks. Having been based on principles of first language acquisition, it lacks a proper theoretical basis about language learning as a cognitive process of skill acquisition and a clear research about second language acquisition (Klapper, 2003:33-34). It puts much emphasis on ‘communication’, pair work, information-gap activities, and intensive target language use (Pica, 2000; Richards and Rodgers, 1996). However, teachers and practitioners have encountered some problems while applying it. One of the most important problems was the demotivation of students because of intensive target language use. Task-based Learning is a flourishing method which can compensate for the weaknesses of the Communicative Language Teaching mentioned above and which is seen as an alternative to it by researchers (Klapper, 2003:35-36). ‘Task’ is taken as a goal-oriented activity which has a clear purpose and which involves achieving an outcome, creating a final",
"title": ""
},
{
"docid": "cb654fe04058c8c820352136cc7fe1d4",
"text": "We describe the systems of NLP-CIC team that participated in the Complex Word Identification (CWI) 2018 shared task. The shared task aimed to benchmark approaches for identifying complex words in English and other languages from the perspective of non-native speakers. Our goal is to compare two approaches: feature engineering and a deep neural network. Both approaches achieved comparable performance on the English test set. We demonstrated the flexibility of the deeplearning approach by using the same deep neural network setup in the Spanish track. Our systems achieved competitive results: all our systems were within 0.01 of the system with the best macro-F1 score on the test sets except on Wikipedia test set, on which our best system is 0.04 below the best macro-F1 score.",
"title": ""
},
{
"docid": "d54c9a54622a6f5814f00d7193f8dc3b",
"text": "Internet of Things (IoT) software is required not only to dispose of huge volumes of real-time and heterogeneous data, but also to support different complex applications for business purposes. Using an ontology approach, a Configurable Information Service Platform is proposed for the development of IoT-based application. Based on an abstract information model, information encapsulating, composing, discomposing, transferring, tracing, and interacting in Product Lifecycle Management could be carried out. Combining ontology and representational state transfer (REST)-ful service, the platform provides an information support base both for data integration and intelligent interaction. A case study is given to verify the platform. It is shown that the platform provides a promising way to realize IoT application in semantic level.",
"title": ""
},
{
"docid": "466f4ed7a59f9b922a8b87685d8f3a77",
"text": "Ten cases of oral hairy leukoplakia (OHL) in HIV- negative patients are presented. Eight of the 10 patients were on steroid treatment for chronic obstructive pulmonary disease, 1 patient was on prednisone as part of a therapeutic regimen for gastrointestinal stromal tumor, and 1 patient did not have any history of immunosuppression. There were 5 men and 5 women, ages 32-79, with mean age being 61.8 years. Nine out of 10 lesions were located unilaterally on the tongue, whereas 1 lesion was located at the junction of the hard and soft palate. All lesions were described as painless, corrugated, nonremovable white plaques (leukoplakias). Histologic features were consistent with Epstein-Barr virus-associated hyperkeratosis suggestive of OHL, and confirmatory in situ hybridization was performed in all cases. Candida hyphae and spores were present in 8 cases. Pathologists should be aware of OHL presenting not only in HIV-positive and HIV-negative organ transplant recipients but also in patients receiving steroid treatment, and more important, certain histologic features should raise suspicion for such diagnosis without prior knowledge of immunosuppression.",
"title": ""
},
{
"docid": "350d1717a5192873ef9e0ac9ed3efc7b",
"text": "OBJECTIVE\nTo describe the effects of percutaneously implanted valve-in-valve in the tricuspid position for patients with pre-existing transvalvular device leads.\n\n\nMETHODS\nIn this case series, we describe implantation of the Melody valve and SAPIEN XT valve within dysfunctional bioprosthetic tricuspid valves in three patients with transvalvular device leads.\n\n\nRESULTS\nIn all cases, the valve was successfully deployed and device lead function remained unchanged. In 1/3 cases with 6-month follow-up, device lead parameters remain unchanged and transcatheter valve-in-valve function remains satisfactory.\n\n\nCONCLUSIONS\nTranscatheter tricuspid valve-in-valve is feasible in patients with pre-existing transvalvular devices leads. Further study is required to determine the long-term clinical implications of this treatment approach.",
"title": ""
},
{
"docid": "519b0dbeb1193a14a06ba212790f49d4",
"text": "In recent years, sign language recognition has attracted much attention in computer vision . A sign language is a means of conveying the message by using hand, arm, body, and face to convey thoughts and meanings. Like spoken languages, sign languages emerge and evolve naturally within hearing-impaired communities. However, sign languages are not universal. There is no internationally recognized and standardized sign language for all deaf people. As is the case in spoken language, every country has got its own sign language with high degree of grammatical variations. The sign language used in India is commonly known as Indian Sign Language (henceforth called ISL).",
"title": ""
},
{
"docid": "e644b698d2977a2c767fe86a1445e23c",
"text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.",
"title": ""
},
{
"docid": "fba2cce267a075c24a1378fd55de6113",
"text": "This paper presents a novel mixed reality rehabilitation system used to help improve the reaching movements of people who have hemiparesis from stroke. The system provides real-time, multimodal, customizable, and adaptive feedback generated from the movement patterns of the subject's affected arm and torso during reaching to grasp. The feedback is provided via innovative visual and musical forms that present a stimulating, enriched environment in which to train the subjects and promote multimodal sensory-motor integration. A pilot study was conducted to test the system function, adaptation protocol and its feasibility for stroke rehabilitation. Three chronic stroke survivors underwent training using our system for six 75-min sessions over two weeks. After this relatively short time, all three subjects showed significant improvements in the movement parameters that were targeted during training. Improvements included faster and smoother reaches, increased joint coordination and reduced compensatory use of the torso and shoulder. The system was accepted by the subjects and shows promise as a useful tool for physical and occupational therapists to enhance stroke rehabilitation.",
"title": ""
},
{
"docid": "05ab4fa15696ee8b47e017ebbbc83f2c",
"text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.",
"title": ""
},
{
"docid": "e0a314eb1fe221791bc08094d0c04862",
"text": "The present study was undertaken with the objective to explore the influence of the five personality dimensions on the information seeking behaviour of the students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by the students of higher education to collect, utilize and process any kind of information needed for their studies. Data has been collected from 600 university students of the three broad disciplines of studies from the Universities of Eastern part of India (West Bengal). The tools used for the study were General Information schedule (GIS), Information Seeking Behaviour Inventory (ISBI) and NEO-FFI Personality Inventory. Product moment correlation has been worked out between the scores in ISBI and those in NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated to all the dimensions of information seeking behaviour of the university students.",
"title": ""
},
{
"docid": "6d83a242e4e0a0bd0d65c239e0d6777f",
"text": "Traditional clustering algorithms consider all of the dimensions of an input data set equally. However, in the high dimensional data, a common property is that data points are highly clustered in subspaces, which means classes of objects are categorized in subspaces rather than the entire space. Subspace clustering is an extension of traditional clustering that seeks to find clusters in different subspaces categorical data and its corresponding time complexity is analyzed as well. In the proposed algorithm, an additional step is added to the k-modes clustering process to automatically compute the weight of all dimensions in each cluster by using complement entropy. Furthermore, the attribute weight can be used to identify the subsets of important dimensions that categorize different clusters. The effectiveness of the proposed algorithm is demonstrated with real data sets and synthetic data sets. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b9efdf790c52c63a589719ad58b0e647",
"text": "This paper presents a dataset collected from natural dialogs which enables to test the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open domain dialog system, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of an interactive gaining of denotations and questions explanations from users which can be used for the interactive learning.",
"title": ""
},
{
"docid": "2828aa692e439502de5c950df01701ab",
"text": "The Internet of Things (IoT) was of a vision in which all physical objects are tagged and uniquely identified using RFID transponders or readers. Nowadays, research into the IoT has extended this vision to the connectivity of Things to anything, anyone, anywhere and at anytime. The IoT has grown into multiple dimensions, which encompasses various networks of applications, computers, devices, as well as physical and virtual objects, referred to as things or objects, that are interconnected together using communication technologies such as, wireless, wired and mobile networks, RFID, Bluetooth, GPS systems, and other evolving technologies. This paradigm is a major shift from an essentially computer-based network model to a fully distributed network of smart objects. This change poses serious challenges in terms of architecture, connectivity, efficiency, security and provision of services among many others. This paper studies the state-of-the art of the IoT. In addition, some major security and privacy issues are described and a new attack vector is introduced, referred to as the “automated invasion attack”.",
"title": ""
},
{
"docid": "08134d0d76acf866a71d660062f2aeb8",
"text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorful images. The possible reason is that it’s difficult to differentiate the influence of different kind of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results shows that our methods can better control the colorized images and generate state-of-art results.",
"title": ""
},
{
"docid": "196fb4c83bf2a0598869698d56a6e1da",
"text": "Mammals adapted to a great variety of habitats with different accessibility to water. In addition to changes in kidney morphology, e.g. the length of the loops of Henle, several hormone systems are involved in adaptation to limited water supply, among them the renal-neurohypophysial vasopressin/vasopressin receptor system. Comparison of over 80 mammalian V2 vasopressin receptor (V2R) orthologs revealed high structural and functional conservation of this key component involved in renal water reabsorption. Although many mammalian species have unlimited access to water there is no evidence for complete loss of V2R function indicating an essential role of V2R activity for survival even of those species. In contrast, several marsupial V2R orthologs show a significant increase in basal receptor activity. An increased vasopressin-independent V2R activity can be interpreted as a shift in the set point of the renal-neurohypophysial hormone circuit to realize sufficient water reabsorption already at low hormone levels. As found in other desert mammals arid-adapted marsupials show high urine osmolalities. The gain of basal V2R function in several marsupials may contribute to the increased urine concentration abilities and, therefore, provide an advantage to maintain water and electrolyte homeostasis under limited water supply conditions.",
"title": ""
},
{
"docid": "7a77d8d381ec543033626be54119358a",
"text": "The advent of continuous glucose monitoring (CGM) is a significant stride forward in our ability to better understand the glycemic status of our patients. Current clinical practice employs two forms of CGM: professional (retrospective or \"masked\") and personal (real-time) to evaluate and/or monitor glycemic control. Most studies using professional and personal CGM have been done in those with type 1 diabetes (T1D). However, this technology is agnostic to the type of diabetes and can also be used in those with type 2 diabetes (T2D). The value of professional CGM in T2D for physicians, patients, and researchers is derived from its ability to: (1) to discover previously unknown hyper- and hypoglycemia (silent and symptomatic); (2) measure glycemic control directly rather than through the surrogate metric of hemoglobin A1C (HbA1C) permitting the observation of a wide variety of metrics that include glycemic variability, the percent of time within, below and above target glucose levels, the severity of hypo- and hyperglycemia throughout the day and night; (3) provide actionable information for healthcare providers derived by the CGM report; (4) better manage patients on hemodialysis; and (5) effectively and efficiently analyze glycemic effects of new interventions whether they be pharmaceuticals (duration of action, pharmacodynamics, safety, and efficacy), devices, or psycho-educational. Personal CGM has also been successfully used in a small number of studies as a behavior modification tool in those with T2D. This comprehensive review describes the differences between professional and personal CGM and the evidence for the use of each form of CGM in T2D. Finally, the opinions of key professional societies on the use of CGM in T2D are presented.",
"title": ""
},
{
"docid": "52a4a964d408d6e66d6864d573ee800f",
"text": "Toxoplasma gondii causes fatal multisystemic disease in New World primates, with respiratory failure and multifocal necrotic lesions. Although cases and outbreaks of toxoplasmosis have been described, there are few genotyping studies and none has included parasite load quantification. In this article, we describe two cases of lethal acute toxoplasmosis in squirrel monkeys (Saimiri sciureus) of Mexico city. The main pathological findings included pulmonary edema, interstitial pneumonia, hepatitis and necrotizing lymphadenitis, and structures similar to T. gondii tachyzoites observed by histopathology in these organs. Diagnosis was confirmed by immunohistochemistry, transmission electron microscopy and both end point and real time PCR. The load was between <14 and 23 parasites/mg tissue. Digestion of the SAG3 gene amplicon showed similar bands to type I reference strains. These are the first cases of toxoplasmosis in primates studied in Mexico, with clinical features similar to others reported in Israel and French Guiana, although apparently caused by a different T. gondii variant.",
"title": ""
},
{
"docid": "ab7184c576396a1da32c92093d606a53",
"text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.",
"title": ""
}
] |
scidocsrr
|
3ef781c1149c6cdabcd142c710699dc8
|
A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing
|
[
{
"docid": "0cb0d05320a9de415b51c99e4766bbeb",
"text": "We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intrasentential parsing and the other for multisentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the stateof-the-art, often by a wide margin.",
"title": ""
}
] |
[
{
"docid": "0c5b906696fb1f2abe6b21bb2c5808b8",
"text": "Fisher score and Laplacian score are two popular feature selection algorithms, both of which belong to the general graph-based feature selection framework. In this framework, a feature subset is selected based on the corresponding score (subset-level score), which is calculated in a trace ratio form. Since the number of all possible feature subsets is very huge, it is often prohibitively expensive in computational cost to search in a brute force manner for the feature subset with the maximum subset-level score. Instead of calculating the scores of all the feature subsets, traditional methods calculate the score for each feature, and then select the leading features based on the rank of these feature-level scores. However, selecting the feature subset based on the feature-level score cannot guarantee the optimum of the subset-level score. In this paper, we directly optimize the subset-level score, and propose a novel algorithm to efficiently find the global optimal feature subset such that the subset-level score is maximized. Extensive experiments demonstrate the effectiveness of our proposed algorithm in comparison with the traditional methods for feature selection. Introduction Many classification tasks often need to deal with highdimensional data. Data with a large number of features will result in higher computational cost, and the irrelevant and redundant features may also deteriorate the classification performance. Feature selection is one of the most important approaches for dealing with high-dimensional data (Guyon & Elisseeff 2003). According to the strategy of utilizing class label information, feature selection algorithms can be roughly divided into three categories, namely unsupervised feature selection (Dy & Brodley 2004), semisupervised feature selection (Zhao & Liu 2007a), and supervised feature selection (Robnik-Sikonja & Kononenko 2003). These feature selection algorithms can also be categorized into wrappers and filters (Kohavi & John 1997; Das 2001). Wrappers are classifier-specific and the feature subset is selected directly based on the performance of a specific classifier. Filters are classifier-independent and the Copyright c © 2008,Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. feature subset is selected based on a well-defined criterion. Usually, wrappers could obtain better results than filters because wrappers are directly related to the algorithmic performance of a specific classifier. However, wrappers are computationally more expensive compared with filters and lack of good generalization capability over classifiers. Fisher score (Bishop 1995) and Laplacian score (He, Cai, & Niyogi 2005) are two popular filter-type methods for feature selection, and both belong to the general graph-based feature selection framework. In this framework, the feature subset is selected based on the score of the entire feature subset, and the score is calculated in a trace ratio form. The trace ratio form has been successfully used as a general criterion for feature extraction previously (Nie, Xiang, & Zhang 2007; Wanget al. 2007). However, when the trace ratio criterion is applied for feature selection, since the number of possible subsets of features is very huge, it is often prohibitively expensive in computational cost to search in a brute force manner for the feature subset with the maximum subset-level score. 
Therefore, instead of calculating the subset-level score for all the feature subsets, traditional methods calculate the score of each feature (feature-level score), and then select the leading features based on the rank of these feature-level scores. The selected subset of features based on the feature-level score is suboptimal, and cannot guarantee the optimum of the subset-level score. In this paper, we directly optimize the subset-level score, and propose a novel iterative algorithm to efficiently find the globally optimal feature subset such that the subset-level score is maximized. Experimental results on UCI datasets and two face datasets demonstrate the effectiveness of the proposed algorithm in comparison with the traditional methods for feature selection. Feature Selection ⊂ Subspace Learning Suppose the original high-dimensional data x ∈ R^d, that is, the number of features (dimensions) of the data is d. The task of subspace learning is to find the optimal projection matrix W ∈ R^{d×m} (usually m ≪ d) under an appropriate criterion, and then the d-dimensional data x is transformed to the m-dimensional data y by y = W^T x (1), where W is a column-full-rank projection matrix.",
"title": ""
},
{
"docid": "4a51e4b6ffb2b72b60b30d16f361c84f",
"text": "A lot of effort has been put into researching client-side attacks, including vulnerabilities like cross-site scripting, cross-site request forgery, and more recently, clickjacking. Similar to other client-side attacks, a clickjacking vulnerability can use the browser to exploit weaknesses in cross domain isolation and the same origin policy. It does this by tricking the user to click on something that is actually not what the user perceives they are clicking on. In the most extreme cases, this vulnerability can cause an unsuspecting user to have their account compromised with a single click. Although there are protections available for clickjacking, the web applications implementing these mitigations are far and in between. Additionally, although the possibility for an attacker to frame a page is easy to detect, it is much more difficult to demonstrate or assess the impact of a clickjacking vulnerability than more traditional client-side vectors. Tools do not currently exist to reliably demonstrate clickjacking exploitation, and the rare demonstrations that are done typically use custom JavaScript and HTML for each individual vulnerability. Worse, many times this esoteric code is never made public, leaving everyone to rewrite their own from scratch. BeEF, known as the Browser Exploitation Framework, is a tool designed to help professional penetration testers easily demonstrate the impact of client-side security vulnerabilities. In this paper, we present a plugin module for BeEF which provides a way for penetration testers to easily demonstrate the impact of clickjacking vulnerabilities.",
"title": ""
},
{
"docid": "0d774f86bb45f2e3e04814dd84cb4490",
"text": "Crop yield estimation is an important task in apple orchard management. The current manual sampling-based yield estimation is time-consuming, labor-intensive and inaccurate. To deal with this challenge, we develop and deploy a computer vision system for automated, rapid and accurate yield estimation. The system uses a two-camera stereo rig for image acquisition. It works at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous orchard vehicle is used as the support platform for automated data collection. The system scans the both sides of each tree row in orchards. A computer vision algorithm is developed to detect and register apples from acquired sequential images, and then generate apple counts as crop yield estimation. We deployed the yield estimation system in Washington state in September, 2011. The results show that the developed system works well with both red and green apples in the tall-spindle planting system. The errors of crop yield estimation are -3.2% for a red apple block with about 480 trees, and 1.2% for a green apple block with about 670 trees.",
"title": ""
},
{
"docid": "42452d6df7372cdc9c2cdebd8f0475cb",
"text": "This paper presents SgxPectre Attacks that exploit the recently disclosed CPU bugs to subvert the confidentiality and integrity of SGX enclaves. Particularly, we show that when branch prediction of the enclave code can be influenced by programs outside the enclave, the control flow of the enclave program can be temporarily altered to execute instructions that lead to observable cache-state changes. An adversary observing such changes can learn secrets inside the enclave memory or its internal registers, thus completely defeating the confidentiality guarantee offered by SGX. To demonstrate the practicality of our SgxPectre Attacks, we have systematically explored the possible attack vectors of branch target injection, approaches to win the race condition during enclave’s speculative execution, and techniques to automatically search for code patterns required for launching the attacks. Our study suggests that any enclave program could be vulnerable to SgxPectre Attacks since the desired code patterns are available in most SGX runtimes (e.g., Intel SGX SDK, Rust-SGX, and Graphene-SGX). Most importantly, we have applied SgxPectre Attacks to steal seal keys and attestation keys from Intel signed quoting enclaves. The seal key can be used to decrypt sealed storage outside the enclaves and forge valid sealed data; the attestation key can be used to forge attestation signatures. For these reasons, SgxPectre Attacks practically defeat SGX’s security protection. This paper also systematically evaluates Intel’s existing countermeasures against SgxPectre Attacks and discusses the security implications.",
"title": ""
},
{
"docid": "c72c0db4ba332ca8d4125537db2b110b",
"text": "This paper proposes a sinogram consistency learning method to deal with beam-hardening related artifacts in polychromatic computerized tomography (CT). The presence of highly attenuating materials in the scan field causes an inconsistent sinogram, that does not match the range space of the Radon transform. When the mismatched data are entered into the range space during CT reconstruction, streaking and shading artifacts are generated owing to the inherent nature of the inverse Radon transform. The proposed learning method aims to repair inconsistent sinograms by removing the primary metal-induced beam-hardening factors along the metal trace in the sinogram. Taking account of the fundamental difficulty in obtaining sufficient training data in a medical environment, the learning method is designed to use simulated training data. We use a patient-type specific learning model to simplify the learning process. The quality of sinogram repair was established through data inconsistency-evaluation and acceptance checking, which were conducted using a specially designed inconsistencyevaluation function that identifies the degree and structure of mismatch in terms of projection angles. The results show that our method successfully corrects sinogram inconsistency by extracting beam-hardening sources by means of deep learning.",
"title": ""
},
{
"docid": "7fafda966819bb780b8b2b6ada4cc468",
"text": "Acne inversa (AI) is a chronic and recurrent inflammatory skin disease. It occurs in intertriginous areas of the skin and causes pain, drainage, malodor and scar formation. While supposedly caused by an autoimmune reaction, bacterial superinfection is a secondary event in the disease process. A unique case of a 43-year-old male patient suffering from a recurring AI lesion in the left axilla was retrospectively analysed. A swab revealed Actinomyces neuii as the only agent growing in the lesion. The patient was then treated with Amoxicillin/Clavulanic Acid 3 × 1 g until he was cleared for surgical excision. The intraoperative swab was negative for A. neuii. Antibiotics were prescribed for another 4 weeks and the patient has remained relapse free for more than 12 months now. Primary cutaneous Actinomycosis is a rare entity and the combination of AI and Actinomycosis has never been reported before. Failure to detect superinfections of AI lesions with slow-growing pathogens like Actinomyces spp. might contribute to high recurrence rates after immunosuppressive therapy of AI. The present case underlines the potentially multifactorial pathogenesis of the disease and the importance of considering and treating potential infections before initiating immunosuppressive regimens for AI patients.",
"title": ""
},
{
"docid": "d805dc116db48b644b18e409dda3976e",
"text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.",
"title": ""
},
{
"docid": "fe13ddb78243e3bbb03917be0752872e",
"text": "One of the powerful applications of Booiean expression is to allow users to extract relevant information from a database. Unfortunately, previous research has shown that users have difficulty specifying Boolean queries. In an attempt to overcome this limitation, a graphical Filter/Flow representation of Boolean queries was designed to provide users with an interface that visually conveys the meaning of the Booiean operators (AND, OR, and NOT). This was accomplished by impiementing a graphical interface prototype that uses the metaphor of water flowing through filters. Twenty subjects having no experience with Boolean logic participated in an experiment comparing the Booiean operations represented in the Filter/Flow interface with a text-oniy SQL interface. The subjects independently performed five comprehension tasks and five composition tasks in each of the interfaces. A significant difference (p < 0.05) in the total number of correct queries in each of the comprehension and composition tasks was found favoring Filter/Flow.",
"title": ""
},
{
"docid": "3640d49e4782d8384ff831f0ba4de861",
"text": "This paper considers a feedback control technique for cable suspended robots under input constraints, using control Lyapunov functions (CLF). The motivation for this work is to develop an explicit feedback control law for cable robots to asymptotically stabilize it to a goal point with positive input constraints. The main contributions of this paper are as follows: (i) proposal for a CLF candidate for a cable robot, (ii) a CLF based positive controllers for multiple inputs. An example of a three degrees-of-freedom cable suspended robot is presented to illustrate the proposed methods",
"title": ""
},
{
"docid": "34b7073f947888694053cb421544cb37",
"text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"title": ""
},
{
"docid": "9a83cbd55a06d72603fd0297450c4f0f",
"text": "A heuristic algorithm is developed for the prediction of indoor coverage. Measurements on one floor of an office building are performed to investigate propagation characteristics and validations with very limited additional tuning are performed on another floor of the same building and in three other buildings. The prediction method relies on the free-space loss model for every environment, this way intending to reduce the dependency of the model on the environment upon which the model is based, as is the case with many other models. The applicability of the algorithm to a wireless testbed network with fixed WiFi 802.11b/g nodes is discussed based on a site survey. The prediction algorithm can easily be implemented in network planning algorithms, as will be illustrated with a network reduction and a network optimization algorithm. We aim to provide an physically intuitive, yet accurate prediction of the path loss for different building types.",
"title": ""
},
{
"docid": "a026cb81bddfa946159d02b5bb2e341d",
"text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.",
"title": ""
},
{
"docid": "81349ac7f7a4011ccad32e5c2b392533",
"text": "In this literature a new design of printed antipodal UWB vivaldi antenna is proposed. The design is further modified for acquiring notch characteristics in the WLAN band and high front to backlobe ratio (F/B). The modifications are done on the ground plane of the antenna. Previous literatures have shown that the incorporation of planar meta-material structures on the CPW plane along the feed can produce notch characteristics. Here, a novel concept is introduced regarding antipodal vivaldi antenna. In the ground plane of the antenna, square ring resonator (SRR) structure slot and circular ring resonator (CRR) structure slot are cut to produce the notch characteristic on the WLAN band. The designed antenna covers a bandwidth of 6.8 GHz (2.7 GHz–9.5 GHz) and it can be useful for a large range of wireless applications like satellite communication applications and biomedical applications where directional radiation characteristic is needed. The designed antenna shows better impedance matching in the above said band. A parametric study is also performed on the antenna design to optimize the performance of the antenna. The size of the antenna is 40×44×1.57 mm3. It is designed and simulated using HFSS. The presented prototype offers well directive radiation characteristics, good gain and efficiency.",
"title": ""
},
{
"docid": "c4595a97ae252e2191e52af3466c7aa4",
"text": "The openness and extensibility of Android have made it a popular platform for mobile devices and a strong candidate to drive the Internet-of-Things. Unfortunately, these properties also leave Android vulnerable, attracting attacks for profit or fun. To mitigate these threats, numerous issue-specific solutions have been proposed. With the increasing number and complexity of security problems and solutions, we believe this is the right moment to step back and systematically re-evaluate the Android security architecture and security practices in the ecosystem. We organize the most recent security research on the Android platform into two categories: the software stack and the ecosystem. For each category, we provide a comprehensive narrative of the problem space, highlight the limitations of the proposed solutions, and identify open problems for future research. Based on our collection of knowledge, we envision a blueprint for engineering a secure, next-generation Android ecosystem.",
"title": ""
},
{
"docid": "f0a82f428ac508351ffa7b97bb909b60",
"text": "Automated Teller Machines (ATMs) can be considered among one of the most important service facilities in the banking industry. The investment in ATMs and the impact on the banking industry is growing steadily in every part of the world. The banks take into consideration many factors like safety, convenience, visibility, and cost in order to determine the optimum locations of ATMs. Today, ATMs are not only available in bank branches but also at retail locations. Another important factor is the cash management in ATMs. A cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data which is highly related to the ATMs location. So, the location and the cash management problem should be considered together. This paper provides a general review on studies, efforts and development in ATMs location and cash management problem. Keywords—ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs.",
"title": ""
},
{
"docid": "08c0561471f8334e9b2a3aa70d12a9a4",
"text": "Increasing interest in JSON data has created a need for its efficient processing. Although JSON is a simple data exchange format, its querying is not always effective, especially in the case of large repositories of data. This work aims to integrate the JSONiq extension to the XQuery language specification into an existing query processor (Apache VXQuery) to enable it to query JSON data in parallel. VXQuery is built on top of Hyracks (a framework that generates parallel jobs) and Algebricks (a language-agnostic query algebra toolbox) and can process data on the fly, in contrast to other well-known systems which need to load data first. Thus, the extra cost of data loading is eliminated. In this paper, we implement three categories of rewrite rules which exploit the features of the above platforms to efficiently handle path expressions along with introducing intra-query parallelism. We evaluate our implementation using a large (803GB) dataset of sensor readings. Our results show that the proposed rewrite rules lead to efficient and scalable parallel processing of JSON data.",
"title": ""
},
{
"docid": "66255dc6c741737b3576e7ddefec96ce",
"text": "Neural Machine Translation (NMT) with source side attention have achieved remarkable performance. however, there has been little work exploring to attend to the target side which can potentially enhance the memory capbility of NMT. We reformulate a Decoding-History Enhanced Attention mechanism (DHEA) to render NMT model better at selecting both source side and target side information. DHEA enables a dynamic control on the ratios at which source and target contexts contribute to the generation of target words, offering a way to weakly induce structure relations among both source and target tokens. It also allows training errors to be directly back-propagated through short-cut connections and effectively alleviates the gradient vanishing problem. The empirical study on Chinese-English translation shows that our model with proper configuration can improve by 0.9 BLEU upon Transformer and achieve the best reported results in the same dataset. On WMT14 English-German task and a larger WMT14 English-French task, our model achieves comparable results with the state-of-the-art NMT systems.",
"title": ""
},
{
"docid": "0900c863ca8eb73200aa5ee7b777b598",
"text": "Robust query optimization becomes illusory in the presence of correlated predicates or user-defined functions. Occasionally, the query optimizer will choose join orders whose execution time is by many orders of magnitude higher than necessary. We present SkinnerDB, a novel database management system that is designed from the ground up for reliable optimization and robust performance. SkinnerDB implements several adaptive query processing strategies based on reinforcement learning. We divide the execution of a query into small time periods in which different join orders are executed. Thereby, we converge to optimal join orders with regret bounds, meaning that the expected difference between actual execution time and time for an optimal join order is bounded. To the best of our knowledge, our execution strategies are the first to provide comparable formal guarantees. SkinnerDB can be used as a layer on top of any existing database management system. We use optimizer hints to force existing systems to try out different join orders, carefully restricting execution time per join order and data batch via timeouts. We choose timeouts according to an iterative scheme that balances execution time over different timeouts to guarantee bounded regret. Alternatively, SkinnerDB can be used as a standalone, featuring an execution engine that is tailored to the requirements of join order learning. In particular, we use a specialized multi-way join algorithm and a concise tuple representation to facilitate fast switches between join orders. In our demonstration, we let participants experiment with different query types and databases. We visualize the learning process and compare against baselines. PVLDB Reference Format: Immanuel Trummer, Samuel Moseley, Deepak Maram, Saehan Jo, Joseph Antonakakis. SkinnerDB: Regret-Bounded Query Evaluation via Reinforcement Learning. PVLDB, 11 (12): 2074 2077, 2018. DOI: https://doi.org/10.14778/3229863.3236263 This work is licensed under the Creative Commons Attribution-NonCommercialNoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing info@vldb.org. Proceedings of the VLDB Endowment, Vol. 11, No. 12 Copyright 2018 VLDB Endowment 2150-8097/18/8. DOI: https://doi.org/10.14778/3229863.3236263",
"title": ""
},
{
"docid": "8c60d78e9c4db8a457c7555393089f7c",
"text": "Artificially structured metamaterials have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities, including the cloak of invisibility based on coordinate transformation. Unlike other cloaking approaches4–6, which are typically limited to subwavelength objects, the transformation method allows the design of cloaking devices that render a macroscopic object invisible. In addition, the design is not sensitive to the object that is being cloaked. The first experimental demonstration of such a cloak at microwave frequencies was recently reported7. We note, however, that that design cannot be implemented for an optical cloak, which is certainly of particular interest because optical frequencies are where the word ‘invisibility’ is conventionally defined. Here we present the design of a non-magnetic cloak operating at optical frequencies. The principle and structure of the proposed cylindrical cloak are analysed, and the general recipe for the implementation of such a device is provided. The coordinate transformation used in the proposed nonmagnetic optical cloak of cylindrical geometry is similar to that in ref. 7, by which a cylindrical region r , b is compressed into a concentric cylindrical shell a , r , b as shown in Fig. 1a. This transformation results in the following requirements for anisotropic permittivity and permeability in the cloaking shell:",
"title": ""
},
{
"docid": "b992e02ee3366d048bbb4c30a2bf822c",
"text": "Structured graphics models such as Scalable Vector Graphics (SVG) enable designers to create visually rich graphics for user interfaces. Unfortunately current programming tools make it difficult to implement advanced interaction techniques for these interfaces. This paper presents the Hierarchical State Machine Toolkit (HsmTk), a toolkit targeting the development of rich interactions. The key aspect of the toolkit is to consider interactions as first-class objects and to specify them with hierarchical state machines. This approach makes the resulting behaviors self-contained, easy to reuse and easy to modify. Interactions can be attached to graphical elements without knowing their detailed structure, supporting the parallel refinement of the graphics and the interaction.",
"title": ""
}
] |
scidocsrr
|
971cb2274274ce20c05860d504ea2c05
|
A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition
|
[
{
"docid": "c0890c01e51ddedf881cd3d110efa6e2",
"text": "A residual networks family with hundreds or even thousands of layers dominates major image recognition tasks, but building a network by simply stacking residual blocks inevitably limits its optimization ability. This paper proposes a novel residual network architecture, residual networks of residual networks (RoR), to dig the optimization ability of residual networks. RoR substitutes optimizing residual mapping of residual mapping for optimizing original residual mapping. In particular, RoR adds levelwise shortcut connections upon original residual networks to promote the learning capability of residual networks. More importantly, RoR can be applied to various kinds of residual networks (ResNets, Pre-ResNets, and WRN) and significantly boost their performance. Our experiments demonstrate the effectiveness and versatility of RoR, where it achieves the best performance in all residual-network-like structures. Our RoR-3-WRN58-4 + SD models achieve new state-of-the-art results on CIFAR-10, CIFAR-100, and SVHN, with the test errors of 3.77%, 19.73%, and 1.59%, respectively. RoR-3 models also achieve state-of-the-art results compared with ResNets on the ImageNet data set.",
"title": ""
}
] |
[
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "6f4bbe759d858bc2c0e9ab0df899d785",
"text": "Computer aided diagnosis of breast cancers often relies on automatic image analysis of histopathology images. The automatic region segmentation in breast cancer is challenging due to: i) large regional variations, and ii) high computational costs of pixel-wise segmentation. Deep convolutional neural network (CNN) is proven to be an effective method for image recognition and classification. However, it is often computationally expensive. In this paper, we propose to apply a fast scanning deep convolutional neural network (fCNN) to pixel-wise region segmentation. The fCNN removes the redundant computations in the original CNN without sacrificing its performance. In our experiment it takes only 2.3 seconds to segment an image with size 1000 × 1000. The comparison experiments show that the proposed system outperforms both the LBP feature-based and texton-based pixel-wise methods.",
"title": ""
},
{
"docid": "de0d2808f949723f1c0ee8e87052f889",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "999331062a055e820ad7db50e6c0f312",
"text": "OBJECTIVE: To develop a valid, reliable instrument to measure the functional health literacy of patients. DESIGN: The Test of Functional Health Literacy in Adults (TOFHLA) was developed using actual hospital materials. The TOFHLA consists of a 50-item reading comprehension and 17-item numerical ability test, taking up to 22 minutes to administer. The TOFHLA, the Wide Range Achievement Test-Revised (WRAT-R), and the Rapid Estimate of Adult Literacy in Medicine (REALM) were administered for comparison. A Spanish version was also developed (TOFHLA-S). SETTING: Outpatient settings in two public teaching hospitals. PATIENTS: 256 English- and 249 Spanish-speaking patients were approached. 78% of the English- and 82% of the Spanish-speaking patients gave informed consent, completed a demographic survey, and took the TOFHLA or TOFHLA-S. RESULTS: The TOFHLA showed good correlation with the WRAT-R and the REALM (correlation coefficients 0.74 and 0.84, respectively). Only 52% of the English speakers completed more than 80% of the questions correctly. 15% of the patients could not read and interpret a prescription bottle with instructions to take one pill by mouth four times daily, 37% did not understand instructions to take a medication on an empty stomach, and 48% could not determine whether they were eligible for free care. CONCLUSIONS: The TOFHLA is a valid, reliable indicator of patient ability to read health-related materials. Data suggest that a high proportion of patients cannot perform basic reading tasks. Additional work is needed to determine the prevalence of functional health illiteracy and its effect on the health care experience.",
"title": ""
},
{
"docid": "70f370cd540a1386e7ce824f7a632746",
"text": "As deep learning models are applied to increasingly diverse and complex problems, a key bottleneck is gathering enough highquality training labels tailored to each task. Users therefore turn to weak supervision, relying on imperfect sources of labels like user-defined heuristics and pattern matching. Unfortunately, with weak supervision, users have to design different labeling sources for each task. This process can be both time consuming and expensive: domain experts often perform repetitive steps like guessing optimal numerical thresholds and designing informative text patterns. To address these challenges, we present Reef, a system to automatically generate heuristics using a small labeled dataset to assign training labels to a large, unlabeled dataset in the weak supervision setting. Reef generates heuristics that each labels only the subset of the data it is accurate for, and iteratively repeats this process until the heuristics together label a large portion of the unlabeled data. We also develop a statistical measure that guarantees the iterative process will automatically terminate before it degrades training label quality. Compared to the best known user-defined heuristics developed over several days, Reef automatically generates heuristics in under five minutes and performs up to 9.74 F1 points better. In collaborations with users at several large corporations, research labs, Stanford Hospital and Clinics, and on open source text and image datasets, Reef outperforms other automated approaches like semi-supervised learning by up to 14.35 F1 points.",
"title": ""
},
{
"docid": "68f8d261308714abd7e2655edd66d18a",
"text": "In this paper, we present a solution to Moments in Time (MIT) [1] Challenge. Current methods for trimmed video recognition often utilize inflated 3D (I3D) [2] to capture spatial-temporal features. First, we explore off-the-shelf structures like non-local [3], I3D, TRN [4] and their variants. After a plenty of experiments, we find that for MIT, a strong 2D convolution backbone following temporal relation network performs better than I3D network. We then add attention module based on TRN to learn a weight for each relation so that the model can capture the important moment better. We also design uniform sampling over videos and relation restriction policy to further enhance testing performance.",
"title": ""
},
{
"docid": "f4d1b3be8a81c50bd8d6444d1d2fc65f",
"text": "The ability to exchange opinions and experiences online is known as online word of mouth (WOM) and has been shown in the literature to have the potential to impact e-commerce sales. The purpose of this paper is to expand previous findings by empirically evaluating the impact of online WOM attributes and other related factors (e.g. product views, promotion, and category) on e-commerce sales using real data from a multi-product retail e-commerce firm. Research has previously shown that the introduction of online WOM on a retail e-commerce site can positively impact product sales. We propose and validate a conceptual model of online WOM and its impact on product sales and the impact of moderator variables such as promotion, product category and product views. It is our conclusion that previous research on online WOM has been limited as our research empirically demonstrates the conclusion that it is the interaction of product category, volume and product views, and the interaction of product views and product category which are statistically significant in explaining changes in unit product sales. Pure increase in volume or number of reviewer comments has no significant effect on sales. These conclusions have critical implications for the practical use of online WOM in e-commerce and for internet marketing.",
"title": ""
},
{
"docid": "bb128a330bfb654dab0c06269b91d68a",
"text": "Most Chinese texts are inputted with keyboard via two input methods: Pinyin and Wubi, especially by Pinyin input method. In this paper, this users' habitation is used to find the spelling errors automatically. We first train a Chinese character form n-gram language model on a large scale Chinese corpus in the traditional way. In order to improve this character based model, we transform the whole corpus into Pinyin to obtain Pinyin based language model. Fatherly, the tone is considered to get the third model. Integrating these three models, we improve the performance of checking spelling error system. Experimental results demonstrate the effeteness of our model.",
"title": ""
},
{
"docid": "8d092dfa88ba239cf66e5be35fcbfbcc",
"text": "We present VideoWhisper, a novel approach for unsupervised video representation learning. Based on the observation that the frame sequence encodes the temporal dynamics of a video (e.g., object movement and event evolution), we treat the frame sequential order as a self-supervision to learn video representations. Unlike other unsupervised video feature learning methods based on frame-level feature reconstruction that is sensitive to visual variance, VideoWhisper is driven by a novel video “sequence-to-whisper” learning strategy. Specifically, for each video sequence, we use a prelearned visual dictionary to generate a sequence of high-level semantics, dubbed “whisper,” which can be considered as the language describing the video dynamics. In this way, we model VideoWhisper as an end-to-end sequence-to-sequence learning model using attention-based recurrent neural networks. This model is trained to predict the whisper sequence and hence it is able to learn the temporal structure of videos. We propose two ways to generate video representation from the model. Through extensive experiments on two real-world video datasets, we demonstrate that video representation learned by V ideoWhisper is effective to boost fundamental multimedia applications such as video retrieval and event classification.",
"title": ""
},
{
"docid": "38015405cee6dd933bcc4fb8897aecf5",
"text": "Computers are notoriously insecure, in part because application security policies do not map well onto traditional protection mechanisms such as Unix user accounts or hardware page tables. Recent work has shown that application policies can be expressed in terms of information flow restrictions and enforced in an OS kernel, providing a strong assurance of security. This paper shows that enforcement of these policies can be pushed largely into the processor itself, by using tagged memory support, which can provide stronger security guarantees by enforcing application security even if the OS kernel is compromised. We present the Loki tagged memory architecture, along with a novel operating system structure that takes advantage of tagged memory to enforce application security policies in hardware. We built a full-system prototype of Loki by modifying a synthesizable SPARC core, mapping it to an FPGA board, and porting HiStar, a Unix-like operating system, to run on it. One result is that Loki allows HiStar, an OS already designed to have a small trusted kernel, to further reduce the amount of trusted code by a factor of two, and to enforce security despite kernel compromises. Using various workloads, we also demonstrate that HiStar running on Loki incurs a low performance overhead.",
"title": ""
},
{
"docid": "5f70d96454e4a6b8d2ce63bc73c0765f",
"text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.",
"title": ""
},
{
"docid": "ddb46db8f8316ffd234006fa19ad628a",
"text": "Lexical resource alignment has been an active field of research over the last decade. However, prior methods for aligning lexical resources have been either specific to a particular pair of resources, or heavily dependent on the availability of hand-crafted alignment data for the pair of resources to be aligned. Here we present a unified approach that can be applied to an arbitrary pair of lexical resources, including machine-readable dictionaries with no network structure. Our approach leverages a similarity measure that enables the structural comparison of senses across lexical resources, achieving state-of-the-art performance on the task of aligning WordNet to three different collaborative resources: Wikipedia, Wiktionary and OmegaWiki.",
"title": ""
},
{
"docid": "09ea60a655f4c172a4e3a9851e7faeeb",
"text": "Software project management leads to success and failure of software project. Software project management include planning, managing and controlling different knowledge areas such as scope, time, cost, quality, risk, human resource, stakeholders, and procurement management. The key issue of software project management is to manage scope, time and cost for a project. Requirement of user changes throughout life of project, and those effect time and cost of project and other knowledge areas as well. Agile methodology is framework for software development with reduced risk. Agile is iterative software development methodology that focuses on frequent and faster delivery, and entertain customer changes. There is positive impact on development cost, time and productivity by switching from traditional waterfall model to agile model. This paper examines that how agile methodology affect different aspect of software project management. Our literature review proposes that agile methodology helps in software project management that leads to the success of software.",
"title": ""
},
{
"docid": "16e03a9071e84f20236aa84dca70a56c",
"text": "In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.",
"title": ""
},
{
"docid": "09419012657ed9d734d1cbcd878461f5",
"text": "This study was aimed to explore the effect of W-plasty combined Botox-A injection in improving appearance of scar.According to the inclusive and exclusive criteria, patients received W-plasty combined Botox-A injection (study group) or traditional (control group) scar repairment were enrolled in this study. After surgery, a follow-up ranged from 1 to 2 years was conducted. The effectiveness of surgery was assessed by visual analogue scale (VAS).A total of 38 patients were enrolled in this study, including 21 cases in the study group and 17 cases in the control group. There were no significant difference were identified in age (t = 0.339, P = .736), gender ratio (χ = 0.003, P = .955) and scar forming reason (χ = 0.391, P = .822) between 2 groups. After treatment, the VAS score in the study group was significantly higher than that in the control group (P < .001).W-plasty combined Botox-A injection can significantly improve the appearance of sunk scar on the face.",
"title": ""
},
{
"docid": "799904b20f1174f01c0d2dd87c57e097",
"text": "ix",
"title": ""
},
{
"docid": "f942b12efef1e6497f3906fc7c57011b",
"text": "The increasing proliferation of digital technologies is transforming economies in many ways. This is particularly true in consumer-facing industries where the emergence of digital services is enabling novel value propositions, closer consumer relationships and greater automation of consumer-facing business processes.2 These digital services are providing value-creating consumer interactions. For instance, an Italian auto insurer uses a telematics device installed in customers’ vehicles to capture driving behavior and uses this data to create novel value propositions via personalized insurance services.3 The Finnish airline Finnair harnesses Facebook as a platform to create a customer community and a sense of collective identity with the company.4 The City of Boston introduced an iPhone app that senses potholes on city roads and allows citizens to contribute to road management in a highly automated fashion.5",
"title": ""
},
{
"docid": "59d39dd0a5535be81c695a7fbd4005c1",
"text": "Over the last decade, accumulating evidence has suggested a causative link between mitochondrial dysfunction and major phenotypes associated with aging. Somatic mitochondrial DNA (mtDNA) mutations and respiratory chain dysfunction accompany normal aging, but the first direct experimental evidence that increased mtDNA mutation levels contribute to progeroid phenotypes came from the mtDNA mutator mouse. Recent evidence suggests that increases in aging-associated mtDNA mutations are not caused by damage accumulation, but rather are due to clonal expansion of mtDNA replication errors that occur during development. Here we discuss the caveats of the traditional mitochondrial free radical theory of aging and highlight other possible mechanisms, including insulin/IGF-1 signaling (IIS) and the target of rapamycin pathways, that underlie the central role of mitochondria in the aging process.",
"title": ""
},
{
"docid": "7895810c92a80b6d5fd8b902241d66c9",
"text": "This paper discusses a high-voltage pulse generator for producing corona plasma. The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.",
"title": ""
},
{
"docid": "ebb40b1e228c9f95ce2ea9229a16853c",
"text": "Continuum manipulators attract a lot of interests due to their advantageous properties, such as distal dexterity, design compactness, intrinsic compliance for safe interaction with unstructured environments. However, these manipulators sometimes suffer from the lack of enough stiffness while applied in surgical robotic systems. This paper presents an experimental kinestatic comparison between three continuum manipulators, aiming at revealing how structural variations could alter the manipulators' stiffness properties. These variations not only include modifying the arrangements of elastic components, but also include integrating a passive rigid kinematic chain to form a hybrid continuum-rigid manipulator. Results of this paper could contribute to the development of design guidelines for realizing desired stiffness properties of a continuum or hybrid manipulator.",
"title": ""
}
] |
scidocsrr
|
a7bb3cbbe54b7a294c3d1d2a2338f370
|
Graph Matching : Theoretical Foundations , Algorithms , and Applications
|
[
{
"docid": "f4da31cf831dd3db5f3063c5ea1fca62",
"text": "SUMMARY Backtrack algorithms are applicable to a wide variety of problems. An efficient but readable version of such an algorithm is presented and its use in the problem of finding the maximal common subgraph of two graphs is described. Techniques available in this application area for ordering and pruning the backtrack search are discussed. This algorithm has been used successfully as a component of a program for analysing chemical reactions and enumerating the bond changes which have taken place.",
"title": ""
},
{
"docid": "44f41d363390f6f079f2e67067ffa36d",
"text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢",
"title": ""
}
] |
[
{
"docid": "c8edb6b8ed8176368faf591161718b95",
"text": "A new 4-group model of attachment styles in adulthood is proposed. Four prototypic attachment patterns are defined using combinations of a person's self-image (positive or negative) and image of others (positive or negative). In Study 1, an interview was developed to yield continuous and categorical ratings of the 4 attachment styles. Intercorrelations of the attachment ratings were consistent with the proposed model. Attachment ratings were validated by self-report measures of self-concept and interpersonal functioning. Each style was associated with a distinct profile of interpersonal problems, according to both self- and friend-reports. In Study 2, attachment styles within the family of origin and with peers were assessed independently. Results of Study 1 were replicated. The proposed model was shown to be applicable to representations of family relations; Ss' attachment styles with peers were correlated with family attachment ratings.",
"title": ""
},
{
"docid": "5c8eeecbd286273e319c860626b2ecf2",
"text": "Online user-generated content in various social media websites, such as consumer experiences, user feedback, and product reviews, has increasingly become the primary information source for both consumers and businesses. In this study, we aim to look beyond the quantitative summary and unidimensional interpretation of online user reviews to provide a more comprehensive view of online user-generated content. Moreover, we would like to extend the current literature to the more customer-driven service industries, particularly the hotel industry. We obtain a unique and extensive dataset of online user reviews for hotels across various review sites and over long time periods. We use the sentiment analysis technique to decompose user reviews into different dimensions to measure hotel service quality and performance based on the SERVPERF model. Those dimensions are then incorporated into econometrics models to examine their effect in shaping users’ overall evaluation and content-generating behavior. The results suggest that different dimensions of user reviews have significantly different effects in forming user evaluation and driving content generation. This paper demonstrates the importance of using textual data to measure consumers’ relative preferences for service quality and evaluate service performance.",
"title": ""
},
{
"docid": "c7b58a4ebb65607d1545d3bc506c2fed",
"text": "The goal of this study was to examine the relationship of self-efficacy, social support, and coping strategies with stress levels of university students. Seventy-five Education students completed four questionnaires assessing these variables. Significant correlations were found for stress with total number of coping strategies and the use of avoidance-focused coping strategies. As well, there was a significant correlation between social support from friends and emotion-focused coping strategies. Gender differences were found, with women reporting more social support from friends than men. Implications of these results for counselling university students are discussed.",
"title": ""
},
{
"docid": "e18b565bddfc86c0ab3ef5ad190bdf06",
"text": "Human activities observed from visual sensors often give rise to a sequence of smoothly varying features. In many cases, the space of features can be formally defined as a manifold, where the action becomes a trajectory on the manifold. Such trajectories are high dimensional in addition to being non-linear, which can severely limit computations on them. We also argue that by their nature, human actions themselves lie on a much lower dimensional manifold compared to the high dimensional feature space. Learning an accurate low dimensional embedding for actions could have a huge impact in the areas of efficient search and retrieval, visualization, learning, and recognition. Traditional manifold learning addresses this problem for static points in ℝn, but its extension to trajectories on Riemannian manifolds is non-trivial and has remained unexplored. The challenge arises due to the inherent non-linearity, and temporal variability that can significantly distort the distance metric between trajectories. To address these issues we use the transport square-root velocity function (TSRVF) space, a recently proposed representation that provides a metric which has favorable theoretical properties such as invariance to group action. We propose to learn the low dimensional embedding with a manifold functional variant of principal component analysis (mfPCA). We show that mf-PCA effectively models the manifold trajectories in several applications such as action recognition, clustering and diverse sequence sampling while reducing the dimensionality by a factor of ~ 250×. The mfPCA features can also be reconstructed back to the original manifold to allow for easy visualization of the latent variable space.",
"title": ""
},
{
"docid": "17c49edf5842fb918a3bd4310d910988",
"text": "In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.",
"title": ""
},
{
"docid": "04a85672df9da82f7e5da5b8b25c9481",
"text": "This study investigated long-term effects of training on postural control using the model of deficits in activation of transversus abdominis (TrA) in people with recurrent low back pain (LBP). Nine volunteers with LBP attended four sessions for assessment and/or training (initial, two weeks, four weeks and six months). Training of repeated isolated voluntary TrA contractions were performed at the initial and two-week session with feedback from real-time ultrasound imaging. Home program involved training twice daily for four weeks. Electromyographic activity (EMG) of trunk and deltoid muscles was recorded with surface and fine-wire electrodes. Rapid arm movement and walking were performed at each session, and immediately after training on the first two sessions. Onset of trunk muscle activation relative to prime mover deltoid during arm movements, and the coefficient of variation (CV) of EMG during averaged gait cycle were calculated. Over four weeks of training, onset of TrA EMG was earlier during arm movements and CV of TrA EMG was reduced (consistent with more sustained EMG activity). Changes were retained at six months follow-up (p<0.05). These results show persistence of motor control changes following training and demonstrate that this training approach leads to motor learning of automatic postural control strategies.",
"title": ""
},
{
"docid": "bc273a7f7f4801400809a9f860830edb",
"text": "Concolic testing has been very successful in automatically generating test inputs for programs. However one of its major limitations is path-explosion that limits the generation of high coverage inputs. Since its inception several ideas have been proposed to attack this problem from various angles: defining search heuristics that increase coverage, caching of function summaries, pruning of paths using static/dynamic information etc. \n We propose a new and complementary method based on interpolation, that greatly mitigates path-explosion by subsuming paths that can be guaranteed to not hit a bug. We discuss new challenges in using interpolation that arise specifically in the context of concolic testing. We experimentally evaluate our method with different search heuristics using Crest, a publicly available concolic tester.",
"title": ""
},
{
"docid": "b88a79221efb5afc717cb2f97761271d",
"text": "BACKGROUND\nLymphangitic streaking, characterized by linear erythema on the skin, is most commonly observed in the setting of bacterial infection. However, a number of nonbacterial causes can result in lymphangitic streaking. We sought to elucidate the nonbacterial causes of lymphangitic streaking that may mimic bacterial infection to broaden clinicians' differential diagnosis for patients presenting with lymphangitic streaking.\n\n\nMETHODS\nWe performed a review of the literature, including all available reports pertaining to nonbacterial causes of lymphangitic streaking.\n\n\nRESULTS\nVarious nonbacterial causes can result in lymphangitic streaking, including viral and fungal infections, insect or spider bites, and iatrogenic etiologies.\n\n\nCONCLUSION\nAwareness of potential nonbacterial causes of superficial lymphangitis is important to avoid misdiagnosis and delay the administration of appropriate care.",
"title": ""
},
{
"docid": "d77ec9805763e9afd9a229f534338fde",
"text": "The purpose of the study was to investigate the effects of teachers’ demographic variables on implementation of Information Communication Technology in public secondary schools in Nyeri Central district, Kenya. The dependent variable was implementation of ICT and the independent variables were teachers’ teaching experience and training. The research design used was descriptive survey design. The target population was 275 teachers working in 15 public secondary schools in Nyeri Central district. The sampling design was stratified random sampling and sample size was 82 teachers. The study targeted 15 principals of the schools in Nyeri Central district. The data collection tools were questionnaires, interview schedule and observation schedule. Data were analyzed quantitatively and qualitatively. Teachers’ training in ICT and teaching experience are not consistent in affecting ICT implementation. Many schools especially in rural areas had not embraced ICT mainly because teachers lacked adequate training, had lower levels of education, and had negative attitude towards ICT implementation. This has led to schools facing major challenges in ICT implementation. The researcher recommends that Public secondary schools should find a way to purchase more ICT facilities and support teachers’ training on the use of ICT. The government needs to give more financial support through free education programme and donations to enhance ICT implementation in public secondary schools. The teachers should change their attitude towards the use and implementation of ICT in the schools so as to create information technology culture in all aspects of teaching and learning. Wachiuri Reuben Nguyo",
"title": ""
},
{
"docid": "ebd62f49345f44b9f673b9ceccf9df46",
"text": "MOTIVATION\nWell-annotated gene sets representing the universe of the biological processes are critical for meaningful and insightful interpretation of large-scale genomic data. The Molecular Signatures Database (MSigDB) is one of the most widely used repositories of such sets.\n\n\nRESULTS\nWe report the availability of a new version of the database, MSigDB 3.0, with over 6700 gene sets, a complete revision of the collection of canonical pathways and experimental signatures from publications, enhanced annotations and upgrades to the web site.\n\n\nAVAILABILITY AND IMPLEMENTATION\nMSigDB is freely available for non-commercial use at http://www.broadinstitute.org/msigdb.",
"title": ""
},
{
"docid": "36b609f1c748154f0f6193c6578acec9",
"text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1796abfceaa17dad2e0d4150a8c8a8f3",
"text": "A novel eight-band LTE/WWAN frequency reconfigurable antenna for tablet computer applications is proposed in this communication. With a small dimension of 40 × 12 × 4 mm3, the proposed antenna comprises a loop feeding strip and a shorting strip in which a single-pole four-throw RF switch is embedded. The RF switch is used to change the resonant modes of lower band among four different working states, so that the antenna can provide a multiband operation of LTE700/GSM850 /900/1800/1900/UMTS2100/LTE2300/2500 with return loss better than 6 dB. Reasonably good radiating efficiency and antenna gain are also achieved for the practical tablet computer.",
"title": ""
},
{
"docid": "e765e634de8b42da8e7b1e43dcc0b8ba",
"text": "Recently, natural language processing applications have become very popular in the industry. Examples of such applications include “semantic” enterprise search engines, document categorizers, speech recognizers and – last but not least – conversational agents, also known as virtual assistants or “chatbots”. The latter in particular are very sought-after in the customer care domain, where the aim is to complement the live agent experience with an artificial intelligence able to help users fulfil a task. In this paper, we discuss the challenges and limitations of industrial chatbot applications, with a particular focus on the “human-in-the-loop” aspect, whereby a cooperation between human and machine takes place in mutual interest. Furthermore, we analyse how the same aspect intervenes in other industrial natural language processing applications.",
"title": ""
},
{
"docid": "4319c8fcb890ba964a701182dc2b3a39",
"text": "Paper provides in depth review of software and project estimation techniques existing in industry and literature, its strengths and weaknesses. Usage, popularity and applicability of such techniques are elaborated. In order to improve estimation accuracy, such knowledge is essential. Many estimation techniques, models, methodologies exists and applicable in different categories of projects. None of them gives 100% accuracy but proper use of them makes estimation process smoother and easier. Organizations should automate estimation procedures, customize available tools and calibrate estimation approaches as per their requirements. Proposed future work is to study factors involved in Software Engineering Approaches (Software Estimation in focus) for Offshore and Outsourced Software Development taking Pakistani IT Industry as a Case Study",
"title": ""
},
{
"docid": "c8478b6104aa725b6c3eb16270e9bd99",
"text": "Industry is the part of an economy that produces material goods which are highly mechanized and automatized. Ever since the beginning of industrialization, technological leaps have led to paradigm shifts which today are ex-post named “industrial revolutions”: in the field of mechanization (the so-called 1st industrial revolution), of the intensive use of electrical energy (the so-called 2nd industrial revolution), and of the widespread digitalization (the so-called 3rd industrial revolution). On the basis of an advanced digitalization within factories, the combination of Internet technologies and future-oriented technologies in the field of “smart” objects (machines and products) seems to result in a new fundamental paradigm shift in industrial production. The vision of future production contains modular and efficient manufacturing systems and characterizes scenarios in which products control their own manufacturing process. This is supposed to realize the manufacturing of individual products in a batch size of one while maintaining the economic conditions of mass production. Tempted by this future expectation, the term “Industry 4.0” was established exante for a planned “4th industrial revolution”, the term being a reminiscence of software versioning. Decisive for the fast spread was the recommendation for implementation to the German Government, which carried the term in its title and was picked up willingly by the Federal Ministry of Education and Research and has become an eponym for a future project in the context of the high-tech strategy 2020. Currently an industrial platform consisting of three well-known industry associations named “Industry 4.0” is contributing to the dispersion of the term. Outside of the German-speaking area, the term is not common. In this paper the term “Industry 4.0” describes a future project that can be defined by two development directions. On the one hand there is a huge applicationpull, which induces a remarkable need for changes due to changing operative framework conditions. Triggers for this are general social, economic, and political changes. Those are in particular: Short development periods: Development periods and innovation periods need to be shortened. High innovation capability is becoming an essential success factor for many enterprises (“time to market”). Individualization on demand: A change from a seller’s into a buyer’s market has been becoming apparent for decades now, which means buyers can define the conditions of the trade. This trend leads to an increasing individualization of products and in extreme cases to individual products. This is also called “batch size one”. Flexibility: Due to the new framework requirements, higher flexibility in product development, especially in production, is necessary. Decentralization: To cope with the specified conditions, faster decisionmaking procedures are necessary. For this, organizational hierarchies need to be reduced. Resource efficiency: Increasing shortage and the related increase of prices for resources as well as social change in the context of ecological aspects require a more intensive focus on sustainability in industrial contexts. The aim is an economic and ecological increase in efficiency. On the other hand, there is an exceptional technology-push in industrial practice. This technology-push has already influenced daily routine in private areas. Buzzwords are Web 2.0, Apps,",
"title": ""
},
{
"docid": "f53e743819b577a5460e17910907fb11",
"text": "The Bitcoin network relies on peer-to-peer broadcast to distribute pending transactions and confirmed blocks. The topology over which this broadcast is distributed affects which nodes have advantages and whether some attacks are feasible. As such, it is particularly important to understand not just which nodes participate in the Bitcoin network, but how they are connected. In this paper, we introduce AddressProbe, a technique that discovers peer-to-peer links in Bitcoin, and apply this to the live topology. To support AddressProbe and other tools, we develop CoinScope, an infrastructure to manage short, but large-scale experiments in Bitcoin. We analyze the measured topology to discover both highdegree nodes and a well connected giant component. Yet, efficient propagation over the Bitcoin backbone does not necessarily result in a transaction being accepted into the block chain. We introduce a “decloaking” method to find influential nodes in the topology that are well connected to a mining pool. Our results find that in contrast to Bitcoin’s idealized vision of spreading mining responsibility to each node, mining pools are prevalent and hidden: roughly 2% of the (influential) nodes represent threequarters of the mining power.",
"title": ""
},
{
"docid": "827493ff47cff1defaeafff2ef180dce",
"text": "We present a static analysis algorithm for detecting security vulnerabilities in PHP, a popular server-side scripting language for building web applications. Our analysis employs a novel three-tier architecture to capture information at decreasing levels of granularity at the intrablock, intraprocedural, and interprocedural level. This architecture enables us to handle dynamic features unique to scripting languages such as dynamic typing and code inclusion, which have not been adequately addressed by previous techniques. We demonstrate the effectiveness of our approach by running our tool on six popular open source PHP code bases and finding 105 previously unknown security vulnerabilities, most of which we believe are remotely exploitable.",
"title": ""
},
{
"docid": "500acf5a68d09d817a1ef5a6759a65de",
"text": "OBJECTIVE\nNon-adherence to DMARDs is common, but little is known about adherence to biologic therapies and its relationship to treatment response. The purpose of this study was to investigate the association between self-reported non-adherence to s.c. anti-TNF therapy and response in individuals with RA.\n\n\nMETHODS\nParticipants about to start s.c. anti-TNF therapy were recruited to a large UK multicentre prospective observational cohort study. Demographic information and disease characteristics were assessed at baseline. Self-reported non-adherence, defined as whether the previous due dose of biologic therapy was reported as not taken on the day agreed with the health care professional, was recorded at 3 and 6 months following the start of therapy. The 28-joint DAS (DAS28) was recorded at baseline and following 3 and 6 months of therapy. Multivariate linear regression was used to examine these relationships.\n\n\nRESULTS\nThree hundred and ninety-two patients with a median disease duration of 7 years [interquartile range (IQR) 3-15] were recruited. Adherence data were available in 286 patients. Of these, 27% reported non-adherence to biologic therapy according to the defined criteria at least once within the first 6-month period. In multivariate linear regression analysis, older age, lower baseline DAS28 and ever non-adherence at either 3 or 6 months from baseline were significantly associated with a poorer DAS28 response at 6 months to anti-TNF therapy.\n\n\nCONCLUSION\nPatients with RA who reported not taking their biologic on the day agreed with their health care professional showed poorer clinical outcomes than their counterparts, emphasizing the need to investigate causes of non-adherence to biologics.",
"title": ""
},
{
"docid": "a9b5b2cde37cb2403660d435a305dad1",
"text": "Recent development of large-scale question answering (QA) datasets triggered a substantial amount of research into end-toend neural architectures for QA. Increasingly complex systems have been conceived without comparison to a simpler neural baseline system that would justify their complexity. In this work, we propose a simple heuristic that guided the development of FastQA, an efficient endto-end neural model for question answering that is very competitive with existing models. We further demonstrate, that an extended version (FastQAExt) achieves state-of-the-art results on recent benchmark datasets, namely SQuAD, NewsQA and MsMARCO, outperforming most existing models. However, we show that increasing the complexity of FastQA to FastQAExt does not yield any systematic improvements. We argue that the same holds true for most existing systems that are similar to FastQAExt. A manual analysis reveals that our proposed heuristic explains most predictions of our model, which indicates that modeling a simple heuristic is enough to achieve strong performance on extractive QA datasets. The overall strong performance of FastQA puts results of existing, more complex models into perspective.",
"title": ""
},
{
"docid": "34084a12a4437c3d2126b06ffbf8c734",
"text": "OBJECTIVE\nThe psychopathy checklist-revised (PCL-R; Hare, 1991, 2003) is often used to assess risk of violence, perhaps based on the assumption that it captures emotionally detached individuals who are driven to prey upon others. This study is designed to assess the relation between (a) core interpersonal and affective traits of psychopathy and impulsive antisociality on the one hand and (b) the risk of future violence and patterns of motivation for past violence on the other.\n\n\nMETHOD\nA research team reliably assessed a sample of 158 male offenders for psychopathy, using both the interview-based PCL-R and the self-report psychopathic personality inventory (PPI: Lilienfeld & Andrews, 1996). Then, a second independent research team assessed offenders' lifetime patterns of violence and their motivation. After these baseline assessments, offenders were followed in prison or the community for up to 1 year to assess their involvement in 3 different forms of violence. Baseline and follow-up assessments included both interviews and reviews of official records.\n\n\nRESULTS\nFirst, the PPI manifested incremental validity in predicting future violence over the PCL-R (but not vice versa)-and most of its predictive power derived solely from impulsive antisociality. Second, impulsive antisociality-not interpersonal and affective traits specific to psychopathy-were uniquely associated with instrumental lifetime patterns of past violence. The latter psychopathic traits are narrowly associated with deficits in motivation for violence (e.g., lack of fear or lack of provocation).\n\n\nCONCLUSIONS\nThese findings and their consistency with some past research led us to advise against making broad generalizations about the relation between psychopathy and violence.",
"title": ""
}
] |
scidocsrr
|
03b6dca5eb57f80d3b232ce763984443
|
Context-aware Argument Mining and Its Applications in Education
|
[
{
"docid": "3f6a61bf0c3b9c81d24951ed8fa39b04",
"text": "In this paper, we consider argument mining as the task of buil ding a formal representation for an argumentative piece of text. Our goal is to provide a criti cal survey of the literature on both the resulting representations (i.e., argument diagrammin g techniques) and on the various aspects of the automatic analysis process. For representation, we a lso provide a synthesized proposal of a scheme that combines advantages from several of the earlier approaches; in addition, we discuss the relationship between representing argument structure and the rhetorical structure of texts in the sense of Mann and Thompsons (1988) RST. Then, for the argu ment mining problem, we also cover the literature on closely-related tasks that have bee n tackled in Computational Linguistics, because we think that these can contribute to more powerful a rg ment mining systems than the first prototypes that were built in recent years. The paper co ncludes with our suggestions for the major challenges that should be addressed in the field of argu ment mining.",
"title": ""
},
{
"docid": "7723c78b2ff8f9fdc285ee05b482efef",
"text": "We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications.",
"title": ""
},
{
"docid": "80b173cf8dbd0bc31ba8789298bab0fa",
"text": "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.",
"title": ""
}
] |
[
{
"docid": "c9cae26169a89ad8349889b3fd221d32",
"text": "Dense kernel matrices Θ ∈ RN×N obtained from point evaluations of a covariance function G at locations {xi}1≤i≤N arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green’s functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S ⊂ {1, . . . , N} × {1, . . . , N}, with #S = O(N log(N) log(N/ )), such that the zero fill-in incomplete Cholesky factorisation of Θi,j1(i,j)∈S is an -approximation of Θ. This blockfactorisation can provably be obtained in complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time. The algorithm only needs to know the spatial configuration of the xi and does not require an analytic representation of G. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a solver for elliptic PDE with complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time.",
"title": ""
},
{
"docid": "4ee7762bba9a145f82b6a595d781739d",
"text": "CONTEXT\nAttention-deficit/hyperactivity disorder (ADHD) is the most prevalent psychiatric disorder of childhood. There is considerable evidence that brain dopamine is involved in ADHD, but it is unclear whether dopamine activity is enhanced or depressed.\n\n\nOBJECTIVE\nTo test the hypotheses that striatal dopamine activity is depressed in ADHD and that this contributes to symptoms of inattention.\n\n\nDESIGN\nClinical (ADHD adult) and comparison (healthy control) subjects were scanned with positron emission tomography and raclopride labeled with carbon 11 (D2/D3 receptor radioligand sensitive to competition with endogenous dopamine) after placebo and after intravenous methylphenidate hydrochloride (stimulant that increases extracellular dopamine by blocking dopamine transporters). The difference in [11C]raclopride's specific binding between placebo and methylphenidate was used as marker of dopamine release. Symptoms were quantified using the Conners Adult ADHD Rating Scales.\n\n\nSETTING\nOutpatient setting.\n\n\nPARTICIPANTS\nNineteen adults with ADHD who had never received medication and 24 healthy controls.\n\n\nRESULTS\nWith the placebo, D2/D3 receptor availability in left caudate was lower (P < .05) in subjects with ADHD than in controls. Methylphenidate induced smaller decrements in [11C]raclopride binding in left and right caudate (blunted DA increases) (P < .05) and higher scores on self-reports of \"drug liking\" in ADHD than in control subjects. The blunted response to methylphenidate in caudate was associated with symptoms of inattention (P < .05) and with higher self-reports of drug liking (P < .01). Exploratory analysis using statistical parametric mapping revealed that methylphenidate also decreased [11C]raclopride binding in hippocampus and amygdala and that these decrements were smaller in subjects with ADHD (P < .001).\n\n\nCONCLUSIONS\nThis study reveals depressed dopamine activity in caudate and preliminary evidence in limbic regions in adults with ADHD that was associated with inattention and with enhanced reinforcing responses to intravenous methylphenidate. This suggests that dopamine dysfunction is involved with symptoms of inattention but may also contribute to substance abuse comorbidity in ADHD.",
"title": ""
},
{
"docid": "e754c7c7821703ad298d591a3f7a3105",
"text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.",
"title": ""
},
{
"docid": "0257589dc59f1ddd4ec19a2450e3156f",
"text": "Drawing upon the literatures on beliefs about magical contagion and property transmission, we examined people's belief in a novel mechanism of human-to-human contagion, emotional residue. This is the lay belief that people's emotions leave traces in the physical environment, which can later influence others or be sensed by others. Studies 1-4 demonstrated that Indians are more likely than Americans to endorse a lay theory of emotions as substances that move in and out of the body, and to claim that they can sense emotional residue. However, when the belief in emotional residue is measured implicitly, both Indians and American believe to a similar extent that emotional residue influences the moods and behaviors of those who come into contact with it (Studies 5-7). Both Indians and Americans also believe that closer relationships and a larger number of people yield more detectable residue (Study 8). Finally, Study 9 demonstrated that beliefs about emotional residue can influence people's behaviors. Together, these finding suggest that emotional residue is likely to be an intuitive concept, one that people in different cultures acquire even without explicit instruction.",
"title": ""
},
{
"docid": "037c6208dd71882a870bd8c5a0eb64bc",
"text": "Off-policy learning is key to scaling up reinforcement learning as it allows to learn about a target policy from the experience generated by a different behavior policy. Unfortunately, it has been challenging to combine off-policy learning with function approximation and multi-step bootstrapping in a way that leads to both stable and efficient algorithms. In this work, we show that the TREE BACKUP and RETRACE algorithms are unstable with linear function approximation, both in theory and in practice with specific examples. Based on our analysis, we then derive stable and efficient gradient-based algorithms using a quadratic convex-concave saddle-point formulation. By exploiting the problem structure proper to these algorithms, we are able to provide convergence guarantees and finite-sample bounds. The applicability of our new analysis also goes beyond TREE BACKUP and RETRACE and allows us to provide new convergence rates for the GTD and GTD2 algorithms without having recourse to projections or Polyak averaging.",
"title": ""
},
{
"docid": "4438015370e500c4bcdc347b3e332538",
"text": "This article provides a tutorial introduction to modeling, estimation, and control for multirotor aerial vehicles that includes the common four-rotor or quadrotor case.",
"title": ""
},
{
"docid": "16c6e41746c451d66b43c5736f622cda",
"text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.",
"title": ""
},
{
"docid": "33c38bd7444164fb1539da573da3db25",
"text": "Axial endplay problems often occur in electrical machines even in the conventional skew motor. In order to solve these problems, an improved skew rotor is proposed to weaken the ill-effect of the conventional skew motor by skewing the slots toward reverse directions. The space distributions of magnetic flux field and the Maxwell stress tensor on the rotor surfaces are analyzed by an analytical method. The time-step finite-element 3-D whole model of a novel skew squirrel-cage induction motor is presented for verification. The results indicate that the radial and the axial forces decrease, but the rotary torque remains unchanged. The validity of the improved method is verified by means of the comparison with the conventional one.",
"title": ""
},
{
"docid": "05eb1af3e6838640b6dc5c1c128cc78a",
"text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.",
"title": ""
},
{
"docid": "162a4cab1ea0bd1e9b8980a57df7c2bf",
"text": "This paper investigates the design of power and spectrally efficient coded modulations based on amplitude phase shift keying (APSK) with application to broadband satellite communications. Emphasis is put on 64APSK constellations. The APSK modulation has merits for digital transmission over nonlinear satellite channels due to its power and spectral efficiency combined with its inherent robustness against nonlinear distortion. This scheme has been adopted in the DVB-S2 Standard for satellite digital video broadcasting. Assuming an ideal rectangular transmission pulse, for which no nonlinear inter-symbol interference is present and perfect pre-compensation of the nonlinearity takes place, we optimize the 64APSK constellation design by employing an optimization criterion based on the mutual information. This method generates an optimum constellation for each operating SNR point, that is, for each spectral efficiency. Two separate cases of interest are particularly examined: (i) the equiprobable case, where all constellation points are equiprobable and (ii) the non-equiprobable case, where the constellation points on each ring are assumed to be equiprobable but the a priory symbol probability associated per ring is assumed different for each ring. Following the mutual information-based optimization approach in each case, detailed simulation results are obtained for the optimal 64APSK constellation settings as well as the achievable shaping gain.",
"title": ""
},
{
"docid": "8f1cb692121899bb63e98f9a6ab3000e",
"text": "Magnet material prices has become an uncertain factor for electric machine development. Most of all, the output of ironless axial flux motors equipped with Halbach magnet arrays depend on the elaborated magnetic flux. Therefore, possibilities to reduce the manufacturing cost without negatively affecting the performance are studied in this paper. Both magnetostatic and transient 3D finite element analyses are applied to compare flux density distribution, elaborated output torque and induced back EMF. It is shown, that the proposed magnet shapes and magnetization pattern meet the requirements. Together with the assembly and measurements of functional linear Halbach magnet arrays, the prerequisite for the manufacturing of axial magnet arrays for an ironless in-wheel hub motor are given.",
"title": ""
},
{
"docid": "9955e5a03700d432098be9118faebe61",
"text": "Enterprise Resource Planning (ERP) systems are integrated, enterprise-wide systems that provide automated support for standard business processes within organisations. They have been adopted by organisations throughout the world with varying degrees of success. Implementing ERP systems is a complex, lengthy and expensive process. In this paper we synthesise an ERP systems implementation process model and a set of critical success factors for ERP systems implementation. Two case studies of ERP systems implementation, one in Australia and one in China are reported. The case studies identify which critical success factors are important in which process model phases. Case study analysis then explains the differences between the Australian and Chinese cases using national cultural characteristics. Outcomes of the research are important for multinational organisations implementing ERP systems and for consulting companies assisting with ERP systems implementation in different countries.",
"title": ""
},
{
"docid": "6465daca71e18cb76ec5442fb94f625a",
"text": "In this paper, we show how an open-source, language-independent proofreading tool has been built. Many languages lack contextual proofreading tools; for many others, only partial solutions are available. Using existing, largely language-independent tools and collaborative processes it is possible to develop a practical style and grammar checker and to fight the digital divide in countries where commercial linguistic application software is unavailable or too expensive for average users. The described solution depends on relatively easily available language resources and does not require a fully formalized grammar nor a deep parser, yet it can detect many frequent context-dependent spelling mistakes, as well as grammatical, punctuation, usage, and stylistic errors. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "051c530bf9d49bf1066ddf856488dff1",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "0165273958cc8385d371024e89f87d15",
"text": "Traditional, persistent data-oriented approaches in computer forensics face some limitations regarding a number of technological developments, e.g., rapidly increasing storage capabilities of hard drives, memory-resident malicious software applications, or the growing use of encryption routines, that make an in-time investigation more and more difficult. In order to cope with these issues, security professionals have started to examine alternative data sources and emphasize the value of volatile system information in RAM more recently. In this paper, we give an overview of the prevailing techniques and methods to collect and analyze a computer's memory. We describe the characteristics, benefits, and drawbacks of the individual solutions and outline opportunities for future research in this evolving field of IT security. Highlights Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "a3a40060ee2b30047de3081315e70df6",
"text": "OBJECTIVE\nTo develop evidence-based recommendations for the management of fibromyalgia syndrome.\n\n\nMETHODS\nA multidisciplinary task force was formed representing 11 European countries. The design of the study, including search strategy, participants, interventions, outcome measures, data collection and analytical method, was defined at the outset. A systematic review was undertaken with the keywords \"fibromyalgia\", \"treatment or management\" and \"trial\". Studies were excluded if they did not utilise the American College of Rheumatology classification criteria, were not clinical trials, or included patients with chronic fatigue syndrome or myalgic encephalomyelitis. Primary outcome measures were change in pain assessed by visual analogue scale and fibromyalgia impact questionnaire. The quality of the studies was categorised based on randomisation, blinding and allocation concealment. Only the highest quality studies were used to base recommendations on. When there was insufficient evidence from the literature, a Delphi process was used to provide basis for recommendation.\n\n\nRESULTS\n146 studies were eligible for the review. 39 pharmacological intervention studies and 59 non-pharmacological were included in the final recommendation summary tables once those of a lower quality or with insufficient data were separated. The categories of treatment identified were antidepressants, analgesics, and \"other pharmacological\" and exercise, cognitive behavioural therapy, education, dietary interventions and \"other non-pharmacological\". In many studies sample size was small and the quality of the study was insufficient for strong recommendations to be made.\n\n\nCONCLUSIONS\nNine recommendations for the management of fibromyalgia syndrome were developed using a systematic review and expert consensus.",
"title": ""
},
{
"docid": "f584b2d89bacacf31158496460d6f546",
"text": "Significant advances in clinical practice as well as basic and translational science were presented at the American Transplant Congress this year. Topics included innovative clinical trials to recent advances in our basic understanding of the scientific underpinnings of transplant immunology. Key areas of interest included the following: clinical trials utilizing hepatitis C virus-positive (HCV+ ) donors for HCV- recipients, the impact of the new allocation policies, normothermic perfusion, novel treatments for desensitization, attempts at precision medicine, advances in xenotransplantation, the role of mitochondria and exosomes in rejection, nanomedicine, and the impact of the microbiota on transplant outcomes. This review highlights some of the most interesting and noteworthy presentations from the meeting.",
"title": ""
},
{
"docid": "8aaa4ab4879ad55f43114cf8a0bd3855",
"text": "Photo-based activity on social networking sites has recently been identified as contributing to body image concerns. The present study aimed to investigate experimentally the effect of number of likes accompanying Instagram images on women's own body dissatisfaction. Participants were 220 female undergraduate students who were randomly assigned to view a set of thin-ideal or average images paired with a low or high number of likes presented in an Instagram frame. Results showed that exposure to thin-ideal images led to greater body and facial dissatisfaction than average images. While the number of likes had no effect on body dissatisfaction or appearance comparison, it had a positive effect on facial dissatisfaction. These effects were not moderated by Instagram involvement, but greater investment in Instagram likes was associated with more appearance comparison and facial dissatisfaction. The results illustrate how the uniquely social interactional aspects of social media (e.g., likes) can affect body image.",
"title": ""
},
{
"docid": "57f3bb106406bf6a6f37dd7d7a8c7ef9",
"text": "Finding new uses for existing drugs, or drug repositioning, has been used as a strategy for decades to get drugs to more patients. As the ability to measure molecules in high-throughput ways has improved over the past decade, it is logical that such data might be useful for enabling drug repositioning through computational methods. Many computational predictions for new indications have been borne out in cellular model systems, though extensive animal model and clinical trial-based validation are still pending. In this review, we show that computational methods for drug repositioning can be classified in two axes: drug based, where discovery initiates from the chemical perspective, or disease based, where discovery initiates from the clinical perspective of disease or its pathology. Newer algorithms for computational drug repositioning will likely span these two axes, will take advantage of newer types of molecular measurements, and will certainly play a role in reducing the global burden of disease.",
"title": ""
},
{
"docid": "acdcbe0db79f9d822acacd83c0e91b01",
"text": "In this paper, we consider how speech interfaces can be combined with a direct manipulation interface to virtual reality. We outline the beneets of adding a speech interface, the requirements it imposes on speech recognition, language processing and interaction design. We describe the multimodal DIVERSE system which provides a speech interface to virtual worlds modelled in DIVE. This system can be extended to provide better management of the interaction as well as innovative functionality which allow users to talk directly to agents in the virtual world.",
"title": ""
}
] |
scidocsrr
|
4eee6c993e6ec33f8606545d4e9fa0b5
|
The role of human ventral visual cortex in motion perception
|
[
{
"docid": "0e00972207bfcb02e18fded1b3408214",
"text": "Accumulating neuropsychological, electrophysiological and behavioural evidence suggests that the neural substrates of visual perception may be quite distinct from those underlying the visual control of actions. In other words, the set of object descriptions that permit identification and recognition may be computed independently of the set of descriptions that allow an observer to shape the hand appropriately to pick up an object. We propose that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects.",
"title": ""
}
] |
[
{
"docid": "b1a440cb894c1a76373bdbf7ff84318d",
"text": "We present a language-theoretic approach to symbolic model checking of PCTL over discrete-time Markov chains. The probability with which a path formula is satisfied is represented by a regular expression. A recursive evaluation of the regular expression yields an exact rational value when transition probabilities are rational, and rational functions when some probabilities are left unspecified as parameters of the system. This allows for parametric model checking by evaluating the regular expression for different parameter values, for instance, to study the influence of a lossy channel in the overall reliability of a randomized protocol.",
"title": ""
},
{
"docid": "391f9b889b1c3ffe3e8ee422d108edcd",
"text": "Does the brain of a bilingual process language differently from that of a monolingual? We compared how bilinguals and monolinguals recruit classic language brain areas in response to a language task and asked whether there is a neural signature of bilingualism. Highly proficient and early-exposed adult Spanish-English bilinguals and English monolinguals participated. During functional magnetic resonance imaging (fMRI), participants completed a syntactic sentence judgment task [Caplan, D., Alpert, N., & Waters, G. Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541552, 1998]. The sentences exploited differences between Spanish and English linguistic properties, allowing us to explore similarities and differences in behavioral and neural responses between bilinguals and monolinguals, and between a bilingual's two languages. If bilinguals' neural processing differs across their two languages, then differential behavioral and neural patterns should be observed in Spanish and English. Results show that behaviorally, in English, bilinguals and monolinguals had the same speed and accuracy, yet, as predicted from the Spanish-English structural differences, bilinguals had a different pattern of performance in Spanish. fMRI analyses revealed that both monolinguals (in one language) and bilinguals (in each language) showed predicted increases in activation in classic language areas (e.g., left inferior frontal cortex, LIFC), with any neural differences between the bilingual's two languages being principled and predictable based on the morphosyntactic differences between Spanish and English. However, an important difference was that bilinguals had a significantly greater increase in the blood oxygenation level-dependent signal in the LIFC (BA 45) when processing English than the English monolinguals. The results provide insight into the decades-old question about the degree of separation of bilinguals' dual-language representation. The differential activation for bilinguals and monolinguals opens the question as to whether there may possibly be a neural signature of bilingualism. Differential activation may further provide a fascinating window into the language processing potential not recruited in monolingual brains and reveal the biological extent of the neural architecture underlying all human language.",
"title": ""
},
{
"docid": "82c4aa6bc189e011556ca7aa6d1688b9",
"text": "Two aspects of children’s early gender development the spontaneous production of gender labels and sex-typed play were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children’s gender labeling as based on mothers’ biweekly reports on their children’s language from 9 through 21 months. Videotapes of children’s play both alone and with mother at 17 and 21 months were independently analyzed for play with gender stereotyped and neutral toys. Finally, the relation between gender labeling and sex-typed play was examined. Children transitioned to using gender labels at approximately 19 months on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in sex-typed play, suggesting that knowledge of gender categories might influence sex-typing before the age of 2.",
"title": ""
},
{
"docid": "056f5179fa5c0cdea06d29d22a756086",
"text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017",
"title": ""
},
{
"docid": "9ce0f142369a3ceafe7d9a074ff7a5e8",
"text": "This paper uses a mixed-methods approach to examine the relation between online academic disclosure and academic performance. A multi-ethnic sample of college students (N = 261; male = 66; female = 195; M age 22 years) responded to open-ended questions about their Facebook use. Thematic analysis revealed that over 14% of the Facebook wall posts/status updates (N = 714) contained academic themes; positive states were more frequent than negative and neutral states and students with lower GPAs expressed negative states more often. A path analysis suggested that academic performance may determine college students’ Facebook use, rather than the reverse. Implications for student support services",
"title": ""
},
{
"docid": "f63503eb721aa7c1fd6b893c2c955fdf",
"text": "In 2008, financial tsunami started to impair the economic development of many countries, including Taiwan. The prediction of financial crisis turns to be much more important and doubtlessly holds public attention when the world economy goes to depression. This study examined the predictive ability of the four most commonly used financial distress prediction models and thus constructed reliable failure prediction models for public industrial firms in Taiwan. Multiple discriminate analysis (MDA), logit, probit, and artificial neural networks (ANNs) methodology were employed to a dataset of matched sample of failed and non-failed Taiwan public industrial firms during 1998–2005. The final models are validated using within sample test and out-of-the-sample test, respectively. The results indicated that the probit, logit, and ANN models which used in this study achieve higher prediction accuracy and possess the ability of generalization. The probit model possesses the best and stable performance. However, if the data does not satisfy the assumptions of the statistical approach, then the ANN approach would demonstrate its advantage and achieve higher prediction accuracy. In addition, the models which used in this study achieve higher prediction accuracy and possess the ability of generalization than those of [Altman, Financial ratios—discriminant analysis and the prediction of corporate bankruptcy using capital market data, Journal of Finance 23 (4) (1968) 589–609, Ohlson, Financial ratios and the probability prediction of bankruptcy, Journal of Accounting Research 18 (1) (1980) 109–131, and Zmijewski, Methodological issues related to the estimation of financial distress prediction models, Journal of Accounting Research 22 (1984) 59–82]. In summary, the models used in this study can be used to assist investors, creditors, managers, auditors, and regulatory agencies in Taiwan to predict the probability of business failure. & 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3831c1b7b1679f6e158d6a17e47df122",
"text": "Social media platforms provide an inexpensive communication medium that allows anyone to quickly reach millions of users. Consequently, in these platforms anyone can publish content and anyone interested in the content can obtain it, representing a transformative revolution in our society. However, this same potential of social media systems brings together an important challenge---these systems provide space for discourses that are harmful to certain groups of people. This challenge manifests itself with a number of variations, including bullying, offensive content, and hate speech. Specifically, authorities of many countries today are rapidly recognizing hate speech as a serious problem, specially because it is hard to create barriers on the Internet to prevent the dissemination of hate across countries or minorities. In this paper, we provide the first of a kind systematic large scale measurement and analysis study of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.",
"title": ""
},
{
"docid": "96b270cf4799d041217ee3e071383ab1",
"text": "Cluster analysis has been widely used in several disciplines, such as statistics, software engineering, biology, psychology and other social sciences, in order to identify natural groups in large amounts of data. Clustering has also been widely adopted by researchers within computer science and especially the database community. K-means is the most famous clustering algorithms. In this paper, the performance of basic k means algorithm is evaluated using various distance metrics for iris dataset, wine dataset, vowel dataset, ionosphere dataset and crude oil dataset by varying no of clusters. From the result analysis we can conclude that the performance of k means algorithm is based on the distance metrics for selected database. Thus, this work will help to select suitable distance metric for particular application.",
"title": ""
},
{
"docid": "583623f15d855131d190fcef37839999",
"text": "Service providers want to reduce datacenter costs by consolidating workloads onto fewer servers. At the same time, customers have performance goals, such as meeting tail latency Service Level Objectives (SLOs). Consolidating workloads while meeting tail latency goals is challenging, especially since workloads in production environments are often bursty. To limit the congestion when consolidating workloads, customers and service providers often agree upon rate limits. Ideally, rate limits are chosen to maximize the number of workloads that can be co-located while meeting each workload's SLO. In reality, neither the service provider nor customer knows how to choose rate limits. Customers end up selecting rate limits on their own in some ad hoc fashion, and service providers are left to optimize given the chosen rate limits.\n This paper describes WorkloadCompactor, a new system that uses workload traces to automatically choose rate limits simultaneously with selecting onto which server to place workloads. Our system meets customer tail latency SLOs while minimizing datacenter resource costs. Our experiments show that by optimizing the choice of rate limits, WorkloadCompactor reduces the number of required servers by 30--60% as compared to state-of-the-art approaches.",
"title": ""
},
{
"docid": "3abcfd48703b399404126996ca837f90",
"text": "Various inductive loads used in all industries deals with the problem of power factor improvement. Capacitor bank connected in shunt helps in maintaining the power factor closer to unity. They improve the electrical supply quality and increase the efficiency of the system. Also the line losses are also reduced. Shunt capacitor banks are less costly and can be installed anywhere. This paper deals with shunt capacitor bank designing for power factor improvement considering overvoltages for substation installation. Keywords— Capacitor Bank, Overvoltage Consideration, Power Factor, Reactive Power",
"title": ""
},
{
"docid": "2a67a524cb3279967207b1fa8748cd04",
"text": "Recent work in Information Retrieval (IR) using Deep Learning models has yielded state of the art results on a variety of IR tasks. Deep neural networks (DNN) are capable of learning ideal representations of data during the training process, removing the need for independently extracting features. However, the structures of these DNNs are often tailored to perform on specific datasets. In addition, IR tasks deal with text at varying levels of granularity from single factoids to documents containing thousands of words. In this paper, we examine the role of the granularity on the performance of common state of the art DNN structures in IR.",
"title": ""
},
{
"docid": "95ee34da123289b9c538471844e39d8c",
"text": "Population-level analyses often use average quantities to describe heterogeneous systems, particularly when variation does not arise from identifiable groups. A prominent example, central to our current understanding of epidemic spread, is the basic reproductive number, R0, which is defined as the mean number of infections caused by an infected individual in a susceptible population. Population estimates of R0 can obscure considerable individual variation in infectiousness, as highlighted during the global emergence of severe acute respiratory syndrome (SARS) by numerous ‘superspreading events’ in which certain individuals infected unusually large numbers of secondary cases. For diseases transmitted by non-sexual direct contacts, such as SARS or smallpox, individual variation is difficult to measure empirically, and thus its importance for outbreak dynamics has been unclear. Here we present an integrated theoretical and statistical analysis of the influence of individual variation in infectiousness on disease emergence. Using contact tracing data from eight directly transmitted diseases, we show that the distribution of individual infectiousness around R0 is often highly skewed. Model predictions accounting for this variation differ sharply from average-based approaches, with disease extinction more likely and outbreaks rarer but more explosive. Using these models, we explore implications for outbreak control, showing that individual-specific control measures outperform population-wide measures. Moreover, the dramatic improvements achieved through targeted control policies emphasize the need to identify predictive correlates of higher infectiousness. Our findings indicate that superspreading is a normal feature of disease spread, and to frame ongoing discussion we propose a rigorous definition for superspreading events and a method to predict their frequency.",
"title": ""
},
{
"docid": "385fc1f02645d4d636869317cde6d35e",
"text": "Events and their coreference offer useful semantic and discourse resources. We show that the semantic and discourse aspects of events interact with each other. However, traditional approaches addressed event extraction and event coreference resolution either separately or sequentially, which limits their interactions. This paper proposes a document-level structured learning model that simultaneously identifies event triggers and resolves event coreference. We demonstrate that the joint model outperforms a pipelined model by 6.9 BLANC F1 and 1.8 CoNLL F1 points in event coreference resolution using a corpus in the biology domain.",
"title": ""
},
{
"docid": "4f22574e4397c24663a6bbf1ff6a97ed",
"text": "Systemic elastorrhexis is a multisystem genetic disorder characterised by dystrophic mineralization of soft connective tissues in a number of organs, including the skin, the eyes and the arterial blood vessels. Although the eye and skin findings have for years attracted the attention of ophthalmologists and dermatologists, the systemic nature of the disorder has not received sufficient attention among internists and many patients with this disorder have undoubtedly been unrecognized. We reported a case of systemic elastorrhexis redressing the diagnosis of vascular leucoencephalopathy of an unknown aetiology for many years.",
"title": ""
},
{
"docid": "2b942943bebdc891a4c9fa0f4ac65a4b",
"text": "A new architecture based on the Multi-channel Convolutional Neural Network (MCCNN) is proposed for recognizing facial expressions. Two hard-coded feature extractors are replaced by a single channel which is partially trained in an unsupervised fashion as a Convolutional Autoencoder (CAE). One additional channel that contains a standard CNN is left unchanged. Information from both channels converges in a fully connected layer and is then used for classification. We perform two distinct experiments on the JAFFE dataset (leave-one-out and ten-fold cross validation) to evaluate our architecture. Our comparison with the previous model that uses hard-coded Sobel features shows that an additional channel of information with unsupervised learning can significantly boost accuracy and reduce the overall training time. Furthermore, experimental results are compared with benchmarks from the literature showing that our method provides state-of-the-art recognition rates for facial expressions. Our method outperforms previously published methods that used hand-crafted features by a large margin.",
"title": ""
},
{
"docid": "f5ea6cbf85b375c920283666657fe24d",
"text": "The link, if any, between creativity and mental illness is one of the most controversial topics in modern creativity research. The present research assessed the relationships between anxiety and depression symptom dimensions and several facets of creativity: divergent thinking, creative self-concepts, everyday creative behaviors, and creative accomplishments. Latent variable models estimated effect sizes and their confidence intervals. Overall, measures of anxiety, depression, and social anxiety predicted little variance in creativity. Few models explained more than 3% of the variance, and the effect sizes were small and inconsistent in direction.",
"title": ""
},
{
"docid": "3f831b7881e2044d1c3a7ac08f9f0047",
"text": "Although the refinement of laboresque technologies that save farm labor continues, its boom (in terms of sheer number of machines) passed during the quarter-century lifetime of Technology in Society. Instead, landesque technology, which spares land, holds the spotlight. Landesque is exemplified by high-yielding varieties, the Green Revolution, and genetically modified organisms. The contribution of landesque technologies to national performance can be charted on a plane with the dual dimensions of sustainability: 1) present need and 2) environmental impact. In the dimension of need, national crop production has increased. In the dimension of environmental impact, landesque technology plus consumption that increases more slowly than income has countered population and wealth to steer national journeys toward sustainability. On the sustainability plane, the genius to discover new landesque technology and the courage to apply it can steer nations toward still greater production without veering toward higher impact. # 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "087b1951ec35db6de6f4739404277913",
"text": "A possible scenario for the evolution of Television Broadcast is the adoption of 8 K resolution video broadcasting. To achieve the required bit-rates MIMO technologies are an actual candidate. In this scenario, this paper collected electric field levels from a MIMO experimental system for TV broadcasting to tune the parameters of the ITU-R P.1546 propagation model, which has been employed to model VHF and UHF broadcast channels. The parameters are tuned for each polarization alone and for both together. This is done considering multiple reception points and also a larger capturing time interval for a fixed reception site. Significant improvements on the match between the actual and measured link budget are provided by the optimized parameters.",
"title": ""
},
{
"docid": "695a0e8ba9556afde6b22f29399616ba",
"text": "Microstrip lines (MSL) are widely used in microwave systems because of its low cost, light weight, and easy integration with other components. Substrate integrated waveguides (SIW), which inherit the advantages from traditional rectangular waveguides without their bulky configuration, aroused recently in low loss and high power planar applications. This chapter proposed the design and modeling of transitions between these two common structures. Research motives will be described firstly in the next subsection, followed by a literature survey on the proposed MSL to SIW transition structures. Outlines of the following sections in this chapter will also be given in the end of this section.",
"title": ""
},
{
"docid": "667ab06617b7f51896175692e21c11f0",
"text": "ÐThis is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective. Index TermsÐInformation visualization, graph visualization, graph drawing, navigation, focus+context, fish-eye, clustering.",
"title": ""
}
] |
scidocsrr
|